fix: Validate the completion params that are passed as input to the API #3 #17
base: main
Conversation
… openai param being passed in isn't supported
866718e to 3230d76
kelsey-wong left a comment
Sorry, I had these comments but never submitted them!
```python
REQUIRED_CHAT_COMPLETION_PARAMS: FrozenSet[str] = frozenset({"messages"})


def _validate_chat_completion_params(params: CompletionParams) -> None:
```
Should we also check that messages is non-empty? (I'm not sure whether a chat completion can be generated without any messages.)
And can we also validate that the message items are properly formatted, e.g. that they include role and content?
https://docs.litellm.ai/docs/completion/input#required-fields
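For reference, a minimal sketch of what such a per-message check could look like (the function name `_validate_messages` and the exact error wording are assumptions, not code from this PR):

```python
def _validate_messages(messages: list) -> None:
    """Sketch: require a non-empty list of dicts that each carry a role and some content."""
    if not messages:
        raise ValueError("'messages' must be a non-empty list.")
    for index, message in enumerate(messages):
        if not isinstance(message, dict):
            raise ValueError(f"messages[{index}] must be a dict.")
        if "role" not in message:
            raise ValueError(f"messages[{index}] is missing the 'role' key.")
        if "content" not in message and "function_call" not in message:
            raise ValueError(
                f"messages[{index}] must include 'content' "
                "(or 'function_call' for assistant messages)."
            )
```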
```
@@ -0,0 +1,53 @@
"""Validation helpers for chat completion parameter dictionaries."""


# from litellm import get_supported_openai_params
```
Delete the unused comment?
```python
if role != "assistant" or function_call is None:
    raise ValueError(f"messages[{index}]['content'] must be a string.")
```
Can this error message be more specific/helpful? I'm not sure of the best phrasing, but something like the suggestion below. Also, it seems like there might be a logical error in the condition: is it never okay for function_call to be None when content is None? It would be helpful to explain that in the error message, probably by splitting the conditions.
Suggested change:
```diff
-if role != "assistant" or function_call is None:
-    raise ValueError(f"messages[{index}]['content'] must be a string.")
+if role != "assistant":
+    raise ValueError(f"Non-assistant message at index {index} must have content.")
+if function_call is None:
+    raise ValueError(f"messages[{index}] must have either content or function_call key.")
```
Added this logic after referring to litellm's required-field explanation: "content: string or list[dict] or null - The contents of the message. It is required for all messages, but may be null for assistant messages with function calls." I'll split the if logic up with more fitting error messages.
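For reference, one way the split check might look once role, content, and function_call are pulled from each message (a sketch only; the wrapper function name and error wording are assumptions):

```python
def _validate_message_content(index: int, message: dict) -> None:
    """Sketch: content may be omitted only for assistant messages with a function_call."""
    role = message.get("role")
    content = message.get("content")
    function_call = message.get("function_call")

    if content is None:
        # Per the LiteLLM docs, null content is only allowed for assistant
        # messages that carry a function_call.
        if role != "assistant":
            raise ValueError(f"messages[{index}]['content'] is required for role {role!r}.")
        if function_call is None:
            raise ValueError(
                f"messages[{index}] must include either 'content' or 'function_call'."
            )
    elif not isinstance(content, str):
        raise ValueError(f"messages[{index}]['content'] must be a string.")
```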
There is some interaction between this PR and the new API, I believe. The changes should not be huge, but I just wanted to make a note: #31
Co-authored-by: Kelsey Wong <kelsey.wong@cleanlab.ai>
Key Info
What changed?
Added hardcoded validation for openai_args that checks whether messages is present as a key. Nothing else is checked, since LiteLLM raises an exception if a param being passed in isn't supported.
Referenced https://docs.litellm.ai/docs/completion/input
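A minimal sketch of the hardcoded required-key check described above (the CompletionParams alias and the exact error wording here are assumptions):

```python
from typing import Any, Dict, FrozenSet

# Assumed alias; the PR may define CompletionParams differently.
CompletionParams = Dict[str, Any]

REQUIRED_CHAT_COMPLETION_PARAMS: FrozenSet[str] = frozenset({"messages"})


def _validate_chat_completion_params(params: CompletionParams) -> None:
    """Raise ValueError if a required chat completion key is missing."""
    missing = sorted(key for key in REQUIRED_CHAT_COMPLETION_PARAMS if key not in params)
    if missing:
        raise ValueError(f"Missing required chat completion params: {missing}")
    # Unsupported or unknown params are intentionally not checked here;
    # LiteLLM raises its own exception for those.
```

Per-message validation (role/content formatting) would build on this, as discussed in the review comments above.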
What do you want the reviewer(s) to focus on?
Checklist