Conversation

@katwinkl3 (Contributor) commented Dec 23, 2025

Key Info

What changed?

Added hardcoded validation for openai_args to check that messages is present as a key. Nothing else is checked, since LiteLLM raises an exception if an unsupported param is passed in.
Referenced: https://docs.litellm.ai/docs/completion/input

What do you want the reviewer(s) to focus on?


Checklist

  • Did you link the GitHub issue?
  • Did you follow deployment steps or bump the version if needed?
  • Did you add/update tests?
  • What QA did you do?
    • Added tests

… openai param being passed in isn't supported
@katwinkl3 force-pushed the katwinkl3/validate_completion_params branch from 866718e to 3230d76 on December 23, 2025 at 19:32
@kelsey-wong (Contributor) left a comment:

sorry I had these comments but never submitted them!

REQUIRED_CHAT_COMPLETION_PARAMS: FrozenSet[str] = frozenset({"messages"})


def _validate_chat_completion_params(params: CompletionParams) -> None:

should we also check that messages is non-empty? (i'm not sure if a chat completion can be generated without any messages)


and can we also validate that the message items are properly formatted, e.g. include role and content?

https://docs.litellm.ai/docs/completion/input#required-fields

@@ -0,0 +1,53 @@
"""Validation helpers for chat completion parameter dictionaries."""

# from litellm import get_supported_openai_params

delete unused comment?

Comment on lines 32 to 33
if role != "assistant" or function_call is None:
raise ValueError(f"messages[{index}]['content'] must be a string.")

Can this error message be more specific/helpful? Not sure of the best phrasing, but something like the suggestion below. Also, it looks like there might be a logical error in the condition: is it never okay for function_call to be None when content is None? It would be helpful to explain that in the error message, probably by splitting the conditions:

Suggested change
if role != "assistant" or function_call is None:
raise ValueError(f"messages[{index}]['content'] must be a string.")
if role != "assistant":
raise ValueError(f"Non-assistant message at index {index} must have content.")
if function_call is None:
raise ValueError(f"messages[{index}] must have either content or function_call key.")

@katwinkl3 (Contributor, Author) replied:

Added this logic after referring to litellm's required-field explanation: "content: string or list[dict] or null - The contents of the message. It is required for all messages, but may be null for assistant messages with function calls." I'll split the if logic up with more fitting error messages.

@huiwengoh (Collaborator) commented:

There is some interaction between this PR and the new API, I believe. The changes should not be huge, but I just wanted to make a note: #31

katwinkl3 and others added 2 commits January 7, 2026 13:40
Co-authored-by: Kelsey Wong <kelsey.wong@cleanlab.ai>