Encountering an issue when integrating the GPT-OSS models (20B and 120B) into the current pipeline. The error is specifically related to the `finish_reason` field in the response from the OpenAI chat completions endpoint.
Error:

```
ERROR: Agent 'CellDescriptionAgent' failed after 1 attempts. Last error: Invalid response from OpenAI chat completions endpoint: 1 validation error for ChatCompletion
choices.0.finish_reason
  Input should be 'stop', 'length', 'tool_calls', 'content_filter', or 'function_call' [type=literal_error, input_value=None, input_type=NoneType]
    For further information, visit https://errors.pydantic.dev/2.11/v/literal_error
```
Observations:
- This error only occurs when using the GPT-OSS models (20B and 120B).
- Other OpenAI models (GPT-5, GPT-5.1) are working as expected without issues.
- The error specifically pertains to `finish_reason` being returned as `None`, which is rejected by the `pydantic` model validation.

Providers tried: Groq, Ollama, and OpenRouter (the error occurs with all three).
Possible Causes:
- `finish_reason` validation: The `finish_reason` value is either missing or set to `None` in the response from the GPT-OSS models. The OpenAI client expects `finish_reason` to be one of `'stop'`, `'length'`, `'tool_calls'`, `'content_filter'`, or `'function_call'`, so a `None` value fails validation.
- Provider behavior: The different providers return `null` (i.e. `None`) for `finish_reason` with these models, while responses for other OpenAI models include a valid value.
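For illustration, the rejection can be reproduced with a minimal `pydantic` model that mirrors the `Literal` constraint on `finish_reason` (a simplified sketch, not the SDK's actual `Choice` model):

```python
from typing import Literal

from pydantic import BaseModel, ValidationError


# Simplified stand-in for the SDK's Choice model: finish_reason must be
# one of the five literal strings, so a None value is rejected outright.
class Choice(BaseModel):
    finish_reason: Literal[
        "stop", "length", "tool_calls", "content_filter", "function_call"
    ]


try:
    Choice(finish_reason=None)
except ValidationError as exc:
    # Same literal_error the pipeline logs when GPT-OSS returns null.
    print(exc.errors()[0]["type"])  # literal_error
```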
Attempts to resolve:
- Multiple providers tested: The issue persists across Groq, Ollama, and OpenRouter.
- Monkeypatching `pydantic_ai`: Attempted to modify `pydantic_ai` to accept `None` for `finish_reason`, but this resulted in empty responses instead of the expected output.
Potential Next Steps:
- Custom middleware: A custom HTTP client middleware that intercepts the OpenRouter response and injects `finish_reason` where it is missing.
- Priority: This issue is not high priority at the moment, given the time already spent troubleshooting, but it will be revisited.
Any insights or suggestions would be appreciated.