
fix(llm_client): remove response_format json_object for local LLM compatibility #228

Closed
ChinmayShringi wants to merge 1 commit into main from fix/lm-studio-json-object-compat

Conversation

@ChinmayShringi
Owner

Problem

chat_json() passes response_format={"type": "json_object"}, but LM Studio and Ollama do not support this value (they accept only json_schema or text), so API calls fail when a local LLM backend is used.
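For illustration, a minimal sketch of the request payload before and after this change (the model name and dict shape are assumptions, not the actual chat_json() code):

```python
messages = [{"role": "user", "content": "Respond with JSON only."}]

# Before: chat_json() included response_format, which LM Studio and
# Ollama reject (they accept only "json_schema" or "text"):
before = {
    "model": "local-model",  # assumption: placeholder model name
    "messages": messages,
    "response_format": {"type": "json_object"},
}

# After: the parameter is dropped entirely; JSON output is requested
# via the prompt and recovered by the markdown cleanup step instead.
after = {"model": "local-model", "messages": messages}

assert "response_format" not in after
```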

Related references:

Solution

Remove response_format from chat_json(). The method already has robust markdown code fence cleanup logic (L93-97) that correctly parses JSON from raw LLM output, making response_format unnecessary.
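The fence-cleanup step this relies on can be sketched as follows (function name and exact logic are illustrative; the real implementation in llm_client.py may differ):

```python
import json

def parse_json_response(raw: str) -> dict:
    """Sketch: strip a leading ```/```json fence line and a trailing
    ``` fence, then parse the remainder with json.loads."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence line (``` or ```json)
        text = text.split("\n", 1)[1] if "\n" in text else ""
        # Drop the closing fence if present
        if text.rstrip().endswith("```"):
            text = text.rstrip()[:-3]
    return json.loads(text)
```

With cleanup like this, a raw reply such as `` ```json\n{"a": 1}\n``` `` and a bare `{"a": 1}` both parse to the same dict, which is why the API-level response_format hint is not needed.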

This follows the same approach as commit 985f89f (handling <think> tags from reasoning models) — improving compatibility with diverse model outputs.
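The `<think>`-tag handling referenced above can be sketched in the same spirit (a regex approach is assumed here; the actual commit may do this differently):

```python
import re

def strip_think_tags(raw: str) -> str:
    """Remove <think>...</think> blocks that some reasoning models
    emit before the actual answer, so the remainder can be parsed."""
    return re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()
```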

Changed Files

  • backend/app/utils/llm_client.py: Remove response_format in chat_json(), rely on prompt instructions + markdown cleanup

Testing

  • LM Studio + qwen3.5-9b: Full prediction pipeline (ontology → graph → prepare → simulate → report) passes
  • Does not affect OpenAI API users (removing response_format is backwards compatible)

Original PR: 666ghj/MiroFish#122
Original Author: @ImL1s

@ChinmayShringi ChinmayShringi added the LLM API Issues related to LLM API integration label Mar 17, 2026