
fix: robust JSON extraction for mixed LLM responses#225

Closed
ChinmayShringi wants to merge 1 commit into main from fix/issue-64

Conversation

@ChinmayShringi
Owner

## Summary

Harden backend JSON parsing for LLM responses so mixed outputs (markdown fences, pre/post text) are handled more robustly, reducing the 500 errors reported during ontology generation.

## Changes

- Updated `LLMClient.chat()` to remove `<think ...>...</think>` tags case-insensitively
- Added `LLMClient._extract_json_payload()` to normalize and extract JSON from noisy model responses
- Updated `chat_json()` to parse the extracted payload instead of the raw content
- Added unit tests in `backend/tests/test_llm_client_json_extract.py` covering fenced JSON and mixed-text extraction

## Testing

- Added targeted unit tests for the extraction behavior
- Could not execute the tests in this environment because backend dev dependencies are not installed (pytest and flask are missing)

Fixes 666ghj/MiroFish#64
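For reviewers, a minimal sketch of what the two helpers described above could look like. The method names (`chat()`, `_extract_json_payload()`) come from this PR, but the bodies below are an assumption for illustration, not the merged implementation; the fallback order (fenced block first, then the widest brace/bracket span) is one plausible strategy:

```python
import json
import re

# Hypothetical sketch; strategy and regexes are assumptions, not the PR's code.
THINK_TAG_RE = re.compile(r"<think\b[^>]*>.*?</think>", re.IGNORECASE | re.DOTALL)
FENCE_RE = re.compile(r"```(?:json)?\s*(.*?)```", re.DOTALL)


def strip_think_tags(content: str) -> str:
    """Remove <think ...>...</think> blocks case-insensitively, as chat() does."""
    return THINK_TAG_RE.sub("", content)


def extract_json_payload(content: str) -> str:
    """Pull a JSON object or array out of a noisy model response.

    Tries, in order: a fenced ```json block, then the outermost
    {...} or [...] span that parses as valid JSON.
    """
    content = strip_think_tags(content).strip()
    fenced = FENCE_RE.search(content)
    if fenced:
        return fenced.group(1).strip()
    # Fall back to the widest brace/bracket span that is valid JSON.
    for open_ch, close_ch in (("{", "}"), ("[", "]")):
        start = content.find(open_ch)
        end = content.rfind(close_ch)
        if start != -1 and end > start:
            candidate = content[start:end + 1]
            try:
                json.loads(candidate)
                return candidate
            except json.JSONDecodeError:
                continue
    return content  # let the caller's json.loads raise a clear error
```

With helpers like these, `chat_json()` can call `json.loads(extract_json_payload(raw))` instead of parsing the raw content, which is the behavioral change the unit tests exercise.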


Original PR: 666ghj/MiroFish#124
Original Author: @SergioChan

@ChinmayShringi ChinmayShringi added the LLM API Issues related to LLM API integration label Mar 17, 2026