
Fix: harden Ollama request handling and response validation#233

Open
Aryama-srivastav wants to merge 2 commits into fireform-core:main from Aryama-srivastav:#228

Conversation

@Aryama-srivastav

Description

This PR fixes reliability issues in the Ollama request path used by LLM extraction.

Previously, the request flow could hang indefinitely without a timeout, surface low-context errors on transport or HTTP failures, and fail unpredictably when the response payload was malformed or missing the response key.

This change makes the request path resilient and explicit:

  • if network is slow → request times out with a clear TimeoutError
  • if HTTP is non-2xx → raises contextual RuntimeError with field context
  • if request fails at transport layer → raises contextual RuntimeError
  • if JSON is malformed → raises contextual ValueError
  • if response key is missing → raises explicit ValueError
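The error-handling contract above can be sketched as follows. This is an illustrative sketch only: the function name, payload shape, model name, and timeout value are assumptions, not the exact code in src/llm.py.

```python
import requests

def query_ollama(ollama_url, field, prompt, timeout=30.0):
    """Illustrative sketch of the hardened request path (names are assumptions)."""
    try:
        resp = requests.post(
            ollama_url,
            json={"model": "llama3", "prompt": prompt, "stream": False},
            timeout=timeout,  # slow network -> requests.exceptions.Timeout
        )
    except requests.exceptions.Timeout as e:
        raise TimeoutError(
            f"Request to Ollama timed out for field '{field}' at {ollama_url}: {e}"
        ) from e
    except requests.exceptions.RequestException as e:
        # transport-layer failure (connection refused, DNS error, ...)
        raise RuntimeError(
            f"Request to Ollama failed for field '{field}' at {ollama_url}: {e}"
        ) from e

    if not resp.ok:  # non-2xx HTTP status
        raise RuntimeError(
            f"Ollama returned HTTP {resp.status_code} for field '{field}'."
        )

    try:
        payload = resp.json()
    except ValueError as e:  # covers json.JSONDecodeError
        raise ValueError(
            f"Ollama returned invalid JSON for field '{field}': {e}"
        ) from e

    if "response" not in payload:
        raise ValueError(
            f"Ollama reply for field '{field}' is missing the 'response' key."
        )
    return payload["response"]
```

Note that the Timeout clause must come before the general RequestException clause, since Timeout subclasses RequestException.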

Fixes #228

Type of change

Bug fix (non-breaking change which fixes an issue)

How Has This Been Tested?

Test A (timeout path):
Ran:
python -m pytest test_llm_reliability.py -q -k timeout

Verified behavior:
  • Raises TimeoutError
  • Error message includes field and Ollama URL context

Test B (HTTP error path):
Ran:
python -m pytest test_llm_reliability.py -q -k http_error

Verified behavior:
  • Raises RuntimeError for HTTP failure
  • Message includes field-specific context

Test C (invalid JSON path):
Ran:
python -m pytest test_llm_reliability.py -q -k invalid_json

Verified behavior:
  • Raises ValueError with invalid JSON context

Test D (missing response key path):
Ran:
python -m pytest test_llm_reliability.py -q -k missing_response_key

Verified behavior:
  • Raises ValueError indicating missing response key

Full focused test run:
Ran:
python -m pytest test_llm_reliability.py -q

Verified output:
4 passed in 0.16s
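One of the focused tests might be structured as below. This is a sketch with an inline stand-in for the real request path, since the actual contents of test_llm_reliability.py are not shown here; the function names are assumptions.

```python
import pytest
import requests

def fetch_field(ollama_url, field):
    """Stand-in for the request path under test (illustrative, not src/llm.py)."""
    try:
        resp = requests.post(ollama_url, json={"prompt": field}, timeout=30)
    except requests.exceptions.Timeout as e:
        raise TimeoutError(
            f"Request to Ollama timed out for field '{field}' at {ollama_url}: {e}"
        ) from e
    return resp.json()["response"]

def test_timeout(monkeypatch):
    # Simulate a slow network without needing a real Ollama server
    def fake_post(*args, **kwargs):
        raise requests.exceptions.Timeout("timed out")
    monkeypatch.setattr(requests, "post", fake_post)
    with pytest.raises(TimeoutError, match="timed out"):
        fetch_field("http://localhost:11434/api/generate", "title")
```

Patching requests.post keeps the tests hermetic: each failure mode is injected directly, so the suite runs in well under a second with no network access.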

Test Configuration:

Firmware version: N/A
Hardware: Local development machine (Windows)
SDK: N/A
Python: 3.13
Shell: PowerShell

Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged and published in downstream modules

Copilot AI review requested due to automatic review settings March 12, 2026 17:24

Copilot AI left a comment


Pull request overview

Hardens the Ollama request path used by LLM.main_loop() to avoid hangs and to provide clearer, field-aware failures when Ollama is slow, unavailable, or returns a bad response, and adds focused regression tests for those failure modes.

Changes:

  • Add a request timeout and more specific exception handling/validation for the Ollama request + JSON response parsing.
  • Add reliability tests covering timeout, HTTP error, invalid JSON, and missing response key scenarios.
  • Update the sample input text used by the project.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.

File Description
src/llm.py Adds timeout + explicit error handling and response validation for the Ollama call in main_loop().
src/test/test_llm_reliability.py Adds tests to validate behavior for timeout/HTTP/JSON/missing-key failure paths.
src/inputs/input.txt Replaces the example input text used for extraction.


Comment on lines +71 to +74
    except requests.exceptions.Timeout:
        raise TimeoutError(
            f"Request to Ollama timed out for field '{field}' at {ollama_url}."
        )

Copilot AI Mar 12, 2026


requests.exceptions.Timeout is caught without binding the original exception, and the raised TimeoutError message drops the underlying timeout message (e.g., "timed out"). This will make the new timeout test brittle/failing and also loses useful debugging context; capture the exception (e.g., as e), include str(e) in the message, and chain it (raise ... from e).

Suggested change
-   except requests.exceptions.Timeout:
-       raise TimeoutError(
-           f"Request to Ollama timed out for field '{field}' at {ollama_url}."
-       )
+   except requests.exceptions.Timeout as e:
+       raise TimeoutError(
+           f"Request to Ollama timed out for field '{field}' at {ollama_url}: {e}"
+       ) from e

Comment on lines 66 to 70
    except requests.exceptions.ConnectionError:
        raise ConnectionError(
            f"Could not connect to Ollama at {ollama_url}. "
            "Please ensure Ollama is running and accessible."
        )

Copilot AI Mar 12, 2026


The connection error path raises a built-in ConnectionError without including the field context, and it differs from the PR description which states transport-layer failures should raise a contextual RuntimeError. Consider including the field name and chaining the original exception (and/or using a consistent exception type across transport/HTTP failures) so callers can handle failures uniformly.
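The reviewer's suggestion could be sketched as follows; the wrapper name and timeout value are hypothetical, and this is one possible shape, not the merged code.

```python
import requests

def post_with_context(ollama_url, field, payload):
    """Surface all transport failures uniformly, preserving the original cause."""
    try:
        return requests.post(ollama_url, json=payload, timeout=30)
    except requests.exceptions.Timeout as e:
        raise TimeoutError(
            f"Request to Ollama timed out for field '{field}' at {ollama_url}: {e}"
        ) from e
    except requests.exceptions.RequestException as e:
        # ConnectionError and other transport errors become one contextual type;
        # `from e` keeps the original traceback reachable via __cause__
        raise RuntimeError(
            f"Request to Ollama failed for field '{field}' at {ollama_url}: {e}"
        ) from e
```

With a single wrapper like this, callers can catch one exception type for all transport failures while still seeing the field and URL context in the message.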



Development

Successfully merging this pull request may close these issues.

[BUG]: Ollama request path lacks timeout and robust error handling
