Prevent oversized `run_pymol_command` tool results from crashing the Claude SDK JSON message reader when a PyMOL command prints a very large payload (for example `print(cmd.get_session())`).
Problem
While using PyMolAI interactively, a tool invocation that printed `cmd.get_session()` produced a huge JSON tool result. The Claude SDK message reader then failed with:
Fatal error in message reader: Failed to decode JSON: JSON message exceeded maximum buffer size of 10485760 bytes...
This leaves the AI turn unusable and requires cancelling/restarting the session.
Root Cause
`run_pymol_command` payloads (especially `feedback_lines`) are returned to the Claude SDK MCP tool server without a transport-size guard. If a command emits a large textual result, the serialized JSON tool result can exceed the SDK buffer limit.
Fix
- Add `_sdk_safe_run_command_payload(...)` to shrink oversized `run_pymol_command` payloads before sending them to the SDK.
- Preserve key metadata (`ok`, `command`, `error`) while truncating large `feedback_lines` to a preview.
- Mark truncated payloads with `truncated=True` and `truncation_reason="sdk_buffer_guard"`.
- Emit a warning log when truncation is applied.
Why this approach
This is a defensive transport-layer safeguard. It avoids a hard failure in the agent loop while preserving enough context for the model and UI to continue.
Repro (before fix)
1. Ask the AI to run a Python tool command that prints a large object, e.g. `cmd.get_session()`.
2. The tool result JSON exceeds the Claude SDK reader buffer.
3. The SDK reader crashes with a max-buffer decode error.
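The failure mode in the steps above can be reproduced without PyMOL with a simplified stand-in for the SDK's message reader (the real reader's internals differ; only the size-limit behavior is modeled here):

```python
import json

MAX_BUFFER = 10_485_760  # the default limit quoted in the error message


def read_message(raw: str) -> dict:
    """Simplified stand-in for the SDK's JSON message reader:
    it refuses to decode any single message larger than its buffer."""
    if len(raw.encode("utf-8")) > MAX_BUFFER:
        raise RuntimeError(
            f"JSON message exceeded maximum buffer size of {MAX_BUFFER} bytes"
        )
    return json.loads(raw)


# A session-dump-sized tool result (~20 MB serialized) trips the limit,
# while an ordinary tool result decodes fine.
huge = json.dumps({"feedback_lines": ["x" * 100] * 200_000})
```

Any single tool result whose serialized form crosses the limit kills the reader, which is why the fix truncates before serialization rather than after.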
Validation
- `python -m py_compile modules/pymol/ai/runtime.py`
- Manual local repro path (large tool output) no longer overflows the SDK transport payload after the truncation guard.
I am looking into this; my assumption was that the Claude Agent SDK should be taking care of this (batteries included was part of the reason for using the SDK).
It turns out the SDK does handle long outputs and saves them to a file (as most harnesses do for longer outputs).
The error you faced was caused by a message exceeding the reader's buffer size, which I had already increased to 10 MB by default. I tried to reproduce it with the commands you suggested, but was not able to hit that already-high default.
I suggest increasing PYMOL_AI_SDK_MAX_BUFFER_SIZE in runtime.py (line 117) to see if that fixes your issue.
I ran a local repro to validate the buffer issue. With default PYMOL_AI_SDK_MAX_BUFFER_SIZE=10,485,760, a ~200KB tool output serializes to 200,104 chars and does not hit the limit, which explains why your test passes. A ~12MB output serializes to 12,000,103 chars and exceeds the SDK message buffer threshold. With this patch, the same oversized payload is reduced to 128,288 chars with truncated=true and truncation_reason=sdk_buffer_guard, so transport stays below the SDK limit while preserving a useful preview/metadata.
I think this is because commands like `print(cmd.get_session())` can dump the full PyMOL session state (objects, settings, coordinates, etc.), so the serialized tool-result JSON can exceed 10 MB on larger scenes.
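The size figures above can be sanity-checked with a quick sketch. The wrapper keys below are placeholders rather than the SDK's actual message schema, so exact character counts will differ slightly from the ones reported:

```python
import json

BUFFER_LIMIT = 10_485_760  # default PYMOL_AI_SDK_MAX_BUFFER_SIZE


def serialized_size(output: str) -> int:
    # Rough size of the tool-result JSON the SDK reader must buffer;
    # the envelope keys are illustrative, not the SDK's real schema.
    return len(json.dumps({"type": "tool_result", "content": output}))


small = serialized_size("x" * 200_000)      # ~200 KB tool output: under the limit
large = serialized_size("x" * 12_000_000)   # ~12 MB tool output: over the limit
```

This matches the observed behavior: a ~200 KB output stays comfortably under the buffer, while a ~12 MB session dump overflows it regardless of envelope overhead.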
Sorry for the delay in dealing with this; I had gotten busy. I will run some tests again and merge.