Problem (one or two sentences)
When Roo Code serializes structured data (file lists, tool results, memory entries) into LLM prompt context, it uses standard JSON — a verbose format that consumes significantly more tokens than necessary. There is currently no mechanism to substitute a token-efficient encoding format, resulting in avoidable cost and context window pressure at scale.
Context (who is affected and when)
As context windows grow and structured data is injected into prompts, token consumption becomes a cost and performance concern at scale — affecting every agent invocation across potentially hundreds of concurrent jobs.
This investigation is relevant to any Roo Code deployment where structured data is serialized into LLM prompts programmatically.
Desired behavior (conceptual, not technical)
Roo Code (or its CLI) should be capable of serializing structured context data in a token-efficient format before it is injected into the LLM prompt — specifically for uniform arrays of objects (file lists, memory entries, task records, tool results) where TOON's tabular encoding can reduce token usage by ~40% versus standard JSON without loss of LLM comprehension accuracy.
The goal is not to change what the agent receives conceptually, but to reduce the token cost of how that structured data is represented in the prompt.
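To make the savings concrete, here is a minimal sketch of the tabular-encoding idea for uniform object arrays. This is an illustration of the concept only, not the actual TOON library or its API: field names are emitted once in a header, and each element contributes a single values-only row.

```typescript
// Illustrative sketch of a TOON-style tabular encoding for uniform
// arrays of objects. Not the official TOON implementation.
type Row = Record<string, string | number | boolean>;

function encodeTabular(name: string, rows: Row[]): string {
  // Field names appear once, in the header line.
  const fields = Object.keys(rows[0]);
  const header = `${name}[${rows.length}]{${fields.join(",")}}:`;
  // Each row carries only values, so key names are not repeated per element.
  const body = rows.map((r) => "  " + fields.map((f) => String(r[f])).join(","));
  return [header, ...body].join("\n");
}

const files = [
  { path: "src/index.ts", size: 1204 },
  { path: "src/agent.ts", size: 3310 },
];
console.log(encodeTabular("files", files));
// files[2]{path,size}:
//   src/index.ts,1204
//   src/agent.ts,3310
```

Compared with the equivalent JSON, the keys `path` and `size` are serialized once instead of once per element, which is where the claimed token reduction for uniform arrays comes from.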
Constraints / preferences (optional)
No response
Request checklist
Roo Code Task Links (optional)
Identify where in the Roo Code source structured data is serialized into prompt context today (e.g. system prompt construction, context window management, tool result formatting)
Determine if JSON serialization of structured arrays is happening inline in the plugin, in the CLI layer, or assembled externally
Note any existing abstraction points (serializers, context builders, formatters) where a pluggable encoding strategy could be introduced without deep surgery
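The abstraction point the last item asks about could take the shape below. This is a hypothetical sketch; none of these names (`ContextSerializer`, `jsonSerializer`, `serializeContext`) exist in Roo Code today:

```typescript
// Hypothetical pluggable encoding strategy. All names here are
// illustrative assumptions, not existing Roo Code identifiers.
interface ContextSerializer {
  canEncode(value: unknown): boolean; // opt in per value shape
  encode(value: unknown): string;
}

// Always-available fallback: today's behavior, plain JSON.
const jsonSerializer: ContextSerializer = {
  canEncode: () => true,
  encode: (v) => JSON.stringify(v),
};

// A context builder asks the first willing serializer to encode, so a
// token-efficient encoder (e.g. one handling only uniform object
// arrays) could be registered ahead of JSON without deep surgery.
function serializeContext(value: unknown, chain: ContextSerializer[]): string {
  const serializer = chain.find((s) => s.canEncode(value)) ?? jsonSerializer;
  return serializer.encode(value);
}

console.log(serializeContext({ a: 1 }, [jsonSerializer])); // {"a":1}
```

The design choice here is that each serializer declares which shapes it handles, so a tabular encoder can decline non-uniform data and fall back to JSON with no behavior change elsewhere.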
Acceptance criteria (optional)
No response
Proposed approach (optional)
No response
Trade-offs / risks (optional)
No response