Skill Name
optimization/context_optimizer
What should this skill do?
Large context windows (e.g., 2M tokens) make it possible to pass entire documents to a model, but doing so is slow and expensive for repetitive tasks. This skill sits directly in front of the LLM: given a large document and a specific query, it runs a fast, low-cost local relevance search (a mini RAG cycle, typically over embeddings) and returns only the most relevant sections to the main agent, cutting token costs dramatically on looping tasks.
Ideal Inputs & Outputs
Input:
{
  "document_text": "[... 500 pages of text ...]",
  "agent_goal": "Find the jurisdiction clauses for data handling.",
  "max_tokens_return": 2000
}
Output:
{
  "optimized_context": "[... Only the 3 pages relevant to jurisdiction & data handling ...]",
  "reduction_percentage": "94%"
}
Targeted Models (if applicable)
Model Agnostic (All)