
Conversation

@torresmateo (Collaborator) commented Dec 10, 2025

Preview here: https://docs-git-mateo-dev-25-write-connecting-arcade-a84eaa-arcade-ai.vercel.app/en/home/connect-arcade-to-your-llm


Note

Adds a new guide for integrating Arcade tools into LLM apps and updates the docs navigation to include it.

  • Docs:
    • New guide: app/en/home/connect-arcade-to-your-llm/page.mdx
      • Step-by-step setup using uv, environment config, and OpenRouter.
      • Example Python code: tool catalog retrieval from Arcade, auth + execution helper, and multi-turn agent loop with tool calls (a rough sketch of that loop follows this list).
      • Includes runnable chat loop and sample prompts.
    • Navigation: Adds "connect-arcade-to-your-llm" entry in app/en/home/_meta.tsx under Guides with title "Integrate Arcade tools into your LLM application".
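
For orientation, here is a minimal sketch of the loop shape the description above refers to, not the guide's exact code. It assumes OpenRouter's OpenAI-compatible chat completions endpoint and the env var names from the snippet reviewed below; `execute_arcade_tool` is a hypothetical stand-in for the guide's auth + execution helper.

```python
import json
import os

from openai import OpenAI

# OpenRouter exposes an OpenAI-compatible API, so the standard client works.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)


def invoke_llm(history, tools, max_turns=10):
    """Run the model until it stops calling tools or max_turns is hit."""
    for _ in range(max_turns):
        response = client.chat.completions.create(
            model=os.environ["OPENROUTER_MODEL"],
            messages=history,
            tools=tools,  # Arcade tool catalog, formatted as OpenAI tools
        )
        message = response.choices[0].message
        if not message.tool_calls:
            # Plain text answer: record it and stop looping.
            history.append({"role": "assistant", "content": message.content})
            return history
        # Keep the assistant's tool_calls message in history (see the
        # Bugbot comment below), then append one result per call.
        history.append({
            "role": "assistant",
            "content": message.content,
            "tool_calls": [tc.model_dump() for tc in message.tool_calls],
        })
        for tc in message.tool_calls:
            # execute_arcade_tool is a hypothetical stand-in for the
            # guide's auth + execution helper.
            result = execute_arcade_tool(
                tc.function.name, json.loads(tc.function.arguments)
            )
            history.append({
                "role": "tool",
                "tool_call_id": tc.id,
                "content": str(result),
            })
    return history
```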

Written by Cursor Bugbot for commit 1065b5d.

@vercel vercel bot commented Dec 10, 2025

The latest updates on your projects:

Project: docs | Deployment: Ready | Review: Preview, Comment | Updated (UTC): Dec 16, 2025 6:06pm

"content": tool_result,
})

continue

Bug: Missing assistant message before tool results in history

When the LLM returns tool calls, the code appends tool result messages to history but never appends the assistant message that contained the tool_calls. The OpenAI API (and compatible APIs like OpenRouter) requires the assistant message with tool_calls to appear in the conversation history before the corresponding tool result messages. This will cause an API error on the next iteration of the loop when the malformed history is sent back to the model.
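
A minimal sketch of the fix, assuming the loop uses the OpenAI Python client's message objects; `history` and `tool_result` mirror the hunk above, and `run_tool` is a hypothetical executor standing in for the guide's helper:

```python
# Inside the agent loop, after the model responds:
message = response.choices[0].message
if message.tool_calls:
    # Append the assistant message that carries the tool_calls BEFORE any
    # "tool" result messages; OpenAI-compatible APIs reject a tool message
    # whose tool_call_id has no preceding assistant tool_calls message.
    history.append({
        "role": "assistant",
        "content": message.content,
        "tool_calls": [tc.model_dump() for tc in message.tool_calls],
    })
    for tool_call in message.tool_calls:
        tool_result = run_tool(tool_call)  # hypothetical executor
        history.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": tool_result,
        })
    # loop back to the model with the now well-formed history
```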

Additional Locations (1)

@nearestnabors (Contributor) left a comment:

I need more OpenRouter tokens to actually test that it works, but wanted to share what I have so far!

```mdx
import { SignupLink } from "@/app/_components/analytics";

# Connect Arcade to your LLM
```


No intro text? Why would someone do this? When? Where? How? What are they connecting?

```bash
OPENROUTER_API_KEY=YOUR_OPENROUTER_API_KEY
OPENROUTER_MODEL=YOUR_OPENROUTER_MODEL
```


You might want to remind folks where they can get their Arcade User ID

torresmateo and others added 2 commits December 16, 2025 14:58
Co-authored-by: RL "Nearest" Nabors <236306+nearestnabors@users.noreply.github.com>
Co-authored-by: RL "Nearest" Nabors <236306+nearestnabors@users.noreply.github.com>

```python
# Print the latest assistant response
assistant_response = history[-1]["content"]
print(f"\n🤖 Assistant: {assistant_response}\n")
```

Bug: Tool result shown as assistant response when max_turns exhausted

When invoke_llm exhausts max_turns while the assistant is still making tool calls, the function returns with a tool response as the last history item. The chat() function then accesses history[-1]["content"] and prints it prefixed with "🤖 Assistant:", displaying raw tool output as if it were the assistant's response. This produces confusing output when many consecutive tool calls are needed.
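
A minimal sketch of a guard for the print above; the role check is the key part, and the fallback wording is only an example:

```python
# Only present the last message as the assistant's reply if it actually
# came from the assistant; otherwise max_turns ran out mid-tool-call.
last_message = history[-1]
if last_message["role"] == "assistant" and last_message["content"]:
    print(f"\n🤖 Assistant: {last_message['content']}\n")
else:
    print("\n🤖 Assistant: (stopped early: tool calls still in progress "
          "after max_turns)\n")
```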

Additional Locations (1)
