Mateo/dev 25 write connecting arcade tools to your llm page #595
Conversation
```python
        "content": tool_result,
    })
    continue
```
Bug: Missing assistant message before tool results in history
When the LLM returns tool calls, the code appends tool result messages to history but never appends the assistant message that contained the tool_calls. The OpenAI API (and compatible APIs like OpenRouter) requires the assistant message with tool_calls to appear in the conversation history before the corresponding tool result messages. This will cause an API error on the next iteration of the loop when the malformed history is sent back to the model.
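A minimal sketch of the fix, with mocked-up stand-ins for the OpenAI-style response objects so the ordering logic can be shown without a live API call (`execute_tool` and the exact field names on `message` are assumptions based on the snippet above):

```python
from types import SimpleNamespace

# Hypothetical tool executor standing in for the real Arcade tool call.
def execute_tool(tool_call):
    return f"result of {tool_call.function.name}"

# Mock of an assistant turn that requested one tool call.
message = SimpleNamespace(
    content=None,
    tool_calls=[
        SimpleNamespace(
            id="call_1",
            function=SimpleNamespace(name="get_weather", arguments='{"city": "Paris"}'),
        )
    ],
)

history = [{"role": "user", "content": "What's the weather in Paris?"}]

# Fix: first append the assistant message that carried the tool_calls...
history.append({
    "role": "assistant",
    "content": message.content,
    "tool_calls": [
        {
            "id": tc.id,
            "type": "function",
            "function": {"name": tc.function.name, "arguments": tc.function.arguments},
        }
        for tc in message.tool_calls
    ],
})

# ...then the tool results, each referencing its tool_call_id.
for tc in message.tool_calls:
    history.append({
        "role": "tool",
        "tool_call_id": tc.id,
        "content": execute_tool(tc),
    })

print([m["role"] for m in history])  # → ['user', 'assistant', 'tool']
```

With this ordering, the next iteration of the loop sends a history the API will accept: every `tool` message is preceded by the `assistant` message whose `tool_calls` it answers.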
Additional Locations (1)
nearestnabors
left a comment
I need more openrouter tokens to actually test that it works, but wanted to share what I have so far!
```mdx
import { SignupLink } from "@/app/_components/analytics";

# Connect Arcade to your LLM
```
No intro text? Why would someone do this? When? Where? How? What are they connecting?
```
OPENROUTER_API_KEY=YOUR_OPENROUTER_API_KEY
OPENROUTER_MODEL=YOUR_OPENROUTER_MODEL
```
You might want to remind folks where they can get their Arcade User ID
Co-authored-by: RL "Nearest" Nabors <236306+nearestnabors@users.noreply.github.com>
```python
# Print the latest assistant response
assistant_response = history[-1]["content"]
print(f"\n🤖 Assistant: {assistant_response}\n")
```
Bug: Tool result shown as assistant response when max_turns exhausted
When invoke_llm exhausts max_turns while the assistant is still making tool calls, the function returns with a tool response as the last history item. The chat() function then accesses history[-1]["content"] and prints it prefixed with "🤖 Assistant:", displaying raw tool output as if it were the assistant's response. This produces confusing output when many consecutive tool calls are needed.
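One possible guard, sketched with assumed variable names matching the snippet above: check the role of the last history entry before labeling it as the assistant's reply.

```python
# Simulated history after invoke_llm exhausted max_turns mid-tool-call:
# the last entry is a tool result, not an assistant message.
history = [
    {"role": "user", "content": "hi"},
    {"role": "tool", "tool_call_id": "call_1", "content": "raw tool output"},
]

last = history[-1]
if last["role"] == "assistant":
    reply = f"🤖 Assistant: {last['content']}"
else:
    # max_turns ran out while the model was still calling tools.
    reply = "⚠️ Stopped after max_turns while tool calls were still pending."

print(reply)
```

This keeps raw tool output from being presented as the assistant's words when the turn budget is exhausted.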
Preview here: https://docs-git-mateo-dev-25-write-connecting-arcade-a84eaa-arcade-ai.vercel.app/en/home/connect-arcade-to-your-llm
Note
Adds a new guide for integrating Arcade tools into LLM apps and updates the docs navigation to include it.
- Adds `app/en/home/connect-arcade-to-your-llm/page.mdx`, a guide covering uv, environment config, and OpenRouter.
- Adds a `"connect-arcade-to-your-llm"` entry in `app/en/home/_meta.tsx` under Guides with the title "Integrate Arcade tools into your LLM application".

Written by Cursor Bugbot for commit 1065b5d. This will update automatically on new commits.