diff --git a/.changeset/config.json b/.changeset/config.json new file mode 100644 index 000000000..174cf479e --- /dev/null +++ b/.changeset/config.json @@ -0,0 +1,14 @@ +{ + "$schema": "https://unpkg.com/@changesets/config@3.1.1/schema.json", + "changelog": [ + "@svitejs/changesets-changelog-github-compact", + { "repo": "TanStack/ai" } + ], + "commit": false, + "access": "public", + "baseBranch": "main", + "updateInternalDependencies": "patch", + "fixed": [], + "linked": [], + "ignore": [] +} diff --git a/.cursorignore b/.cursorignore new file mode 100644 index 000000000..faba0f683 --- /dev/null +++ b/.cursorignore @@ -0,0 +1 @@ +**/reference/** \ No newline at end of file diff --git a/.github/FUNDING.yml b/.github/FUNDING.yml new file mode 100644 index 000000000..e2ea8ad3b --- /dev/null +++ b/.github/FUNDING.yml @@ -0,0 +1,3 @@ +# These are supported funding model platforms + +github: tannerlinsley diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml new file mode 100644 index 000000000..67a638a45 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/bug_report.yml @@ -0,0 +1,100 @@ +name: 'šŸ› Bug report' +description: Report a reproducible bug or regression +body: + - type: markdown + attributes: + value: | + Thank you for reporting an issue :pray:. + + This issue tracker is for reporting reproducible bugs or regressions found in [TanStack AI](https://github.com/tanstack/ai). + If you have a question about how to achieve or implement something and are struggling, please post a question + inside of TanStack AI's [Discussions tab](https://github.com/tanstack/ai/discussions) instead of filing an issue.
+ + Before submitting a new bug/issue, please check the links below to see if there is a solution or question posted there already: + - TanStack AI's [Discussions tab](https://github.com/tanstack/ai/discussions) + - TanStack AI's [Open Issues](https://github.com/tanstack/ai/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc) + - TanStack AI's [Closed Issues](https://github.com/tanstack/ai/issues?q=is%3Aissue+sort%3Aupdated-desc+is%3Aclosed) + + The more information you fill in, the better the community can help you. + + - type: input + id: tanstack-ai-version + attributes: + label: TanStack AI version + description: | + - Please let us know the exact version of the TanStack AI framework adapter that you were using when the issue occurred. If you are using an older version, check to see if your bug has already been solved in the latest version. Please don't just put in "latest", as this is subject to change. + - The latest "ai" version is + placeholder: | + e.g. v8.11.6 + validations: + required: true + + - type: input + id: framework-library-version + attributes: + label: Framework/Library version + description: Which framework and what version of that framework are you using? + placeholder: | + e.g. React v17.0.2 + validations: + required: true + + - type: textarea + id: description + attributes: + label: Describe the bug and the steps to reproduce it + description: Provide a clear and concise description of the challenge you are running into, and the steps we should take to try to reproduce your bug. + validations: + required: true + + - type: input + id: link + attributes: + label: Your Minimal, Reproducible Example - (Sandbox Highly Recommended) + description: | + Please add a link to a minimal reproduction. + Note: + - Your bug may get fixed much faster if we can run your code and it doesn't have dependencies other than React. 
+ - To create a shareable code example for web, you can use CodeSandbox (https://codesandbox.io/s/new) or Stackblitz (https://stackblitz.com/). + - Please make sure the example is complete and runnable without prior dependencies and free of unnecessary abstractions. + - Feel free to fork any of the official CodeSandbox examples to reproduce your issue: https://github.com/tanstack/ai/tree/main/examples/ + - For React Native, you can use: https://snack.expo.dev/ + - For TypeScript-related issues only, a TypeScript Playground link might be sufficient: https://www.typescriptlang.org/play + - Please read these tips for providing a minimal example: https://stackoverflow.com/help/mcve. + placeholder: | + e.g. CodeSandbox, Stackblitz, TypeScript Playground, etc. + validations: + required: true + + - type: textarea + id: screenshots_or_videos + attributes: + label: Screenshots or Videos (Optional) + description: | + If applicable, add screenshots or a video to help explain your problem. + For more information on the supported image/file types and the file size limits, please refer + to the following link: https://docs.github.com/en/github/writing-on-github/working-with-advanced-formatting/attaching-files + placeholder: | + You can drag your video or image files inside of this editor ↓ + + - type: dropdown + attributes: + options: + - No, because I do not know how + - No, because I do not have time to dig into it + - Maybe, I'll investigate and start debugging + - Yes, I think I know how to fix it and will discuss it in the comments of this issue + - Yes, I am also opening a PR that solves the problem alongside this issue + label: Do you intend to try to help solve this bug with your own PR? + description: | + If you think you know the cause of the problem, the fastest way to get it fixed is to suggest a fix, or fix it yourself! However, it is OK if you cannot solve this yourself and just want help.
+ - type: checkboxes + id: agrees-to-terms + attributes: + label: Terms & Code of Conduct + description: By submitting this issue, you agree to follow our Code of Conduct and can verify that you have followed the requirements outlined above to the best of your ability. + options: + - label: I agree to follow this project's Code of Conduct + required: true + - label: I understand that if my bug cannot be reliably reproduced in a debuggable environment, it will probably not be fixed and this issue may even be closed. + required: true diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml new file mode 100644 index 000000000..17660b114 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/config.yml @@ -0,0 +1,8 @@ +blank_issues_enabled: false +contact_links: + - name: Feature Requests & Questions + url: https://github.com/TanStack/ai/discussions + about: Please ask and answer questions here. + - name: Community Chat + url: https://discord.gg/mQd7egN + about: A dedicated Discord server hosted by TanStack diff --git a/.github/instructions/copilot-instructions.md b/.github/instructions/copilot-instructions.md index 10df81bd1..65be40908 100644 --- a/.github/instructions/copilot-instructions.md +++ b/.github/instructions/copilot-instructions.md @@ -1,28 +1,30 @@ ---- -applyTo: '**' ---- -Provide project context and coding guidelines that AI should follow when generating code, answering questions, or reviewing changes. - -Whenever you want to build the packages to test if they work you should run `pnpm run build` from the root of the repository. - -If you want to check if the examples work you need to go to `examples/` and run `pnpm run dev`. - -When writing code, please follow these guidelines: - Use TypeScript for all new code. - Ensure all new code is covered by tests. - Do not use `any` type; prefer specific types or generics. - Follow existing code style and conventions.
- -If you get an error "address already in use :::42069 you should kill the process using that port. - -If we add a new functionality add a section about it in the `docs/` folder explaining how to use it and update the `README.md` file to mention it. - -Write tests for any new functionality. - -When defining new types, first check if the types exist somewhere and re-use them, do not create new types that are similar to existing ones. - -When modifying existing functionality, ensure backward compatibility unless there's a strong reason to introduce breaking changes. If breaking changes are necessary, document them clearly in the relevant documentation files. - -When subscribing to an event using `aiEventClient.on` in the devtools packages, always add the option `{ withEventTarget: false }` as the second argument to prevent over-subscriptions in the devtools. - -Under no circumstances should casting `as any` be used in the codebase. Always strive to find or create the appropriate type definitions. Avoid casting unless absolutely neccessary, and even then, prefer using `satisfies` for type assertions to maintain type safety. \ No newline at end of file +--- +applyTo: '**' +--- + +Provide project context and coding guidelines that AI should follow when generating code, answering questions, or reviewing changes. + +Whenever you want to build the packages to test if they work you should run `pnpm run build` from the root of the repository. + +If you want to check if the examples work you need to go to `examples/` and run `pnpm run dev`. + +When writing code, please follow these guidelines: + +- Use TypeScript for all new code. +- Ensure all new code is covered by tests. +- Do not use `any` type; prefer specific types or generics. +- Follow existing code style and conventions. + +If you get an error "address already in use :::42069" you should kill the process using that port.
+ +If we add new functionality, add a section about it in the `docs/` folder explaining how to use it and update the `README.md` file to mention it. + +Write tests for any new functionality. + +When defining new types, first check if the types exist somewhere and re-use them; do not create new types that are similar to existing ones. + +When modifying existing functionality, ensure backward compatibility unless there's a strong reason to introduce breaking changes. If breaking changes are necessary, document them clearly in the relevant documentation files. + +When subscribing to an event using `aiEventClient.on` in the devtools packages, always add the option `{ withEventTarget: false }` as the second argument to prevent over-subscriptions in the devtools. + +Under no circumstances should casting `as any` be used in the codebase. Always strive to find or create the appropriate type definitions. Avoid casting unless absolutely necessary, and even then, prefer using `satisfies` for type assertions to maintain type safety.
diff --git a/.github/workflows/autofix.yml b/.github/workflows/autofix.yml new file mode 100644 index 000000000..4d357d862 --- /dev/null +++ b/.github/workflows/autofix.yml @@ -0,0 +1,31 @@ +name: autofix.ci # needed to securely identify the workflow + +on: + pull_request: + push: + branches: [main, alpha, beta, rc] + +concurrency: + group: ${{ github.workflow }}-${{ github.event.number || github.ref }} + cancel-in-progress: true + +permissions: + contents: read + +jobs: + autofix: + name: autofix + runs-on: ubuntu-latest + steps: + - name: Checkout + uses: actions/checkout@v5.0.0 + - name: Setup Tools + uses: tanstack/config/.github/setup@main + - name: Fix formatting + run: pnpm prettier:write + - name: Regenerate docs + run: pnpm build:all && pnpm docs:generate + - name: Apply fixes + uses: autofix-ci/action@635ffb0c9798bd160680f18fd73371e355b85f27 + with: + commit-message: 'ci: apply automated fixes' diff --git a/.github/workflows/pr.yml b/.github/workflows/pr.yml new file mode 100644 index 000000000..9035de5be --- /dev/null +++ b/.github/workflows/pr.yml @@ -0,0 +1,50 @@ +name: PR + +on: + pull_request: + paths-ignore: + - "docs/**" + - "media/**" + - "**/*.md" + +concurrency: + group: ${{ github.workflow }}-${{ github.event.number || github.ref }} + cancel-in-progress: true + +env: + NX_CLOUD_ACCESS_TOKEN: ${{ secrets.NX_CLOUD_ACCESS_TOKEN }} + +permissions: + contents: read + +jobs: + test: + name: Test + runs-on: ubuntu-latest + steps: + - name: Checkout + uses: actions/checkout@v5.0.0 + with: + fetch-depth: 0 + - name: Setup Tools + uses: tanstack/config/.github/setup@main + - name: Get base and head commits for `nx affected` + uses: nrwl/nx-set-shas@v4.4.0 + with: + main-branch-name: main + - name: Run Checks + run: pnpm run test:pr + preview: + name: Preview + runs-on: ubuntu-latest + steps: + - name: Checkout + uses: actions/checkout@v5.0.0 + with: + fetch-depth: 0 + - name: Setup Tools + uses: tanstack/config/.github/setup@main + - name: Build Packages 
+ run: pnpm run build:all + - name: Publish Previews + run: pnpx pkg-pr-new publish --pnpm './packages/typescript/*' \ No newline at end of file diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml new file mode 100644 index 000000000..0858f39cd --- /dev/null +++ b/.github/workflows/release.yml @@ -0,0 +1,42 @@ +name: Release + +on: + push: + branches: [main, alpha, beta, rc] + +concurrency: + group: ${{ github.workflow }}-${{ github.event.number || github.ref }} + cancel-in-progress: true + +env: + NX_CLOUD_ACCESS_TOKEN: ${{ secrets.NX_CLOUD_ACCESS_TOKEN }} + +permissions: + contents: write + id-token: write + pull-requests: write + +jobs: + release: + name: Release + if: github.repository_owner == 'TanStack' + runs-on: ubuntu-latest + steps: + - name: Checkout + uses: actions/checkout@v5.0.0 + with: + fetch-depth: 0 + - name: Setup Tools + uses: tanstack/config/.github/setup@main + - name: Run Tests + run: pnpm run test:ci + - name: Run Changesets (version or publish) + uses: changesets/action@v1.5.3 + with: + version: pnpm run changeset:version + publish: pnpm run changeset:publish + commit: 'ci: Version Packages' + title: 'ci: Version Packages' + env: + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} + NPM_TOKEN: ${{ secrets.NPM_TOKEN }} diff --git a/.gitignore b/.gitignore index 95f95223f..1dea6473a 100644 --- a/.gitignore +++ b/.gitignore @@ -49,3 +49,4 @@ output/ # My TODOs. Feel free to ignore this. 
*-TODO.md +.nx \ No newline at end of file diff --git a/.npmrc b/.npmrc new file mode 100644 index 000000000..84aee8d99 --- /dev/null +++ b/.npmrc @@ -0,0 +1,3 @@ +link-workspace-packages=true +prefer-workspace-packages=true +provenance=true diff --git a/.nvmrc b/.nvmrc new file mode 100644 index 000000000..1d9b7831b --- /dev/null +++ b/.nvmrc @@ -0,0 +1 @@ +22.12.0 diff --git a/.prettierignore b/.prettierignore new file mode 100644 index 000000000..91a4c871b --- /dev/null +++ b/.prettierignore @@ -0,0 +1,11 @@ +**/.nx/ +**/.nx/cache +**/.svelte-kit +**/build +**/coverage +**/dist +**/docs +pnpm-lock.yaml + +.angular +.github/** diff --git a/CHANGELOG.md b/CHANGELOG.md index 3c664d958..0f1b30c09 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -24,40 +24,6 @@ npm install @tanstack/ai-client **See:** [Package Documentation](packages/typescript/ai-client/README.md) -#### @tanstack/ai-fallback - -**New Package:** Automatic fallback wrapper for trying multiple adapters in sequence. - -**Installation:** - -```bash -npm install @tanstack/ai-fallback -``` - -**Features:** - -- āœ… Try multiple adapters until one succeeds -- āœ… Rate limit protection -- āœ… Cost optimization (try cheap/local first) -- āœ… Error handling with callbacks -- āœ… Works with all AI methods - -**Usage:** - -```typescript -import { ai } from "@tanstack/ai"; -import { fallback, withModel } from "@tanstack/ai-fallback"; - -const openAI = withModel(ai(openai()), { model: "gpt-4" }); -const anthropicAI = withModel(ai(anthropic()), { - model: "claude-3-5-sonnet-20241022", -}); - -const aiWithFallback = fallback([openAI, anthropicAI]); -``` - -**See:** [Package Documentation](packages/typescript/ai-fallback/README.md) - #### @tanstack/ai-react-ui **New Package:** Pre-built React UI components for chat interfaces. 
@@ -204,12 +170,12 @@ import { ChatClient, fetchServerSentEvents, PunctuationStrategy, -} from "@tanstack/ai-client"; +} from '@tanstack/ai-client' const client = new ChatClient({ - connection: fetchServerSentEvents("/api/chat"), + connection: fetchServerSentEvents('/api/chat'), chunkingStrategy: new PunctuationStrategy(), -}); +}) ``` **See:** [Stream Processing Quick Start](packages/typescript/ai-client/docs/STREAM_QUICKSTART.md) @@ -221,13 +187,13 @@ const client = new ChatClient({ **API:** ```typescript -import { ChatClient, fetchServerSentEvents } from "@tanstack/ai-client"; +import { ChatClient, fetchServerSentEvents } from '@tanstack/ai-client' const client = new ChatClient({ - connection: fetchServerSentEvents("/api/chat", { - headers: { Authorization: "Bearer token" }, + connection: fetchServerSentEvents('/api/chat', { + headers: { Authorization: 'Bearer token' }, }), -}); +}) ``` **Benefits:** @@ -245,29 +211,29 @@ const client = new ChatClient({ **With React:** ```typescript -import { useChat, fetchServerSentEvents } from "@tanstack/ai-react"; +import { useChat, fetchServerSentEvents } from '@tanstack/ai-react' const chat = useChat({ - connection: fetchServerSentEvents("/api/chat"), -}); + connection: fetchServerSentEvents('/api/chat'), +}) ``` **Create Custom Adapters:** ```typescript -import type { ConnectionAdapter } from "@tanstack/ai-client"; +import type { ConnectionAdapter } from '@tanstack/ai-client' const wsAdapter: ConnectionAdapter = { async *connect(messages, data) { - const ws = new WebSocket("wss://api.example.com"); + const ws = new WebSocket('wss://api.example.com') // ... 
WebSocket logic }, abort() { - ws.close(); + ws.close() }, -}; +} -const chat = useChat({ connection: wsAdapter }); +const chat = useChat({ connection: wsAdapter }) ``` **Documentation:** @@ -383,16 +349,16 @@ return toStreamResponse(stream); // Exported from @tanstack/ai The `chat()` method now includes an automatic tool execution loop: ```typescript -import { chat, tool, maxIterations } from "@tanstack/ai"; -import { openai } from "@tanstack/ai-openai"; +import { chat, tool, maxIterations } from '@tanstack/ai' +import { openai } from '@tanstack/ai-openai' const stream = chat({ adapter: openai(), - model: "gpt-4o", - messages: [{ role: "user", content: "What's the weather in Paris?" }], + model: 'gpt-4o', + messages: [{ role: 'user', content: "What's the weather in Paris?" }], tools: [weatherTool], agentLoopStrategy: maxIterations(5), // Optional: control loop -}); +}) // SDK automatically: // 1. Detects tool calls from model @@ -416,20 +382,20 @@ import { maxIterations, untilFinishReason, combineStrategies, -} from "@tanstack/ai"; +} from '@tanstack/ai' // Built-in strategies -agentLoopStrategy: maxIterations(10); -agentLoopStrategy: untilFinishReason(["stop", "length"]); +agentLoopStrategy: maxIterations(10) +agentLoopStrategy: untilFinishReason(['stop', 'length']) agentLoopStrategy: combineStrategies([ maxIterations(10), ({ messages }) => messages.length < 100, -]); +]) // Custom strategy agentLoopStrategy: ({ iterationCount, messages, finishReason }) => { - return iterationCount < 10 && messages.length < 50; -}; + return iterationCount < 10 && messages.length < 50 +} ``` #### 3. 
ToolCallManager Class @@ -437,46 +403,46 @@ agentLoopStrategy: ({ iterationCount, messages, finishReason }) => { Tool execution logic extracted into a testable class: ```typescript -import { ToolCallManager } from "@tanstack/ai"; +import { ToolCallManager } from '@tanstack/ai' -const manager = new ToolCallManager(tools); +const manager = new ToolCallManager(tools) // Accumulate tool calls from stream -manager.addToolCallChunk(chunk); +manager.addToolCallChunk(chunk) // Check if tools need execution if (manager.hasToolCalls()) { - const results = yield * manager.executeTools(doneChunk); + const results = yield * manager.executeTools(doneChunk) } // Clear for next iteration -manager.clear(); +manager.clear() ``` #### 4. Explicit Server-Sent Events Helpers ```typescript -import { toStreamResponse, toServerSentEventsStream } from "@tanstack/ai"; +import { toStreamResponse, toServerSentEventsStream } from '@tanstack/ai' // Full HTTP Response with SSE headers -return toStreamResponse(stream); +return toStreamResponse(stream) // Just the ReadableStream (for custom response) return new Response(toServerSentEventsStream(stream), { - headers: { "X-Custom": "value" }, -}); + headers: { 'X-Custom': 'value' }, +}) ``` ### New Exports ```typescript // From @tanstack/ai -export { chat, chatCompletion }; // Separate streaming and promise methods -export { toStreamResponse, toServerSentEventsStream }; // HTTP helpers -export { ToolCallManager }; // Tool execution manager -export { maxIterations, untilFinishReason, combineStrategies }; // Loop strategies -export type { AgentLoopStrategy, AgentLoopState }; // Strategy types -export type { ToolResultStreamChunk }; // New chunk type +export { chat, chatCompletion } // Separate streaming and promise methods +export { toStreamResponse, toServerSentEventsStream } // HTTP helpers +export { ToolCallManager } // Tool execution manager +export { maxIterations, untilFinishReason, combineStrategies } // Loop strategies +export type { 
AgentLoopStrategy, AgentLoopState } // Strategy types +export type { ToolResultStreamChunk } // New chunk type ``` ### Migration Guide @@ -531,13 +497,11 @@ cd packages/ai && pnpm test ### Breaking Changes Summary 1. **`chat()` method**: - - No longer accepts `as` option - Now streaming-only - Includes automatic tool execution loop 2. **New `chatCompletion()` method**: - - Promise-based - Supports structured output - No automatic tool execution diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md new file mode 100644 index 000000000..fa111aa2d --- /dev/null +++ b/CODE_OF_CONDUCT.md @@ -0,0 +1,81 @@ +--- +title: Code of Conduct +id: code-of-conduct +--- + +# Contributor Covenant Code of Conduct + +## Our Pledge + +In the interest of fostering an open and welcoming environment, we as +contributors and maintainers pledge to making participation in our project and +our community a harassment-free experience for everyone, regardless of age, body +size, disability, ethnicity, sex characteristics, gender identity and expression, +level of experience, education, socio-economic status, nationality, personal +appearance, race, religion, or sexual identity and orientation. 
+ +## Our Standards + +Examples of behavior that contributes to creating a positive environment +include: + +- Using welcoming and inclusive language +- Being respectful of differing viewpoints and experiences +- Gracefully accepting constructive criticism +- Focusing on what is best for the community +- Showing empathy towards other community members + +Examples of unacceptable behavior by participants include: + +- The use of sexualized language or imagery and unwelcome sexual attention or + advances +- Trolling, insulting/derogatory comments, and personal or political attacks +- Public or private harassment +- Publishing others' private information, such as a physical or electronic + address, without explicit permission +- Other conduct which could reasonably be considered inappropriate in a + professional setting + +## Our Responsibilities + +Project maintainers are responsible for clarifying the standards of acceptable +behavior and are expected to take appropriate and fair corrective action in +response to any instances of unacceptable behavior. + +Project maintainers have the right and responsibility to remove, edit, or +reject comments, commits, code, wiki edits, issues, and other contributions +that are not aligned to this Code of Conduct, or to ban temporarily or +permanently any contributor for other behaviors that they deem inappropriate, +threatening, offensive, or harmful. + +## Scope + +This Code of Conduct applies both within project spaces and in public spaces +when an individual is representing the project or its community. Examples of +representing a project or community include using an official project e-mail +address, posting via an official social media account, or acting as an appointed +representative at an online or offline event. Representation of a project may be +further defined and clarified by project maintainers. 
+ +## Enforcement + +Instances of abusive, harassing, or otherwise unacceptable behavior may be +reported by contacting the project team at TANNERLINSLEY@GMAIL.COM. All +complaints will be reviewed and investigated and will result in a response that +is deemed necessary and appropriate to the circumstances. The project team is +obligated to maintain confidentiality with regard to the reporter of an incident. +Further details of specific enforcement policies may be posted separately. + +Project maintainers who do not follow or enforce the Code of Conduct in good +faith may face temporary or permanent repercussions as determined by other +members of the project's leadership. + +## Attribution + +This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, +available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html + +[homepage]: https://www.contributor-covenant.org + +For answers to common questions about this code of conduct, see +https://www.contributor-covenant.org/faq diff --git a/LICENSE b/LICENSE new file mode 100644 index 000000000..308cb68dc --- /dev/null +++ b/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2025 Tanner Linsley + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/ai-docs/AGENT_LOOP_STRATEGIES.md b/ai-docs/AGENT_LOOP_STRATEGIES.md index 318916fb9..f861faeca 100644 --- a/ai-docs/AGENT_LOOP_STRATEGIES.md +++ b/ai-docs/AGENT_LOOP_STRATEGIES.md @@ -85,22 +85,22 @@ const simple: AgentLoopStrategy = ({ iterationCount }) => { }; // Advanced: based on multiple conditions -const advanced: AgentLoopStrategy = ({ - iterationCount, - messages, - finishReason +const advanced: AgentLoopStrategy = ({ + iterationCount, + messages, + finishReason }) => { // Stop after 10 iterations if (iterationCount >= 10) return false; - + // Stop if conversation gets too long if (messages.length > 50) return false; - + // Stop on specific finish reasons if (finishReason === "length" || finishReason === "content_filter") { return false; } - + // Otherwise continue return true; }; @@ -122,17 +122,18 @@ The state object passed to your strategy function: ```typescript export interface AgentLoopState { /** Current iteration count (0-indexed) */ - iterationCount: number; - + iterationCount: number + /** Current messages in the conversation */ - messages: Message[]; - + messages: Message[] + /** Finish reason from the last model response */ - finishReason: string | null; + finishReason: string | null } ``` **Finish reasons:** + - `"stop"` - Model finished naturally - `"length"` - Hit token limit - `"tool_calls"` - Model called tools (triggers tool execution) @@ -146,8 +147,8 @@ export interface AgentLoopState { ```typescript // Stop after 3 iterations OR 20 messages const conservative: AgentLoopStrategy = ({ iterationCount, messages }) => { - return iterationCount < 3 && messages.length < 20; -}; + return iterationCount < 3 && messages.length < 20 +} ``` ### Budget Control @@ -155,12 +156,12 
@@ const conservative: AgentLoopStrategy = ({ iterationCount, messages }) => { ```typescript // Stop based on estimated token usage const budgetAware: AgentLoopStrategy = ({ messages }) => { - const estimatedTokens = messages.reduce((sum, m) => - sum + (m.content?.length || 0) / 4, // Rough estimate - 0 - ); - return estimatedTokens < 10000; // Stop before 10k tokens -}; + const estimatedTokens = messages.reduce( + (sum, m) => sum + (m.content?.length || 0) / 4, // Rough estimate + 0, + ) + return estimatedTokens < 10000 // Stop before 10k tokens +} ``` ### Conditional Execution @@ -168,13 +169,15 @@ const budgetAware: AgentLoopStrategy = ({ messages }) => { ```typescript // Different limits for different scenarios const conditional: AgentLoopStrategy = ({ iterationCount, messages }) => { - const hasToolCalls = messages.some(m => m.toolCalls && m.toolCalls.length > 0); - + const hasToolCalls = messages.some( + (m) => m.toolCalls && m.toolCalls.length > 0, + ) + // Allow more iterations if tools are being used - const maxIters = hasToolCalls ? 10 : 3; - - return iterationCount < maxIters; -}; + const maxIters = hasToolCalls ? 10 : 3 + + return iterationCount < maxIters +} ``` ### Debug Mode @@ -182,9 +185,9 @@ const conditional: AgentLoopStrategy = ({ iterationCount, messages }) => { ```typescript // Stop early during development const debug: AgentLoopStrategy = ({ iterationCount }) => { - console.log(`Iteration ${iterationCount + 1}`); - return iterationCount < 2; // Only 2 iterations in debug mode -}; + console.log(`Iteration ${iterationCount + 1}`) + return iterationCount < 2 // Only 2 iterations in debug mode +} ``` ## Pattern: Strategy Factory @@ -251,30 +254,42 @@ Both are equivalent. 
The `maxIterations` number is automatically converted to `a ### Unit Test Example ```typescript -import { describe, it, expect } from "vitest"; -import type { AgentLoopStrategy, AgentLoopState } from "@tanstack/ai"; +import { describe, it, expect } from 'vitest' +import type { AgentLoopStrategy, AgentLoopState } from '@tanstack/ai' -describe("Custom Strategy", () => { - it("should stop after 3 iterations", () => { +describe('Custom Strategy', () => { + it('should stop after 3 iterations', () => { const strategy: AgentLoopStrategy = ({ iterationCount }) => { - return iterationCount < 3; - }; - - expect(strategy({ iterationCount: 0, messages: [], finishReason: null })).toBe(true); - expect(strategy({ iterationCount: 2, messages: [], finishReason: null })).toBe(true); - expect(strategy({ iterationCount: 3, messages: [], finishReason: null })).toBe(false); - }); - - it("should stop when finish reason is length", () => { + return iterationCount < 3 + } + + expect( + strategy({ iterationCount: 0, messages: [], finishReason: null }), + ).toBe(true) + expect( + strategy({ iterationCount: 2, messages: [], finishReason: null }), + ).toBe(true) + expect( + strategy({ iterationCount: 3, messages: [], finishReason: null }), + ).toBe(false) + }) + + it('should stop when finish reason is length', () => { const strategy: AgentLoopStrategy = ({ finishReason }) => { - return finishReason !== "length"; - }; - - expect(strategy({ iterationCount: 0, messages: [], finishReason: null })).toBe(true); - expect(strategy({ iterationCount: 0, messages: [], finishReason: "stop" })).toBe(true); - expect(strategy({ iterationCount: 0, messages: [], finishReason: "length" })).toBe(false); - }); -}); + return finishReason !== 'length' + } + + expect( + strategy({ iterationCount: 0, messages: [], finishReason: null }), + ).toBe(true) + expect( + strategy({ iterationCount: 0, messages: [], finishReason: 'stop' }), + ).toBe(true) + expect( + strategy({ iterationCount: 0, messages: [], finishReason: 
'length' }), + ).toBe(false) + }) +}) ``` ## Best Practices @@ -323,34 +338,37 @@ const stream = chat({ // Aggressive limits during development const devStrategy: AgentLoopStrategy = ({ iterationCount, messages }) => { if (iterationCount >= 2) { - console.warn("DEV: Stopping at 2 iterations"); - return false; + console.warn('DEV: Stopping at 2 iterations') + return false } if (messages.length >= 10) { - console.warn("DEV: Stopping at 10 messages"); - return false; + console.warn('DEV: Stopping at 10 messages') + return false } - return true; -}; + return true +} ``` ## Migration from maxIterations Before: + ```typescript chat({ ..., maxIterations: 10 }) ``` After: + ```typescript import { maxIterations } from "@tanstack/ai"; chat({ ..., agentLoopStrategy: maxIterations(10) }) ``` Or create a custom strategy: + ```typescript -chat({ - ..., +chat({ + ..., agentLoopStrategy: ({ iterationCount, messages }) => { return iterationCount < 10 && messages.length < 50; } @@ -362,4 +380,3 @@ chat({ - [Tool Execution Loop Documentation](TOOL_EXECUTION_LOOP.md) - [Unified Chat API](UNIFIED_CHAT_API.md) - [Quick Reference](UNIFIED_CHAT_QUICK_REFERENCE.md) - diff --git a/ai-docs/CONNECTION_ADAPTERS_GUIDE.md b/ai-docs/CONNECTION_ADAPTERS_GUIDE.md index 5ec8d6664..4b788be68 100644 --- a/ai-docs/CONNECTION_ADAPTERS_GUIDE.md +++ b/ai-docs/CONNECTION_ADAPTERS_GUIDE.md @@ -7,6 +7,7 @@ Connection adapters provide a flexible, pluggable way to connect `ChatClient` an ## Why Connection Adapters? 
**Before (Hardcoded):** + - āŒ Locked to HTTP fetch - āŒ Locked to specific API format - āŒ Hard to test @@ -14,6 +15,7 @@ Connection adapters provide a flexible, pluggable way to connect `ChatClient` an - āŒ Can't customize streaming logic **After (Adapters):** + - āœ… Support any streaming source - āœ… Easy to test with mocks - āœ… Works with server functions @@ -27,11 +29,13 @@ Connection adapters provide a flexible, pluggable way to connect `ChatClient` an **For:** HTTP APIs using Server-Sent Events format **When to use:** + - Your backend uses `toStreamResponse()` from `@tanstack/ai` - Standard HTTP streaming API - Most common use case **Example:** + ```typescript import { useChat, fetchServerSentEvents } from "@tanstack/ai-react"; @@ -42,12 +46,13 @@ function Chat() { credentials: "include", }), }); - + return ; } ``` **Server format expected:** + ``` data: {"type":"content","delta":"Hello","content":"Hello",...} data: {"type":"content","delta":" world","content":"Hello world",...} @@ -60,11 +65,13 @@ data: [DONE] **For:** HTTP APIs using raw newline-delimited JSON **When to use:** + - Your backend streams newline-delimited JSON directly - Custom streaming format - Not using SSE **Example:** + ```typescript import { useChat, fetchHttpStream } from "@tanstack/ai-react"; @@ -74,12 +81,13 @@ function Chat() { headers: { "X-Custom-Header": "value" }, }), }); - + return ; } ``` **Server format expected:** + ``` {"type":"content","delta":"Hello","content":"Hello",...} {"type":"content","delta":" world","content":"Hello world",...} @@ -91,57 +99,61 @@ function Chat() { **For:** Direct async iterables (no HTTP) **When to use:** + - TanStack Start server functions - Server-side rendering - Testing with mock streams - Direct function calls **Example with Server Function:** + ```typescript import { useChat, stream } from "@tanstack/ai-react"; import { serverChatFunction } from "./server"; function Chat() { const chat = useChat({ - connection: stream((messages, data) 
=> + connection: stream((messages, data) => serverChatFunction({ messages, data }) ), }); - + return ; } ``` **Server function:** + ```typescript // server.ts -import { chat } from "@tanstack/ai"; -import { openai } from "@tanstack/ai-openai"; +import { chat } from '@tanstack/ai' +import { openai } from '@tanstack/ai-openai' -export async function* serverChatFunction({ - messages -}: { - messages: Message[] +export async function* serverChatFunction({ + messages, +}: { + messages: Message[] }) { yield* chat({ adapter: openai(), - model: "gpt-4o", + model: 'gpt-4o', messages, - }); + }) } ``` **Example with Mock for Testing:** + ```typescript -import { ChatClient, stream } from "@tanstack/ai-client"; +import { ChatClient, stream } from '@tanstack/ai-client' const mockStream = stream(async function* (messages) { - yield { type: "content", delta: "Hello", content: "Hello" }; - yield { type: "content", delta: " world", content: "Hello world" }; - yield { type: "done", finishReason: "stop" }; -}); + yield { type: 'content', delta: 'Hello', content: 'Hello' } + yield { type: 'content', delta: ' world', content: 'Hello world' } + yield { type: 'done', finishReason: 'stop' } +}) -const client = new ChatClient({ connection: mockStream }); +const client = new ChatClient({ connection: mockStream }) ``` ## Custom Adapters @@ -151,69 +163,69 @@ You can create custom connection adapters for any streaming scenario: ### WebSocket Example ```typescript -import type { ConnectionAdapter } from "@tanstack/ai-client"; -import type { StreamChunk } from "@tanstack/ai"; +import type { ConnectionAdapter } from '@tanstack/ai-client' +import type { StreamChunk } from '@tanstack/ai' function createWebSocketAdapter(url: string): ConnectionAdapter { - let ws: WebSocket | null = null; - + let ws: WebSocket | null = null + return { async *connect(messages, data) { - ws = new WebSocket(url); - + ws = new WebSocket(url) + // Wait for connection await new Promise((resolve, reject) => { - ws!.onopen 
= resolve; - ws!.onerror = reject; - }); - + ws!.onopen = resolve + ws!.onerror = reject + }) + // Send messages - ws.send(JSON.stringify({ messages, data })); - + ws.send(JSON.stringify({ messages, data })) + // Yield chunks as they arrive - const queue: StreamChunk[] = []; - let resolveNext: ((chunk: StreamChunk) => void) | null = null; - let done = false; - + const queue: StreamChunk[] = [] + let resolveNext: ((chunk: StreamChunk) => void) | null = null + let done = false + ws.onmessage = (event) => { - const chunk = JSON.parse(event.data); + const chunk = JSON.parse(event.data) if (resolveNext) { - resolveNext(chunk); - resolveNext = null; + resolveNext(chunk) + resolveNext = null } else { - queue.push(chunk); + queue.push(chunk) } - - if (chunk.type === "done") { - done = true; - ws!.close(); + + if (chunk.type === 'done') { + done = true + ws!.close() } - }; - + } + while (!done || queue.length > 0) { if (queue.length > 0) { - yield queue.shift()!; + yield queue.shift()! } else { yield await new Promise((resolve) => { - resolveNext = resolve; - }); + resolveNext = resolve + }) } } }, - + abort() { if (ws) { - ws.close(); - ws = null; + ws.close() + ws = null } }, - }; + } } // Use it const chat = useChat({ - connection: createWebSocketAdapter("wss://api.example.com/chat"), -}); + connection: createWebSocketAdapter('wss://api.example.com/chat'), +}) ``` ### GraphQL Subscription Example @@ -221,58 +233,58 @@ const chat = useChat({ ```typescript function createGraphQLSubscriptionAdapter( client: GraphQLClient, - subscription: string + subscription: string, ): ConnectionAdapter { - let unsubscribe: (() => void) | null = null; - + let unsubscribe: (() => void) | null = null + return { async *connect(messages, data) { const observable = client.subscribe({ query: subscription, variables: { messages, data }, - }); - - const queue: StreamChunk[] = []; - let resolveNext: ((chunk: StreamChunk) => void) | null = null; - let done = false; - + }) + + const queue: 
StreamChunk[] = [] + let resolveNext: ((chunk: StreamChunk) => void) | null = null + let done = false + unsubscribe = observable.subscribe({ next: (result) => { - const chunk = result.data.chatStream; + const chunk = result.data.chatStream if (resolveNext) { - resolveNext(chunk); - resolveNext = null; + resolveNext(chunk) + resolveNext = null } else { - queue.push(chunk); + queue.push(chunk) } - - if (chunk.type === "done") { - done = true; + + if (chunk.type === 'done') { + done = true } }, error: (error) => { - throw error; + throw error }, - }).unsubscribe; - + }).unsubscribe + while (!done || queue.length > 0) { if (queue.length > 0) { - yield queue.shift()!; + yield queue.shift()! } else { yield await new Promise((resolve) => { - resolveNext = resolve; - }); + resolveNext = resolve + }) } } }, - + abort() { if (unsubscribe) { - unsubscribe(); - unsubscribe = null; + unsubscribe() + unsubscribe = null } }, - }; + } } ``` @@ -282,22 +294,22 @@ function createGraphQLSubscriptionAdapter( ```typescript const chat = useChat({ - connection: fetchServerSentEvents("/api/chat"), -}); + connection: fetchServerSentEvents('/api/chat'), +}) ``` ### 2. Authenticated API ```typescript const chat = useChat({ - connection: fetchServerSentEvents("/api/chat", { + connection: fetchServerSentEvents('/api/chat', { headers: { - "Authorization": `Bearer ${token}`, - "X-User-ID": userId, + Authorization: `Bearer ${token}`, + 'X-User-ID': userId, }, - credentials: "include", + credentials: 'include', }), -}); +}) ``` ### 3. TanStack Start Server Function @@ -306,26 +318,26 @@ const chat = useChat({ // No HTTP overhead, direct function call const chat = useChat({ connection: stream((messages) => serverChat({ messages })), -}); +}) ``` ### 4. WebSocket Real-time ```typescript const chat = useChat({ - connection: createWebSocketAdapter("wss://api.example.com/chat"), -}); + connection: createWebSocketAdapter('wss://api.example.com/chat'), +}) ``` ### 5. 
Testing with Mocks ```typescript const mockAdapter = stream(async function* (messages) { - yield { type: "content", delta: "Test", content: "Test" }; - yield { type: "done", finishReason: "stop" }; -}); + yield { type: 'content', delta: 'Test', content: 'Test' } + yield { type: 'done', finishReason: 'stop' } +}) -const client = new ChatClient({ connection: mockAdapter }); +const client = new ChatClient({ connection: mockAdapter }) // Easy to test without real API! ``` @@ -334,6 +346,7 @@ const client = new ChatClient({ connection: mockAdapter }); ### 1. Flexibility Support any streaming source: + - āœ… HTTP (SSE or raw) - āœ… WebSockets - āœ… GraphQL subscriptions @@ -347,11 +360,11 @@ Easy to test with mock adapters: ```typescript const mockConnection = stream(async function* () { - yield { type: "content", delta: "Hello", content: "Hello" }; - yield { type: "done", finishReason: "stop" }; -}); + yield { type: 'content', delta: 'Hello', content: 'Hello' } + yield { type: 'done', finishReason: 'stop' } +}) -const client = new ChatClient({ connection: mockConnection }); +const client = new ChatClient({ connection: mockConnection }) ``` ### 3. 
Type Safety @@ -362,9 +375,9 @@ Full TypeScript support with proper types: interface ConnectionAdapter { connect( messages: any[], - data?: Record<string, any> - ): AsyncIterable<StreamChunk>; - abort?(): void; + data?: Record<string, any>, + ): AsyncIterable<StreamChunk> + abort?(): void } ``` @@ -376,7 +389,7 @@ Direct streams bypass HTTP overhead: // No HTTP serialization/deserialization const chat = useChat({ connection: stream((messages) => directServerFunction(messages)), -}); +}) ``` ## Advanced Examples ### Retry Adapter with Backoff ```typescript function createRetryAdapter( baseAdapter: ConnectionAdapter, - maxRetries: number = 3 + maxRetries: number = 3, ): ConnectionAdapter { return { async *connect(messages, data) { - let lastError: Error | null = null; - + let lastError: Error | null = null + for (let attempt = 0; attempt < maxRetries; attempt++) { try { - yield* baseAdapter.connect(messages, data); - return; // Success + yield* baseAdapter.connect(messages, data) + return // Success } catch (error) { - lastError = error as Error; + lastError = error as Error if (attempt < maxRetries - 1) { - await new Promise(resolve => setTimeout(resolve, 1000 * (attempt + 1))); + await new Promise((resolve) => + setTimeout(resolve, 1000 * (attempt + 1)), + ) } } } - - throw lastError; + + throw lastError }, - + abort() { - baseAdapter.abort?.(); + baseAdapter.abort?.() }, - }; + } } // Use it const chat = useChat({ - connection: createRetryAdapter( - fetchServerSentEvents("/api/chat"), - 3 - ), -}); + connection: createRetryAdapter(fetchServerSentEvents('/api/chat'), 3), +}) ``` ### Caching Adapter ```typescript function createCachingAdapter( - baseAdapter: ConnectionAdapter + baseAdapter: ConnectionAdapter, ): ConnectionAdapter { - const cache = new Map(); - + const cache = new Map() + return { async *connect(messages, data) { - const cacheKey = JSON.stringify(messages); - + const cacheKey = JSON.stringify(messages) + if (cache.has(cacheKey)) { // Replay from cache for (const chunk of 
cache.get(cacheKey)!) { - yield chunk; + yield chunk } - return; + return } - + // Cache chunks as they arrive - const chunks: StreamChunk[] = []; + const chunks: StreamChunk[] = [] for await (const chunk of baseAdapter.connect(messages, data)) { - chunks.push(chunk); - yield chunk; + chunks.push(chunk) + yield chunk } - - cache.set(cacheKey, chunks); + + cache.set(cacheKey, chunks) }, - + abort() { - baseAdapter.abort?.(); + baseAdapter.abort?.() }, - }; + } } ``` @@ -464,38 +476,38 @@ function createCachingAdapter( ```typescript function createLoggingAdapter( baseAdapter: ConnectionAdapter, - logger: (message: string, data: any) => void + logger: (message: string, data: any) => void, ): ConnectionAdapter { return { async *connect(messages, data) { - logger("Connection started", { messages, data }); - + logger('Connection started', { messages, data }) + try { for await (const chunk of baseAdapter.connect(messages, data)) { - logger("Chunk received", chunk); - yield chunk; + logger('Chunk received', chunk) + yield chunk } - logger("Connection complete", {}); + logger('Connection complete', {}) } catch (error) { - logger("Connection error", error); - throw error; + logger('Connection error', error) + throw error } }, - + abort() { - logger("Connection aborted", {}); - baseAdapter.abort?.(); + logger('Connection aborted', {}) + baseAdapter.abort?.() }, - }; + } } // Use it const chat = useChat({ connection: createLoggingAdapter( - fetchServerSentEvents("/api/chat"), - console.log + fetchServerSentEvents('/api/chat'), + console.log, ), -}); +}) ``` ## Best Practices @@ -505,15 +517,17 @@ const chat = useChat({ ```typescript // āœ… Good - use built-in adapter const chat = useChat({ - connection: fetchServerSentEvents("/api/chat"), -}); + connection: fetchServerSentEvents('/api/chat'), +}) // āŒ Avoid - custom adapter for standard SSE const chat = useChat({ - connection: { - connect: async function* () { /* reimplementing SSE */ } + connection: { + connect: async 
function* () { + /* reimplementing SSE */ + }, }, -}); +}) ``` ### 2. Compose Adapters @@ -521,13 +535,10 @@ const chat = useChat({ ```typescript const chat = useChat({ connection: createLoggingAdapter( - createRetryAdapter( - fetchServerSentEvents("/api/chat"), - 3 - ), - console.log + createRetryAdapter(fetchServerSentEvents('/api/chat'), 3), + console.log, ), -}); +}) ``` ### 3. Handle Errors Gracefully @@ -536,16 +547,16 @@ const chat = useChat({ const connection: ConnectionAdapter = { async *connect(messages, data) { try { - yield* fetchServerSentEvents("/api/chat").connect(messages, data); + yield* fetchServerSentEvents('/api/chat').connect(messages, data) } catch (error) { // Emit error chunk instead of throwing yield { - type: "error", - error: { message: error.message, code: "CONNECTION_ERROR" }, - }; + type: 'error', + error: { message: error.message, code: 'CONNECTION_ERROR' }, + } } }, -}; +} ``` ### 4. Implement Abort Support @@ -556,37 +567,37 @@ function createCustomAdapter(url: string): ConnectionAdapter { async *connect(messages, data, abortSignal) { // Use the provided abortSignal from ChatClient const response = await fetch(url, { - method: "POST", + method: 'POST', body: JSON.stringify({ messages, data }), signal: abortSignal, // Pass abort signal to fetch - }); - - const reader = response.body?.getReader(); + }) + + const reader = response.body?.getReader() if (!reader) { - throw new Error("Response body is not readable"); + throw new Error('Response body is not readable') } try { - const decoder = new TextDecoder(); - + const decoder = new TextDecoder() + while (true) { // Check if aborted before reading if (abortSignal?.aborted) { - break; + break } - const { done, value } = await reader.read(); - if (done) break; + const { done, value } = await reader.read() + if (done) break // Process chunks... - const chunk = decoder.decode(value, { stream: true }); + const chunk = decoder.decode(value, { stream: true }) // Yield parsed chunks... 
} } finally { - reader.releaseLock(); + reader.releaseLock() } }, - }; + } } ``` @@ -595,52 +606,50 @@ function createCustomAdapter(url: string): ConnectionAdapter { ### Unit Testing ChatClient ```typescript -import { ChatClient, stream } from "@tanstack/ai-client"; -import { describe, it, expect } from "vitest"; +import { ChatClient, stream } from '@tanstack/ai-client' +import { describe, it, expect } from 'vitest' -describe("ChatClient with mock adapter", () => { - it("should process messages", async () => { +describe('ChatClient with mock adapter', () => { + it('should process messages', async () => { const mockAdapter = stream(async function* (messages) { - expect(messages).toHaveLength(1); - expect(messages[0].content).toBe("Hello"); - - yield { type: "content", delta: "Hi", content: "Hi" }; - yield { type: "done", finishReason: "stop" }; - }); - - const client = new ChatClient({ connection: mockAdapter }); - - await client.sendMessage("Hello"); - - expect(client.getMessages()).toHaveLength(2); - expect(client.getMessages()[1].content).toBe("Hi"); - }); -}); + expect(messages).toHaveLength(1) + expect(messages[0].content).toBe('Hello') + + yield { type: 'content', delta: 'Hi', content: 'Hi' } + yield { type: 'done', finishReason: 'stop' } + }) + + const client = new ChatClient({ connection: mockAdapter }) + + await client.sendMessage('Hello') + + expect(client.getMessages()).toHaveLength(2) + expect(client.getMessages()[1].content).toBe('Hi') + }) +}) ``` ### Integration Testing with React ```typescript -import { renderHook, waitFor } from "@testing-library/react"; -import { useChat, stream } from "@tanstack/ai-react"; +import { renderHook, waitFor } from '@testing-library/react' +import { useChat, stream } from '@tanstack/ai-react' -test("useChat with mock adapter", async () => { +test('useChat with mock adapter', async () => { const mockAdapter = stream(async function* () { - yield { type: "content", delta: "Test", content: "Test" }; - yield { type: "done", 
finishReason: "stop" }; - }); - - const { result } = renderHook(() => - useChat({ connection: mockAdapter }) - ); - - await result.current.sendMessage("Hello"); - + yield { type: 'content', delta: 'Test', content: 'Test' } + yield { type: 'done', finishReason: 'stop' } + }) + + const { result } = renderHook(() => useChat({ connection: mockAdapter })) + + await result.current.sendMessage('Hello') + await waitFor(() => { - expect(result.current.messages).toHaveLength(2); - expect(result.current.messages[1].content).toBe("Test"); - }); -}); + expect(result.current.messages).toHaveLength(2) + expect(result.current.messages[1].content).toBe('Test') + }) +}) ``` ## Reference @@ -657,13 +666,13 @@ interface ConnectionAdapter { */ connect( messages: any[], - data?: Record<string, any> - ): AsyncIterable<StreamChunk>; - + data?: Record<string, any>, + ): AsyncIterable<StreamChunk> + /** * Optional: Abort the current connection */ - abort?(): void; + abort?(): void } ``` @@ -671,9 +680,9 @@ ```typescript interface FetchConnectionOptions { - headers?: Record<string, string> | Headers; - credentials?: RequestCredentials; // "omit" | "same-origin" | "include" - signal?: AbortSignal; + headers?: Record<string, string> | Headers + credentials?: RequestCredentials // "omit" | "same-origin" | "include" + signal?: AbortSignal } ``` @@ -683,4 +692,3 @@ - šŸ“– [useChat Hook](../packages/ai-react/README.md) - šŸ“– [Tool Execution Loop](TOOL_EXECUTION_LOOP.md) - šŸ“– [Connection Adapters Examples](../packages/ai-client/CONNECTION_ADAPTERS.md) - diff --git a/ai-docs/EVENT_CLIENT.md b/ai-docs/EVENT_CLIENT.md index d5a7d58e7..be9065a0c 100644 --- a/ai-docs/EVENT_CLIENT.md +++ b/ai-docs/EVENT_CLIENT.md @@ -1,405 +1,422 @@ -# AI Event Client - Observability & Debugging - -The `@tanstack/ai/event-client` provides a type-safe EventEmitter for monitoring and debugging AI operations in your application. 
- -## Installation - -The event client is included with `@tanstack/ai`: - -```bash -npm install @tanstack/ai -``` - -## Usage - -```typescript -import { aiEventClient } from '@tanstack/ai/event-client'; - -// Subscribe to events -aiEventClient.on('stream:content', (data) => { - console.log('Content delta:', data.delta); -}); - -aiEventClient.on('usage:tokens', (data) => { - console.log('Tokens used:', data.usage.totalTokens); -}); -``` - -## Available Events - -### Chat Lifecycle Events - -#### `chat:started` -Emitted when a chat completion or stream starts. - -```typescript -{ - timestamp: number; - model: string; - messageCount: number; - hasTools: boolean; - streaming: boolean; -} -``` - -#### `chat:completed` -Emitted when a non-streaming chat completion finishes. - -```typescript -{ - timestamp: number; - model: string; - result: ChatCompletionResult; - duration: number; -} -``` - -#### `chat:iteration` -Emitted when the AI makes another iteration (e.g., for tool calling). - -```typescript -{ - timestamp: number; - iteration: number; - reason: string; -} -``` - -### Stream Events - -#### `stream:started` -Emitted when a streaming response begins. - -```typescript -{ - timestamp: number; - messageId: string; -} -``` - -#### `stream:ended` -Emitted when a streaming response completes. - -```typescript -{ - timestamp: number; - messageId: string; - duration: number; -} -``` - -#### `stream:chunk` -Emitted for every stream chunk (includes all chunk types). - -```typescript -{ - timestamp: number; - messageId: string; - chunk: StreamChunk; -} -``` - -#### `stream:content` -Emitted for content delta chunks. - -```typescript -{ - timestamp: number; - messageId: string; - delta: string; -} -``` - -#### `stream:tool-call` -Emitted when a tool call is received. - -```typescript -{ - timestamp: number; - messageId: string; - toolCallId: string; - toolName: string; - arguments: string; -} -``` - -#### `stream:tool-result` -Emitted when a tool result is received. 
- -```typescript -{ - timestamp: number; - messageId: string; - toolCallId: string; - content: string; -} -``` - -#### `stream:done` -Emitted when the stream completes with finish reason and usage. - -```typescript -{ - timestamp: number; - messageId: string; - finishReason: string | null; - usage?: { - promptTokens: number; - completionTokens: number; - totalTokens: number; - }; -} -``` - -#### `stream:error` -Emitted when a stream encounters an error. - -```typescript -{ - timestamp: number; - messageId: string; - error: { - message: string; - code?: string; - }; -} -``` - -### Tool Events - -#### `tool:approval-requested` -Emitted when a tool requires user approval before execution. - -```typescript -{ - timestamp: number; - messageId: string; - toolCallId: string; - toolName: string; - input: any; - approvalId: string; -} -``` - -#### `tool:input-available` -Emitted when a client-side tool has its input ready. - -```typescript -{ - timestamp: number; - messageId: string; - toolCallId: string; - toolName: string; - input: any; -} -``` - -#### `tool:completed` -Emitted when a tool execution completes. - -```typescript -{ - timestamp: number; - toolCallId: string; - toolName: string; - result: any; - duration: number; -} -``` - -### Token Usage Events - -#### `usage:tokens` -Emitted when token usage information is available (both streaming and non-streaming). - -```typescript -{ - timestamp: number; - messageId?: string; - model: string; - usage: { - promptTokens: number; - completionTokens: number; - totalTokens: number; - }; -} -``` - -## Example Use Cases - -### 1. 
Token Usage Tracking & Cost Monitoring - -```typescript -import { aiEventClient } from '@tanstack/ai/event-client'; - -let totalTokens = 0; -let totalCost = 0; - -const costPerToken = { - 'gpt-4o': 0.00003, // $30 per 1M tokens - 'gpt-4o-mini': 0.00000015, // $0.15 per 1M tokens -}; - -aiEventClient.on('usage:tokens', (data) => { - totalTokens += data.usage.totalTokens; - const cost = (data.usage.totalTokens * (costPerToken[data.model] || 0)); - totalCost += cost; - - console.log({ - model: data.model, - tokens: data.usage.totalTokens, - cost: `$${cost.toFixed(6)}`, - totalCost: `$${totalCost.toFixed(6)}`, - }); -}); -``` - -### 2. Real-time Content Streaming Display - -```typescript -import { aiEventClient } from '@tanstack/ai/event-client'; - -process.stdout.write('AI: '); -aiEventClient.on('stream:content', (data) => { - process.stdout.write(data.delta); -}); - -aiEventClient.on('stream:done', () => { - process.stdout.write('\n'); -}); -``` - -### 3. Logging & Debugging - -```typescript -import { aiEventClient } from '@tanstack/ai/event-client'; -import winston from 'winston'; - -const logger = winston.createLogger({ - level: 'info', - format: winston.format.json(), - transports: [ - new winston.transports.File({ filename: 'ai-events.log' }), - ], -}); - -// Log all events -aiEventClient.on('chat:started', (data) => { - logger.info('Chat started', data); -}); - -aiEventClient.on('stream:error', (data) => { - logger.error('Stream error', data); -}); - -aiEventClient.on('usage:tokens', (data) => { - logger.info('Token usage', data); -}); -``` - -### 4. 
Performance Monitoring - -```typescript -import { aiEventClient } from '@tanstack/ai/event-client'; - -const chatMetrics = new Map(); - -aiEventClient.on('chat:started', (data) => { - chatMetrics.set(data.timestamp, { - startTime: data.timestamp, - model: data.model, - }); -}); - -aiEventClient.on('chat:completed', (data) => { - const metrics = Array.from(chatMetrics.values()).find( - m => m.model === data.model - ); - - if (metrics) { - console.log('Performance:', { - model: data.model, - duration: data.duration, - tokensPerSecond: data.result.usage.totalTokens / (data.duration / 1000), - }); - } -}); -``` - -### 5. Tool Execution Monitoring - -```typescript -import { aiEventClient } from '@tanstack/ai/event-client'; - -aiEventClient.on('tool:input-available', (data) => { - console.log(`[${data.toolName}] Called with:`, data.input); -}); - -aiEventClient.on('tool:completed', (data) => { - console.log(`[${data.toolName}] Completed in ${data.duration}ms`); - console.log('Result:', data.result); -}); - -aiEventClient.on('tool:approval-requested', (data) => { - console.log(`[${data.toolName}] Needs approval:`, data.input); -}); -``` - -## API Reference - -### `aiEventClient.on(event, listener)` -Subscribe to an event. - -```typescript -aiEventClient.on('stream:content', (data) => { - // Handle event -}); -``` - -### `aiEventClient.once(event, listener)` -Subscribe to an event once (automatically unsubscribes after first emission). - -```typescript -aiEventClient.once('chat:completed', (data) => { - console.log('First chat completed:', data); -}); -``` - -### `aiEventClient.off(event, listener)` -Unsubscribe from an event. - -```typescript -const handler = (data) => console.log(data); -aiEventClient.on('stream:content', handler); -// Later... -aiEventClient.off('stream:content', handler); -``` - -### `aiEventClient.removeAllListeners(event?)` -Remove all listeners for a specific event or all events. 
- -```typescript -// Remove all listeners for a specific event -aiEventClient.removeAllListeners('stream:content'); - -// Remove all listeners for all events -aiEventClient.removeAllListeners(); -``` - -## Type Safety - -The event client is fully type-safe. TypeScript will autocomplete event names and infer the correct data types for each event: - -```typescript -aiEventClient.on('usage:tokens', (data) => { - // TypeScript knows data has: timestamp, model, usage - const totalTokens = data.usage.totalTokens; // āœ“ Type-safe -}); -``` - -## Notes - -- The event client uses Node.js `EventEmitter` under the hood -- Maximum listeners is set to 100 by default to prevent warnings in observability scenarios -- Events are emitted for both streaming and non-streaming operations -- All events include a `timestamp` field for tracking and analysis +# AI Event Client - Observability & Debugging + +The `@tanstack/ai/event-client` provides a type-safe EventEmitter for monitoring and debugging AI operations in your application. + +## Installation + +The event client is included with `@tanstack/ai`: + +```bash +npm install @tanstack/ai +``` + +## Usage + +```typescript +import { aiEventClient } from '@tanstack/ai/event-client' + +// Subscribe to events +aiEventClient.on('stream:content', (data) => { + console.log('Content delta:', data.delta) +}) + +aiEventClient.on('usage:tokens', (data) => { + console.log('Tokens used:', data.usage.totalTokens) +}) +``` + +## Available Events + +### Chat Lifecycle Events + +#### `chat:started` + +Emitted when a chat completion or stream starts. + +```typescript +{ + timestamp: number + model: string + messageCount: number + hasTools: boolean + streaming: boolean +} +``` + +#### `chat:completed` + +Emitted when a non-streaming chat completion finishes. 
+ +```typescript +{ + timestamp: number + model: string + result: ChatCompletionResult + duration: number +} +``` + +#### `chat:iteration` + +Emitted when the AI makes another iteration (e.g., for tool calling). + +```typescript +{ + timestamp: number + iteration: number + reason: string +} +``` + +### Stream Events + +#### `stream:started` + +Emitted when a streaming response begins. + +```typescript +{ + timestamp: number + messageId: string +} +``` + +#### `stream:ended` + +Emitted when a streaming response completes. + +```typescript +{ + timestamp: number + messageId: string + duration: number +} +``` + +#### `stream:chunk` + +Emitted for every stream chunk (includes all chunk types). + +```typescript +{ + timestamp: number + messageId: string + chunk: StreamChunk +} +``` + +#### `stream:content` + +Emitted for content delta chunks. + +```typescript +{ + timestamp: number + messageId: string + delta: string +} +``` + +#### `stream:tool-call` + +Emitted when a tool call is received. + +```typescript +{ + timestamp: number + messageId: string + toolCallId: string + toolName: string + arguments: string +} +``` + +#### `stream:tool-result` + +Emitted when a tool result is received. + +```typescript +{ + timestamp: number + messageId: string + toolCallId: string + content: string +} +``` + +#### `stream:done` + +Emitted when the stream completes with finish reason and usage. + +```typescript +{ + timestamp: number; + messageId: string; + finishReason: string | null; + usage?: { + promptTokens: number; + completionTokens: number; + totalTokens: number; + }; +} +``` + +#### `stream:error` + +Emitted when a stream encounters an error. + +```typescript +{ + timestamp: number; + messageId: string; + error: { + message: string; + code?: string; + }; +} +``` + +### Tool Events + +#### `tool:approval-requested` + +Emitted when a tool requires user approval before execution. 
+ +```typescript +{ + timestamp: number + messageId: string + toolCallId: string + toolName: string + input: any + approvalId: string +} +``` + +#### `tool:input-available` + +Emitted when a client-side tool has its input ready. + +```typescript +{ + timestamp: number + messageId: string + toolCallId: string + toolName: string + input: any +} +``` + +#### `tool:completed` + +Emitted when a tool execution completes. + +```typescript +{ + timestamp: number + toolCallId: string + toolName: string + result: any + duration: number +} +``` + +### Token Usage Events + +#### `usage:tokens` + +Emitted when token usage information is available (both streaming and non-streaming). + +```typescript +{ + timestamp: number; + messageId?: string; + model: string; + usage: { + promptTokens: number; + completionTokens: number; + totalTokens: number; + }; +} +``` + +## Example Use Cases + +### 1. Token Usage Tracking & Cost Monitoring + +```typescript +import { aiEventClient } from '@tanstack/ai/event-client' + +let totalTokens = 0 +let totalCost = 0 + +const costPerToken = { + 'gpt-4o': 0.00003, // $30 per 1M tokens + 'gpt-4o-mini': 0.00000015, // $0.15 per 1M tokens +} + +aiEventClient.on('usage:tokens', (data) => { + totalTokens += data.usage.totalTokens + const cost = data.usage.totalTokens * (costPerToken[data.model] || 0) + totalCost += cost + + console.log({ + model: data.model, + tokens: data.usage.totalTokens, + cost: `$${cost.toFixed(6)}`, + totalCost: `$${totalCost.toFixed(6)}`, + }) +}) +``` + +### 2. Real-time Content Streaming Display + +```typescript +import { aiEventClient } from '@tanstack/ai/event-client' + +process.stdout.write('AI: ') +aiEventClient.on('stream:content', (data) => { + process.stdout.write(data.delta) +}) + +aiEventClient.on('stream:done', () => { + process.stdout.write('\n') +}) +``` + +### 3. 
Logging & Debugging + +```typescript +import { aiEventClient } from '@tanstack/ai/event-client' +import winston from 'winston' + +const logger = winston.createLogger({ + level: 'info', + format: winston.format.json(), + transports: [new winston.transports.File({ filename: 'ai-events.log' })], +}) + +// Log all events +aiEventClient.on('chat:started', (data) => { + logger.info('Chat started', data) +}) + +aiEventClient.on('stream:error', (data) => { + logger.error('Stream error', data) +}) + +aiEventClient.on('usage:tokens', (data) => { + logger.info('Token usage', data) +}) +``` + +### 4. Performance Monitoring + +```typescript +import { aiEventClient } from '@tanstack/ai/event-client' + +const chatMetrics = new Map() + +aiEventClient.on('chat:started', (data) => { + chatMetrics.set(data.timestamp, { + startTime: data.timestamp, + model: data.model, + }) +}) + +aiEventClient.on('chat:completed', (data) => { + const metrics = Array.from(chatMetrics.values()).find( + (m) => m.model === data.model, + ) + + if (metrics) { + console.log('Performance:', { + model: data.model, + duration: data.duration, + tokensPerSecond: data.result.usage.totalTokens / (data.duration / 1000), + }) + } +}) +``` + +### 5. Tool Execution Monitoring + +```typescript +import { aiEventClient } from '@tanstack/ai/event-client' + +aiEventClient.on('tool:input-available', (data) => { + console.log(`[${data.toolName}] Called with:`, data.input) +}) + +aiEventClient.on('tool:completed', (data) => { + console.log(`[${data.toolName}] Completed in ${data.duration}ms`) + console.log('Result:', data.result) +}) + +aiEventClient.on('tool:approval-requested', (data) => { + console.log(`[${data.toolName}] Needs approval:`, data.input) +}) +``` + +## API Reference + +### `aiEventClient.on(event, listener)` + +Subscribe to an event. 
+ +```typescript +aiEventClient.on('stream:content', (data) => { + // Handle event +}) +``` + +### `aiEventClient.once(event, listener)` + +Subscribe to an event once (automatically unsubscribes after first emission). + +```typescript +aiEventClient.once('chat:completed', (data) => { + console.log('First chat completed:', data) +}) +``` + +### `aiEventClient.off(event, listener)` + +Unsubscribe from an event. + +```typescript +const handler = (data) => console.log(data) +aiEventClient.on('stream:content', handler) +// Later... +aiEventClient.off('stream:content', handler) +``` + +### `aiEventClient.removeAllListeners(event?)` + +Remove all listeners for a specific event or all events. + +```typescript +// Remove all listeners for a specific event +aiEventClient.removeAllListeners('stream:content') + +// Remove all listeners for all events +aiEventClient.removeAllListeners() +``` + +## Type Safety + +The event client is fully type-safe. TypeScript will autocomplete event names and infer the correct data types for each event: + +```typescript +aiEventClient.on('usage:tokens', (data) => { + // TypeScript knows data has: timestamp, model, usage + const totalTokens = data.usage.totalTokens // āœ“ Type-safe +}) +``` + +## Notes + +- The event client uses Node.js `EventEmitter` under the hood +- Maximum listeners is set to 100 by default to prevent warnings in observability scenarios +- Events are emitted for both streaming and non-streaming operations +- All events include a `timestamp` field for tracking and analysis diff --git a/ai-docs/EVENT_CLIENT_INTEGRATION.md b/ai-docs/EVENT_CLIENT_INTEGRATION.md index aa4bc9218..48fac56d4 100644 --- a/ai-docs/EVENT_CLIENT_INTEGRATION.md +++ b/ai-docs/EVENT_CLIENT_INTEGRATION.md @@ -1,101 +1,105 @@ -# AI Event Client Integration Example - -This example demonstrates how the event client automatically captures events from AI operations. 
- -## Usage - -```typescript -import { aiEventClient } from '@tanstack/ai/event-client'; -import { chat } from '@tanstack/ai'; -import { openai } from '@tanstack/ai-openai'; - -// Set up event listeners BEFORE making AI calls -aiEventClient.on('usage:tokens', (data) => { - console.log(`Tokens used: ${data.usage.totalTokens}`); -}); - -aiEventClient.on('stream:content', (data) => { - process.stdout.write(data.delta); -}); - -// Now make AI calls - events will be automatically emitted -const adapter = openai({ apiKey: process.env.OPENAI_API_KEY! }); - -const stream = chat({ - adapter, - model: 'gpt-4o', - messages: [{ role: 'user', content: 'Hello!' }], -}); - -for await (const chunk of stream) { - // Events are automatically emitted during streaming - // No need to manually emit anything -} -``` - -## How It Works - -1. The `aiEventClient` is a singleton EventEmitter that's automatically used by the AI core -2. When you call `chat()` or `chatCompletion()`, the AI core emits events to the client -3. Your event listeners receive these events in real-time -4. No configuration needed - just import and listen! 
- -## Event Flow - -``` -chat() called - ↓ -chat:started event - ↓ -stream:started event - ↓ -stream:content events (multiple) -stream:tool-call events (if tools used) -stream:done event - ↓ -usage:tokens event - ↓ -stream:ended event -``` - -## Common Patterns - -### Pattern 1: Real-time Content Display -```typescript -aiEventClient.on('stream:content', (data) => { - document.getElementById('output').textContent += data.delta; -}); -``` - -### Pattern 2: Token Usage Tracking -```typescript -let totalCost = 0; -aiEventClient.on('usage:tokens', (data) => { - const cost = data.usage.totalTokens * 0.00003; // Example cost - totalCost += cost; - console.log(`Cost: $${cost.toFixed(6)}, Total: $${totalCost.toFixed(6)}`); -}); -``` - -### Pattern 3: Error Handling -```typescript -aiEventClient.on('stream:error', (data) => { - console.error('AI Error:', data.error.message); - // Show error to user -}); -``` - -### Pattern 4: Tool Monitoring -```typescript -aiEventClient.on('tool:completed', (data) => { - console.log(`Tool ${data.toolName} completed in ${data.duration}ms`); -}); -``` - -## Benefits - -- āœ… **Zero Configuration**: Works automatically with all AI operations -- āœ… **Type-Safe**: Full TypeScript support with event type inference -- āœ… **Decoupled**: Observability doesn't affect your core AI logic -- āœ… **Flexible**: Subscribe to only the events you need -- āœ… **Performance**: Minimal overhead, designed for production use +# AI Event Client Integration Example + +This example demonstrates how the event client automatically captures events from AI operations. 
+ +## Usage + +```typescript +import { aiEventClient } from '@tanstack/ai/event-client' +import { chat } from '@tanstack/ai' +import { openai } from '@tanstack/ai-openai' + +// Set up event listeners BEFORE making AI calls +aiEventClient.on('usage:tokens', (data) => { + console.log(`Tokens used: ${data.usage.totalTokens}`) +}) + +aiEventClient.on('stream:content', (data) => { + process.stdout.write(data.delta) +}) + +// Now make AI calls - events will be automatically emitted +const adapter = openai({ apiKey: process.env.OPENAI_API_KEY! }) + +const stream = chat({ + adapter, + model: 'gpt-4o', + messages: [{ role: 'user', content: 'Hello!' }], +}) + +for await (const chunk of stream) { + // Events are automatically emitted during streaming + // No need to manually emit anything +} +``` + +## How It Works + +1. The `aiEventClient` is a singleton EventEmitter that's automatically used by the AI core +2. When you call `chat()` or `chatCompletion()`, the AI core emits events to the client +3. Your event listeners receive these events in real-time +4. No configuration needed - just import and listen! 
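As a rough illustration of step 1 above, here is a minimal sketch of how a singleton, typed event client can be built on Node's `EventEmitter`. The class name, event shapes, and method set are assumptions for illustration only, not the actual `@tanstack/ai` source:

```typescript
// Hypothetical sketch of a singleton, typed event client — illustrative only.
import { EventEmitter } from 'node:events'

// A couple of event shapes from the event tables (trimmed for brevity)
interface AIEventMap {
  'stream:content': { timestamp: number; delta: string }
  'usage:tokens': {
    timestamp: number
    model: string
    usage: { promptTokens: number; completionTokens: number; totalTokens: number }
  }
}

class AIEventClient {
  private emitter = new EventEmitter()

  constructor() {
    // Raised listener cap so many observers don't trigger MaxListeners warnings
    this.emitter.setMaxListeners(100)
  }

  on<K extends keyof AIEventMap>(event: K, listener: (data: AIEventMap[K]) => void): void {
    this.emitter.on(event, listener)
  }

  emit<K extends keyof AIEventMap>(event: K, data: AIEventMap[K]): void {
    this.emitter.emit(event, data)
  }
}

// Module-level singleton: every importer shares the same instance, which is
// why listeners registered anywhere receive events emitted by the core
const aiEventClient = new AIEventClient()
```

Because the instance is created at module scope, the AI core and your listeners automatically share it — no wiring code is needed on either side.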
+ +## Event Flow + +``` +chat() called + ↓ +chat:started event + ↓ +stream:started event + ↓ +stream:content events (multiple) +stream:tool-call events (if tools used) +stream:done event + ↓ +usage:tokens event + ↓ +stream:ended event +``` + +## Common Patterns + +### Pattern 1: Real-time Content Display + +```typescript +aiEventClient.on('stream:content', (data) => { + document.getElementById('output').textContent += data.delta +}) +``` + +### Pattern 2: Token Usage Tracking + +```typescript +let totalCost = 0 +aiEventClient.on('usage:tokens', (data) => { + const cost = data.usage.totalTokens * 0.00003 // Example cost + totalCost += cost + console.log(`Cost: $${cost.toFixed(6)}, Total: $${totalCost.toFixed(6)}`) +}) +``` + +### Pattern 3: Error Handling + +```typescript +aiEventClient.on('stream:error', (data) => { + console.error('AI Error:', data.error.message) + // Show error to user +}) +``` + +### Pattern 4: Tool Monitoring + +```typescript +aiEventClient.on('tool:completed', (data) => { + console.log(`Tool ${data.toolName} completed in ${data.duration}ms`) +}) +``` + +## Benefits + +- āœ… **Zero Configuration**: Works automatically with all AI operations +- āœ… **Type-Safe**: Full TypeScript support with event type inference +- āœ… **Decoupled**: Observability doesn't affect your core AI logic +- āœ… **Flexible**: Subscribe to only the events you need +- āœ… **Performance**: Minimal overhead, designed for production use diff --git a/ai-docs/IMPLEMENTATION_SUMMARY.md b/ai-docs/IMPLEMENTATION_SUMMARY.md index feccf9030..92abc0ef1 100644 --- a/ai-docs/IMPLEMENTATION_SUMMARY.md +++ b/ai-docs/IMPLEMENTATION_SUMMARY.md @@ -1,431 +1,435 @@ -# Implementation Summary: Type-Safe Multi-Adapter with Fallback - -## Overview - -This implementation adds two major features to the AI SDK: - -1. **Type-Safe Model Validation** - Models are validated against the selected adapter at compile-time -2. 
**Automatic Adapter Fallback** - Automatically tries multiple adapters in order when one fails - -## Key Features - -### āœ… Type Safety - -- Model names are validated based on the selected adapter -- Adapter names are type-checked -- Full IDE autocomplete support -- Compile-time error detection - -### āœ… Fallback System - -- Global fallback order configuration in constructor -- Per-request fallback order override -- Automatic retry on errors, rate limits, or service outages -- Detailed error reporting from all failed adapters -- Works with all methods (chat, stream, generate, summarize, embed) - -## Tool Execution Architecture - -The `chat()` method includes an automatic tool execution loop implemented via the `ToolCallManager` class: - -```typescript -// In AI.chat() method -const toolCallManager = new ToolCallManager(tools || []); - -while (iterationCount < maxIterations) { - // Stream chunks and accumulate tool calls - for await (const chunk of adapter.chatStream()) { - if (chunk.type === "tool_call") { - toolCallManager.addToolCallChunk(chunk); - } - } - - // Execute tools if model requested them - if (shouldExecuteTools && toolCallManager.hasToolCalls()) { - const toolResults = yield* toolCallManager.executeTools(doneChunk); - messages = [...messages, ...toolResults]; - continue; // Next iteration - } - - break; // No tools to execute, done -} -``` - -**ToolCallManager handles:** -- āœ… Accumulating streaming tool call chunks -- āœ… Validating tool calls (ID and name present) -- āœ… Executing tool `execute` functions -- āœ… Yielding `tool_result` chunks -- āœ… Creating tool result messages - -## Architecture - -### Type System - -```typescript -// Adapter map with typed models -type AdapterMap = Record>; - -// Extract model types from adapter -type ExtractModels = T extends AIAdapter ? 
M[number] : string; - -// Single adapter mode: strict model validation -type ChatOptionsWithAdapter = { - adapter: K; - model: ExtractModels; // Models for this adapter only -}; - -// Fallback mode: union of all models -type ChatOptionsWithFallback = { - adapters: ReadonlyArray; - model: UnionOfModels; // Models from any adapter -}; -``` - -### Core Components - -1. **BaseAdapter** - Abstract class with generic model list -2. **AIAdapter Interface** - Includes `models` property with generic type -3. **AI Class** - Main class with fallback logic and tool execution loop -4. **ToolCallManager** - Handles tool call accumulation, validation, and execution -5. **Adapter Implementations** - OpenAI, Anthropic, Gemini, Ollama with model lists - -### Fallback Logic - -```typescript -private async tryWithFallback( - adapters: ReadonlyArray, - operation: (adapter: keyof T & string) => Promise, - operationName: string -): Promise { - const errors: Array<{ adapter: string; error: Error }> = []; - - for (const adapterName of adapters) { - try { - return await operation(adapterName); // Try operation - } catch (error: any) { - errors.push({ adapter: adapterName, error }); // Record error - console.warn(`[AI] Adapter "${adapterName}" failed for ${operationName}`); - } - } - - // All failed - throw comprehensive error - throw new Error(`All adapters failed for ${operationName}:\n${errorDetails}`); -} -``` - -## API Design - -### Constructor - -```typescript -const ai = new AI({ - adapters: { - primary: new OpenAIAdapter({ apiKey: "..." }), - secondary: new AnthropicAdapter({ apiKey: "..." 
}), - }, - fallbackOrder: ["primary", "secondary"], // Optional global order -}); -``` - -### Single Adapter Mode (Strict Type Safety) - -```typescript -await ai.chat({ - adapter: "primary", // Type-safe: must exist in adapters - model: "gpt-4", // Type-safe: must be valid for primary - messages: [...], -}); -``` - -### Fallback Mode (Automatic Retry) - -```typescript -await ai.chat({ - adapters: ["primary", "secondary"], // Type-safe: all must exist - model: "gpt-4", // Must work with at least one adapter - messages: [...], -}); -``` - -## Files Modified - -### Core Package (`packages/ai/src/`) - -- **`ai.ts`** - Main AI class with fallback logic -- **`base-adapter.ts`** - Added generic models property -- **`types.ts`** - Added models to AIAdapter interface - -### Adapter Packages - -- **`packages/ai-openai/src/openai-adapter.ts`** - Added OpenAI model list -- **`packages/ai-anthropic/src/anthropic-adapter.ts`** - Added Anthropic model list -- **`packages/ai-gemini/src/gemini-adapter.ts`** - Added Gemini model list -- **`packages/ai-ollama/src/ollama-adapter.ts`** - Added Ollama model list - -### Documentation - -- **`docs/TYPE_SAFETY.md`** - Complete type safety guide -- **`docs/ADAPTER_FALLBACK.md`** - Complete fallback guide -- **`docs/QUICK_START.md`** - Quick reference for both features - -### Examples - -- **`examples/type-safety-demo.ts`** - Type safety examples -- **`examples/visual-error-examples.ts`** - Shows exact TypeScript errors -- **`examples/model-safety-demo.ts`** - Comprehensive type safety examples -- **`examples/adapter-fallback-demo.ts`** - Comprehensive fallback examples -- **`examples/all-adapters-type-safety.ts`** - All adapters together - -## Usage Examples - -### Example 1: Type Safety Only - -```typescript -const ai = new AI({ - adapters: { - openai: new OpenAIAdapter({ apiKey: "..." 
}), - }, -}); - -// āœ… Valid -await ai.chat({ adapter: "openai", model: "gpt-4", messages: [] }); - -// āŒ TypeScript Error -await ai.chat({ adapter: "openai", model: "claude-3", messages: [] }); -``` - -### Example 2: Fallback Only - -```typescript -const ai = new AI({ - adapters: { - primary: new OpenAIAdapter({ apiKey: "..." }), - backup: new AnthropicAdapter({ apiKey: "..." }), - }, - fallbackOrder: ["primary", "backup"], -}); - -// Automatically tries backup if primary fails -await ai.chat({ adapters: [], model: "gpt-4", messages: [] }); -``` - -### Example 3: Combined Usage - -```typescript -const ai = new AI({ - adapters: { - fast: new OpenAIAdapter({ apiKey: "..." }), - reliable: new AnthropicAdapter({ apiKey: "..." }), - }, - fallbackOrder: ["fast", "reliable"], -}); - -// Single adapter: strict type safety -await ai.chat({ - adapter: "fast", - model: "gpt-4", // āœ… Validated against fast adapter - messages: [] -}); - -// Fallback mode: automatic retry -await ai.chat({ - adapters: ["fast", "reliable"], - model: "gpt-4", // āš ļø Less strict, but has fallback - messages: [] -}); -``` - -## Benefits - -### For Developers - -1. **Catch Errors Early** - Model mismatches caught at compile-time, not runtime -2. **Better IDE Experience** - Autocomplete shows only valid models per adapter -3. **Refactoring Safety** - Changing adapters immediately shows model incompatibilities -4. **Self-Documenting** - Types show exactly what's available - -### For Applications - -1. **Higher Reliability** - Automatic failover on service outages -2. **Rate Limit Protection** - Seamlessly switch to backup on rate limits -3. **Cost Optimization** - Try cheaper options first, fall back to expensive ones -4. 
**Better Observability** - Detailed error logs from all failed attempts - -## Trade-offs - -### Type Safety vs Flexibility - -- **Single adapter mode**: Maximum type safety, no fallback -- **Fallback mode**: Less strict types, automatic retry - -**Recommendation**: Use single adapter mode when possible, fallback mode when reliability is critical. - -### Model Compatibility - -In fallback mode, TypeScript allows any model from any adapter. This is necessary for flexibility but means you must ensure the model works with at least one adapter in your list. - -**Solution**: Define model mappings per adapter for strict control. - -## Migration Path - -### Existing Code (Single Adapter) - -```typescript -// Before -const ai = new AI(new OpenAIAdapter({ apiKey: "..." })); -await ai.chat("gpt-4", messages); - -// After (backwards compatible) -const ai = new AI({ - adapters: { openai: new OpenAIAdapter({ apiKey: "..." }) } -}); -await ai.chat({ adapter: "openai", model: "gpt-4", messages }); -``` - -### Adding Fallback - -```typescript -// Step 1: Add more adapters -const ai = new AI({ - adapters: { - openai: new OpenAIAdapter({ apiKey: "..." }), - anthropic: new AnthropicAdapter({ apiKey: "..." }), // New! - }, -}); - -// Step 2: Use fallback mode -await ai.chat({ - adapters: ["openai", "anthropic"], // Fallback enabled - model: "gpt-4", - messages: [], -}); - -// Step 3: Configure global fallback (optional) -const ai = new AI({ - adapters: { /* ... 
*/ }, - fallbackOrder: ["openai", "anthropic"], // Global default -}); - -await ai.chat({ adapters: [], model: "gpt-4", messages: [] }); -``` - -## Testing Recommendations - -### Test Type Safety - -```typescript -// These should NOT compile -ai.chat({ adapter: "openai", model: "claude-3", messages: [] }); // āŒ -ai.chat({ adapter: "invalid", model: "gpt-4", messages: [] }); // āŒ -ai.chat({ adapter: "openai", model: "gpt-5", messages: [] }); // āŒ -``` - -### Test Fallback Behavior - -```typescript -// Mock adapters to simulate failures -const mockAdapter1 = { - chatCompletion: jest.fn().mockRejectedValue(new Error("Rate limit")), -}; -const mockAdapter2 = { - chatCompletion: jest.fn().mockResolvedValue({ content: "Success" }), -}; - -const ai = new AI({ - adapters: { first: mockAdapter1, second: mockAdapter2 }, - fallbackOrder: ["first", "second"], -}); - -// Should try first, fail, then succeed with second -await ai.chat({ adapters: [], model: "gpt-4", messages: [] }); - -expect(mockAdapter1.chatCompletion).toHaveBeenCalled(); -expect(mockAdapter2.chatCompletion).toHaveBeenCalled(); -``` - -### Test ToolCallManager - -The `ToolCallManager` class has comprehensive unit tests: - -```bash -cd packages/ai -pnpm test -``` - -Test coverage includes: -- āœ… Accumulating streaming tool call chunks (name, arguments) -- āœ… Filtering incomplete tool calls (missing ID or name) -- āœ… Executing tools with parsed arguments -- āœ… Handling tool execution errors gracefully -- āœ… Handling tools without execute functions -- āœ… Multiple tool calls in one iteration -- āœ… Clearing state between iterations -- āœ… Emitting tool_result chunks -- āœ… Creating tool result messages - -See `packages/ai/src/tool-call-manager.test.ts` for implementation. 
- -## Performance Considerations - -### Single Adapter Mode - -- **No overhead** - Direct call to adapter -- **Fast failure** - Error thrown immediately - -### Fallback Mode - -- **Sequential retry** - Tries each adapter in order -- **Additional latency** - On failure, waits for timeout before trying next -- **More robust** - Higher chance of success - -**Recommendation**: Use single adapter mode for performance-critical paths, fallback mode for user-facing features where reliability matters. - -## Future Enhancements - -### Possible Additions - -1. **Parallel fallback** - Try multiple adapters simultaneously -2. **Smart routing** - Choose adapter based on request characteristics -3. **Caching** - Remember which adapter succeeded for similar requests -4. **Circuit breaker** - Skip known-failing adapters temporarily -5. **Metrics** - Track success rate, latency per adapter -6. **Weighted fallback** - Probabilistic adapter selection - -### Extensibility - -The system is designed to be extended: - -```typescript -// Custom adapter with type-safe models -const MY_MODELS = ["model-1", "model-2"] as const; - -class MyAdapter extends BaseAdapter { - name = "my-adapter"; - models = MY_MODELS; - // ... implement methods -} - -// Use with full type safety -const ai = new AI({ - adapters: { mine: new MyAdapter() }, -}); - -await ai.chat({ - adapter: "mine", - model: "model-1", // āœ… Type-safe - messages: [], -}); -``` - -## Conclusion - -This implementation provides: - -- āœ… **Compile-time safety** for model selection -- āœ… **Runtime reliability** with automatic fallback -- āœ… **Developer experience** improvements (autocomplete, error messages) -- āœ… **Production readiness** (error handling, logging) -- āœ… **Extensibility** for future enhancements - -The combination of type safety and fallback makes the SDK both safer and more reliable, suitable for production use cases where uptime and correctness are critical. 
+# Implementation Summary: Type-Safe Multi-Adapter with Fallback + +## Overview + +This implementation adds two major features to the AI SDK: + +1. **Type-Safe Model Validation** - Models are validated against the selected adapter at compile-time +2. **Automatic Adapter Fallback** - Automatically tries multiple adapters in order when one fails + +## Key Features + +### āœ… Type Safety + +- Model names are validated based on the selected adapter +- Adapter names are type-checked +- Full IDE autocomplete support +- Compile-time error detection + +### āœ… Fallback System + +- Global fallback order configuration in constructor +- Per-request fallback order override +- Automatic retry on errors, rate limits, or service outages +- Detailed error reporting from all failed adapters +- Works with all methods (chat, stream, generate, summarize, embed) + +## Tool Execution Architecture + +The `chat()` method includes an automatic tool execution loop implemented via the `ToolCallManager` class: + +```typescript +// In AI.chat() method +const toolCallManager = new ToolCallManager(tools || []) + +while (iterationCount < maxIterations) { + // Stream chunks and accumulate tool calls + for await (const chunk of adapter.chatStream()) { + if (chunk.type === 'tool_call') { + toolCallManager.addToolCallChunk(chunk) + } + } + + // Execute tools if model requested them + if (shouldExecuteTools && toolCallManager.hasToolCalls()) { + const toolResults = yield * toolCallManager.executeTools(doneChunk) + messages = [...messages, ...toolResults] + continue // Next iteration + } + + break // No tools to execute, done +} +``` + +**ToolCallManager handles:** + +- āœ… Accumulating streaming tool call chunks +- āœ… Validating tool calls (ID and name present) +- āœ… Executing tool `execute` functions +- āœ… Yielding `tool_result` chunks +- āœ… Creating tool result messages + +## Architecture + +### Type System + +```typescript +// Adapter map with typed models +type AdapterMap = Record<string, AIAdapter<any>> + +// 
Extract model types from adapter +type ExtractModels<T> = T extends AIAdapter<infer M> ? M[number] : string + +// Single adapter mode: strict model validation +type ChatOptionsWithAdapter<T extends AdapterMap, K extends keyof T> = { + adapter: K + model: ExtractModels<T[K]> // Models for this adapter only +} + +// Fallback mode: union of all models +type ChatOptionsWithFallback<T extends AdapterMap> = { + adapters: ReadonlyArray<keyof T> + model: UnionOfModels<T> // Models from any adapter +} +``` + +### Core Components + +1. **BaseAdapter** - Abstract class with generic model list +2. **AIAdapter Interface** - Includes `models` property with generic type +3. **AI Class** - Main class with fallback logic and tool execution loop +4. **ToolCallManager** - Handles tool call accumulation, validation, and execution +5. **Adapter Implementations** - OpenAI, Anthropic, Gemini, Ollama with model lists + +### Fallback Logic + +```typescript +private async tryWithFallback<R>( + adapters: ReadonlyArray<keyof T & string>, + operation: (adapter: keyof T & string) => Promise<R>, + operationName: string +): Promise<R> { + const errors: Array<{ adapter: string; error: Error }> = []; + + for (const adapterName of adapters) { + try { + return await operation(adapterName); // Try operation + } catch (error: any) { + errors.push({ adapter: adapterName, error }); // Record error + console.warn(`[AI] Adapter "${adapterName}" failed for ${operationName}`); + } + } + + // All failed - throw comprehensive error + throw new Error(`All adapters failed for ${operationName}:\n${errorDetails}`); +} +``` + +## API Design + +### Constructor + +```typescript +const ai = new AI({ + adapters: { + primary: new OpenAIAdapter({ apiKey: '...' }), + secondary: new AnthropicAdapter({ apiKey: '...' 
}), + }, + fallbackOrder: ['primary', 'secondary'], // Optional global order +}) +``` + +### Single Adapter Mode (Strict Type Safety) + +```typescript +await ai.chat({ + adapter: "primary", // Type-safe: must exist in adapters + model: "gpt-4", // Type-safe: must be valid for primary + messages: [...], +}); +``` + +### Fallback Mode (Automatic Retry) + +```typescript +await ai.chat({ + adapters: ["primary", "secondary"], // Type-safe: all must exist + model: "gpt-4", // Must work with at least one adapter + messages: [...], +}); +``` + +## Files Modified + +### Core Package (`packages/ai/src/`) + +- **`ai.ts`** - Main AI class with fallback logic +- **`base-adapter.ts`** - Added generic models property +- **`types.ts`** - Added models to AIAdapter interface + +### Adapter Packages + +- **`packages/ai-openai/src/openai-adapter.ts`** - Added OpenAI model list +- **`packages/ai-anthropic/src/anthropic-adapter.ts`** - Added Anthropic model list +- **`packages/ai-gemini/src/gemini-adapter.ts`** - Added Gemini model list +- **`packages/ai-ollama/src/ollama-adapter.ts`** - Added Ollama model list + +### Documentation + +- **`docs/TYPE_SAFETY.md`** - Complete type safety guide +- **`docs/ADAPTER_FALLBACK.md`** - Complete fallback guide +- **`docs/QUICK_START.md`** - Quick reference for both features + +### Examples + +- **`examples/type-safety-demo.ts`** - Type safety examples +- **`examples/visual-error-examples.ts`** - Shows exact TypeScript errors +- **`examples/model-safety-demo.ts`** - Comprehensive type safety examples +- **`examples/adapter-fallback-demo.ts`** - Comprehensive fallback examples +- **`examples/all-adapters-type-safety.ts`** - All adapters together + +## Usage Examples + +### Example 1: Type Safety Only + +```typescript +const ai = new AI({ + adapters: { + openai: new OpenAIAdapter({ apiKey: '...' 
}), + }, +}) + +// āœ… Valid +await ai.chat({ adapter: 'openai', model: 'gpt-4', messages: [] }) + +// āŒ TypeScript Error +await ai.chat({ adapter: 'openai', model: 'claude-3', messages: [] }) +``` + +### Example 2: Fallback Only + +```typescript +const ai = new AI({ + adapters: { + primary: new OpenAIAdapter({ apiKey: '...' }), + backup: new AnthropicAdapter({ apiKey: '...' }), + }, + fallbackOrder: ['primary', 'backup'], +}) + +// Automatically tries backup if primary fails +await ai.chat({ adapters: [], model: 'gpt-4', messages: [] }) +``` + +### Example 3: Combined Usage + +```typescript +const ai = new AI({ + adapters: { + fast: new OpenAIAdapter({ apiKey: '...' }), + reliable: new AnthropicAdapter({ apiKey: '...' }), + }, + fallbackOrder: ['fast', 'reliable'], +}) + +// Single adapter: strict type safety +await ai.chat({ + adapter: 'fast', + model: 'gpt-4', // āœ… Validated against fast adapter + messages: [], +}) + +// Fallback mode: automatic retry +await ai.chat({ + adapters: ['fast', 'reliable'], + model: 'gpt-4', // āš ļø Less strict, but has fallback + messages: [], +}) +``` + +## Benefits + +### For Developers + +1. **Catch Errors Early** - Model mismatches caught at compile-time, not runtime +2. **Better IDE Experience** - Autocomplete shows only valid models per adapter +3. **Refactoring Safety** - Changing adapters immediately shows model incompatibilities +4. **Self-Documenting** - Types show exactly what's available + +### For Applications + +1. **Higher Reliability** - Automatic failover on service outages +2. **Rate Limit Protection** - Seamlessly switch to backup on rate limits +3. **Cost Optimization** - Try cheaper options first, fall back to expensive ones +4. 
**Better Observability** - Detailed error logs from all failed attempts + +## Trade-offs + +### Type Safety vs Flexibility + +- **Single adapter mode**: Maximum type safety, no fallback +- **Fallback mode**: Less strict types, automatic retry + +**Recommendation**: Use single adapter mode when possible, fallback mode when reliability is critical. + +### Model Compatibility + +In fallback mode, TypeScript allows any model from any adapter. This is necessary for flexibility but means you must ensure the model works with at least one adapter in your list. + +**Solution**: Define model mappings per adapter for strict control. + +## Migration Path + +### Existing Code (Single Adapter) + +```typescript +// Before +const ai = new AI(new OpenAIAdapter({ apiKey: '...' })) +await ai.chat('gpt-4', messages) + +// After (backwards compatible) +const ai = new AI({ + adapters: { openai: new OpenAIAdapter({ apiKey: '...' }) }, +}) +await ai.chat({ adapter: 'openai', model: 'gpt-4', messages }) +``` + +### Adding Fallback + +```typescript +// Step 1: Add more adapters +const ai = new AI({ + adapters: { + openai: new OpenAIAdapter({ apiKey: '...' }), + anthropic: new AnthropicAdapter({ apiKey: '...' }), // New! + }, +}) + +// Step 2: Use fallback mode +await ai.chat({ + adapters: ['openai', 'anthropic'], // Fallback enabled + model: 'gpt-4', + messages: [], +}) + +// Step 3: Configure global fallback (optional) +const ai = new AI({ + adapters: { + /* ... 
*/ + }, + fallbackOrder: ['openai', 'anthropic'], // Global default +}) + +await ai.chat({ adapters: [], model: 'gpt-4', messages: [] }) +``` + +## Testing Recommendations + +### Test Type Safety + +```typescript +// These should NOT compile +ai.chat({ adapter: 'openai', model: 'claude-3', messages: [] }) // āŒ +ai.chat({ adapter: 'invalid', model: 'gpt-4', messages: [] }) // āŒ +ai.chat({ adapter: 'openai', model: 'gpt-5', messages: [] }) // āŒ +``` + +### Test Fallback Behavior + +```typescript +// Mock adapters to simulate failures +const mockAdapter1 = { + chatCompletion: jest.fn().mockRejectedValue(new Error('Rate limit')), +} +const mockAdapter2 = { + chatCompletion: jest.fn().mockResolvedValue({ content: 'Success' }), +} + +const ai = new AI({ + adapters: { first: mockAdapter1, second: mockAdapter2 }, + fallbackOrder: ['first', 'second'], +}) + +// Should try first, fail, then succeed with second +await ai.chat({ adapters: [], model: 'gpt-4', messages: [] }) + +expect(mockAdapter1.chatCompletion).toHaveBeenCalled() +expect(mockAdapter2.chatCompletion).toHaveBeenCalled() +``` + +### Test ToolCallManager + +The `ToolCallManager` class has comprehensive unit tests: + +```bash +cd packages/ai +pnpm test +``` + +Test coverage includes: + +- āœ… Accumulating streaming tool call chunks (name, arguments) +- āœ… Filtering incomplete tool calls (missing ID or name) +- āœ… Executing tools with parsed arguments +- āœ… Handling tool execution errors gracefully +- āœ… Handling tools without execute functions +- āœ… Multiple tool calls in one iteration +- āœ… Clearing state between iterations +- āœ… Emitting tool_result chunks +- āœ… Creating tool result messages + +See `packages/ai/src/tool-call-manager.test.ts` for implementation. 
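The accumulation and validation behaviors covered above can be sketched with a toy accumulator. This is a hypothetical stand-in (`ToolCallAccumulator` and `ToolCallChunk` are invented names), not the real `ToolCallManager` API:

```typescript
// Hypothetical accumulator mirroring the behaviors listed above — the real
// ToolCallManager lives in packages/ai and may differ in its details.
interface ToolCallChunk {
  id?: string
  name?: string
  argumentsDelta?: string
}

class ToolCallAccumulator {
  private calls = new Map<string, { name?: string; args: string }>()

  // Accumulate streamed fragments: the name arrives once, arguments in pieces
  add(chunk: ToolCallChunk): void {
    if (!chunk.id) return // chunks without an ID can't be correlated
    const entry = this.calls.get(chunk.id) ?? { args: '' }
    if (chunk.name) entry.name = chunk.name
    if (chunk.argumentsDelta) entry.args += chunk.argumentsDelta
    this.calls.set(chunk.id, entry)
  }

  // Only tool calls with both an ID and a name count as complete
  complete(): Array<{ id: string; name: string; args: unknown }> {
    return [...this.calls.entries()]
      .filter(([, c]) => c.name !== undefined)
      .map(([id, c]) => ({ id, name: c.name!, args: JSON.parse(c.args || '{}') }))
  }

  // Reset state between tool-execution iterations
  clear(): void {
    this.calls.clear()
  }
}
```

Incomplete chunks are silently dropped rather than thrown on, matching the "filtering incomplete tool calls" behavior in the coverage list.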
+ +## Performance Considerations + +### Single Adapter Mode + +- **No overhead** - Direct call to adapter +- **Fast failure** - Error thrown immediately + +### Fallback Mode + +- **Sequential retry** - Tries each adapter in order +- **Additional latency** - On failure, waits for each attempt to fail or time out before trying the next adapter +- **More robust** - Higher chance of success + +**Recommendation**: Use single adapter mode for performance-critical paths, fallback mode for user-facing features where reliability matters. + +## Future Enhancements + +### Possible Additions + +1. **Parallel fallback** - Try multiple adapters simultaneously +2. **Smart routing** - Choose adapter based on request characteristics +3. **Caching** - Remember which adapter succeeded for similar requests +4. **Circuit breaker** - Skip known-failing adapters temporarily +5. **Metrics** - Track success rate, latency per adapter +6. **Weighted fallback** - Probabilistic adapter selection + +### Extensibility + +The system is designed to be extended: + +```typescript +// Custom adapter with type-safe models +const MY_MODELS = ['model-1', 'model-2'] as const + +class MyAdapter extends BaseAdapter<typeof MY_MODELS> { + name = 'my-adapter' + models = MY_MODELS + // ... implement methods +} + +// Use with full type safety +const ai = new AI({ + adapters: { mine: new MyAdapter() }, +}) + +await ai.chat({ + adapter: 'mine', + model: 'model-1', // āœ… Type-safe + messages: [], +}) +``` + +## Conclusion + +This implementation provides: + +- āœ… **Compile-time safety** for model selection +- āœ… **Runtime reliability** with automatic fallback +- āœ… **Developer experience** improvements (autocomplete, error messages) +- āœ… **Production readiness** (error handling, logging) +- āœ… **Extensibility** for future enhancements + +The combination of type safety and fallback makes the SDK both safer and more reliable, suitable for production use cases where uptime and correctness are critical. 
diff --git a/ai-docs/MIGRATION_UNIFIED_CHAT.md b/ai-docs/MIGRATION_UNIFIED_CHAT.md index 5ac102140..23e87d965 100644 --- a/ai-docs/MIGRATION_UNIFIED_CHAT.md +++ b/ai-docs/MIGRATION_UNIFIED_CHAT.md @@ -1,233 +1,237 @@ -# Migration Guide: From `as` Option to Separate Methods - -## Overview - -The `as` option has been removed from the `chat()` method. Instead, use: -- **`chat()`** - For streaming (returns `AsyncIterable`) -- **`chatCompletion()`** - For promise-based completion (returns `Promise`) - -## Migration Examples - -### Before (Using `as` option) - -```typescript -import { createAPIFileRoute } from "@tanstack/start/api"; -import { ai } from "~/lib/ai-client"; - -export const Route = createAPIFileRoute("/api/tanchat")({ - POST: async ({ request }) => { - const { messages, tools } = await request.json(); - - // Old way: Using as: "response" - return ai.chat({ - model: "gpt-4o", - adapter: "openAi", - fallbacks: [ - { - adapter: "ollama", - model: "gpt-oss:20b" - } - ], - messages: allMessages, - temperature: 0.7, - toolChoice: "auto", - maxIterations: 5, - as: "response", // ← Old way - }); - } -}); -``` - -### After (Using separate methods) - -```typescript -import { createAPIFileRoute } from "@tanstack/start/api"; -import { ai } from "~/lib/ai-client"; -import { toStreamResponse } from "@tanstack/ai"; - -export const Route = createAPIFileRoute("/api/tanchat")({ - POST: async ({ request }) => { - const { messages, tools } = await request.json(); - - // New way: Use chat() + toStreamResponse() - const stream = ai.chat({ - model: "gpt-4o", - adapter: "openAi", - fallbacks: [ - { - adapter: "ollama", - model: "gpt-oss:20b" - } - ], - messages: allMessages, - temperature: 0.7, - toolChoice: "auto", - maxIterations: 5, - }); - - return toStreamResponse(stream); - } -}); -``` - -## Key Changes - -1. **Removed**: `as: "response"` option -2. **Changed**: `chat()` now always returns `AsyncIterable` -3. **Added**: `chatCompletion()` method for promise-based calls -4. 
**Added**: Import `toStreamResponse()` helper for HTTP responses - -## Migration Patterns - -### Pattern 1: Non-streaming (Promise mode) - -**Before:** -```typescript -const result = await ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [], - as: "promise", // or omit - it was the default -}); -``` - -**After:** -```typescript -const result = await ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - messages: [], -}); -``` - -### Pattern 2: Streaming - -**Before:** -```typescript -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [], - as: "stream", -}); -``` - -**After:** -```typescript -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [], -}); -// No as option needed - chat() is now streaming-only -``` - -### Pattern 3: HTTP Response - -**Before:** -```typescript -return ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [], - as: "response", -}); -``` - -**After:** -```typescript -import { toStreamResponse } from "@tanstack/ai"; - -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [], -}); - -return toStreamResponse(stream); -``` - -## Complete Example - -Here's a complete updated file: - -```typescript -import { createAPIFileRoute } from "@tanstack/start/api"; -import { ai } from "~/lib/ai-client"; -import { toStreamResponse } from "@tanstack/ai"; - -const SYSTEM_PROMPT = `You are a helpful AI assistant...`; - -export const Route = createAPIFileRoute("/api/tanchat")({ - POST: async ({ request }) => { - try { - const body = await request.json(); - const { messages, tools } = body; - - const allMessages = tools - ? 
messages - : [{ role: "system", content: SYSTEM_PROMPT }, ...messages]; - - // Use chat() for streaming, then convert to Response - const stream = ai.chat({ - adapter: "openAi", - model: "gpt-4o", - messages: allMessages, - temperature: 0.7, - tools, - toolChoice: "auto", - maxIterations: 5, - fallbacks: [ - { - adapter: "ollama", - model: "gpt-oss:20b" - } - ] - }); - - return toStreamResponse(stream); - } catch (error: any) { - return new Response( - JSON.stringify({ error: error.message }), - { - status: 500, - headers: { "Content-Type": "application/json" } - } - ); - } - } -}); -``` - -## Benefits - -āœ… **Simpler code**: Clearer intent with separate methods -āœ… **Same functionality**: Still returns SSE-formatted Response -āœ… **Same fallback behavior**: OpenAI → Ollama failover still works -āœ… **Same tool execution**: Tools are still executed automatically -āœ… **Type-safe**: TypeScript knows exact return types -āœ… **Better naming**: `chatCompletion()` clearly indicates promise-based completion - -## Testing - -The client-side code doesn't need any changes! It still consumes the SSE stream the same way: - -```typescript -const response = await fetch("/api/tanchat", { - method: "POST", - body: JSON.stringify({ messages, tools }) -}); - -const reader = response.body!.getReader(); -const decoder = new TextDecoder(); - -while (true) { - const { done, value } = await reader.read(); - if (done) break; - - const text = decoder.decode(value); - // Parse SSE format and handle chunks -} -``` - -Everything works exactly the same, just with a cleaner API! šŸŽ‰ +# Migration Guide: From `as` Option to Separate Methods + +## Overview + +The `as` option has been removed from the `chat()` method. 
Instead, use: + +- **`chat()`** - For streaming (returns `AsyncIterable`) +- **`chatCompletion()`** - For promise-based completion (returns `Promise`) + +## Migration Examples + +### Before (Using `as` option) + +```typescript +import { createAPIFileRoute } from '@tanstack/start/api' +import { ai } from '~/lib/ai-client' + +export const Route = createAPIFileRoute('/api/tanchat')({ + POST: async ({ request }) => { + const { messages, tools } = await request.json() + + // Old way: Using as: "response" + return ai.chat({ + model: 'gpt-4o', + adapter: 'openAi', + fallbacks: [ + { + adapter: 'ollama', + model: 'gpt-oss:20b', + }, + ], + messages, + temperature: 0.7, + toolChoice: 'auto', + maxIterations: 5, + as: 'response', // ← Old way + }) + }, +}) +``` + +### After (Using separate methods) + +```typescript +import { createAPIFileRoute } from '@tanstack/start/api' +import { ai } from '~/lib/ai-client' +import { toStreamResponse } from '@tanstack/ai' + +export const Route = createAPIFileRoute('/api/tanchat')({ + POST: async ({ request }) => { + const { messages, tools } = await request.json() + + // New way: Use chat() + toStreamResponse() + const stream = ai.chat({ + model: 'gpt-4o', + adapter: 'openAi', + fallbacks: [ + { + adapter: 'ollama', + model: 'gpt-oss:20b', + }, + ], + messages, + temperature: 0.7, + toolChoice: 'auto', + maxIterations: 5, + }) + + return toStreamResponse(stream) + }, +}) +``` + +## Key Changes + +1. **Removed**: `as: "response"` option +2. **Changed**: `chat()` now always returns `AsyncIterable` +3. **Added**: `chatCompletion()` method for promise-based calls +4. 
**Added**: Import `toStreamResponse()` helper for HTTP responses + +## Migration Patterns + +### Pattern 1: Non-streaming (Promise mode) + +**Before:** + +```typescript +const result = await ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [], + as: 'promise', // or omit - it was the default +}) +``` + +**After:** + +```typescript +const result = await ai.chatCompletion({ + adapter: 'openai', + model: 'gpt-4', + messages: [], +}) +``` + +### Pattern 2: Streaming + +**Before:** + +```typescript +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [], + as: 'stream', +}) +``` + +**After:** + +```typescript +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [], +}) +// No as option needed - chat() is now streaming-only +``` + +### Pattern 3: HTTP Response + +**Before:** + +```typescript +return ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [], + as: 'response', +}) +``` + +**After:** + +```typescript +import { toStreamResponse } from '@tanstack/ai' + +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [], +}) + +return toStreamResponse(stream) +``` + +## Complete Example + +Here's a complete updated file: + +```typescript +import { createAPIFileRoute } from '@tanstack/start/api' +import { ai } from '~/lib/ai-client' +import { toStreamResponse } from '@tanstack/ai' + +const SYSTEM_PROMPT = `You are a helpful AI assistant...` + +export const Route = createAPIFileRoute('/api/tanchat')({ + POST: async ({ request }) => { + try { + const body = await request.json() + const { messages, tools } = body + + const allMessages = tools + ? 
messages + : [{ role: 'system', content: SYSTEM_PROMPT }, ...messages] + + // Use chat() for streaming, then convert to Response + const stream = ai.chat({ + adapter: 'openAi', + model: 'gpt-4o', + messages: allMessages, + temperature: 0.7, + tools, + toolChoice: 'auto', + maxIterations: 5, + fallbacks: [ + { + adapter: 'ollama', + model: 'gpt-oss:20b', + }, + ], + }) + + return toStreamResponse(stream) + } catch (error: any) { + return new Response(JSON.stringify({ error: error.message }), { + status: 500, + headers: { 'Content-Type': 'application/json' }, + }) + } + }, +}) +``` + +## Benefits + +āœ… **Simpler code**: Clearer intent with separate methods +āœ… **Same functionality**: Still returns SSE-formatted Response +āœ… **Same fallback behavior**: OpenAI → Ollama failover still works +āœ… **Same tool execution**: Tools are still executed automatically +āœ… **Type-safe**: TypeScript knows exact return types +āœ… **Better naming**: `chatCompletion()` clearly indicates promise-based completion + +## Testing + +The client-side code doesn't need any changes! It still consumes the SSE stream the same way: + +```typescript +const response = await fetch('/api/tanchat', { + method: 'POST', + body: JSON.stringify({ messages, tools }), +}) + +const reader = response.body!.getReader() +const decoder = new TextDecoder() + +while (true) { + const { done, value } = await reader.read() + if (done) break + + const text = decoder.decode(value) + // Parse SSE format and handle chunks +} +``` + +Everything works exactly the same, just with a cleaner API! šŸŽ‰ diff --git a/ai-docs/TOOL_EXECUTION_LOOP.md b/ai-docs/TOOL_EXECUTION_LOOP.md index 9bf3c336b..d7fb04b0c 100644 --- a/ai-docs/TOOL_EXECUTION_LOOP.md +++ b/ai-docs/TOOL_EXECUTION_LOOP.md @@ -55,15 +55,15 @@ Done! 
```typescript for await (const chunk of stream) { - if (chunk.type === "content") { + if (chunk.type === 'content') { // Display text to user - console.log(chunk.delta); - } else if (chunk.type === "tool_call") { + console.log(chunk.delta) + } else if (chunk.type === 'tool_call') { // Show that a tool is being called - console.log(`Calling: ${chunk.toolCall.function.name}`); - } else if (chunk.type === "tool_result") { + console.log(`Calling: ${chunk.toolCall.function.name}`) + } else if (chunk.type === 'tool_result') { // Show the tool result - console.log(`Result: ${chunk.content}`); + console.log(`Result: ${chunk.content}`) } } ``` @@ -79,78 +79,78 @@ for await (const chunk of stream) { ## Complete Example ```typescript -import { chat, tool } from "@tanstack/ai"; -import { openai } from "@tanstack/ai-openai"; +import { chat, tool } from '@tanstack/ai' +import { openai } from '@tanstack/ai-openai' // Define tools with execute functions const tools = [ tool({ - type: "function", + type: 'function', function: { - name: "get_weather", - description: "Get current weather for a location", + name: 'get_weather', + description: 'Get current weather for a location', parameters: { - type: "object", + type: 'object', properties: { - location: { type: "string" }, - unit: { type: "string", enum: ["celsius", "fahrenheit"] }, + location: { type: 'string' }, + unit: { type: 'string', enum: ['celsius', 'fahrenheit'] }, }, - required: ["location"], + required: ['location'], }, }, execute: async (args) => { // This is called automatically by the SDK - const weather = await fetchWeatherAPI(args.location); + const weather = await fetchWeatherAPI(args.location) return JSON.stringify({ temperature: weather.temp, conditions: weather.conditions, - unit: args.unit || "celsius", - }); + unit: args.unit || 'celsius', + }) }, }), tool({ - type: "function", + type: 'function', function: { - name: "calculate", - description: "Perform mathematical calculations", + name: 'calculate', + 
description: 'Perform mathematical calculations', parameters: { - type: "object", + type: 'object', properties: { - expression: { type: "string" }, + expression: { type: 'string' }, }, - required: ["expression"], + required: ['expression'], }, }, execute: async (args) => { // This is called automatically by the SDK - const result = evaluateExpression(args.expression); - return JSON.stringify({ result }); + const result = evaluateExpression(args.expression) + return JSON.stringify({ result }) }, }), -]; +] // Use with chat - tools are automatically executed const stream = chat({ adapter: openai(), - model: "gpt-4o", - messages: [{ role: "user", content: "What's the weather in Paris?" }], + model: 'gpt-4o', + messages: [{ role: 'user', content: "What's the weather in Paris?" }], tools, agentLoopStrategy: maxIterations(5), // Control loop behavior // Or use custom strategy: // agentLoopStrategy: ({ iterationCount, messages }) => iterationCount < 10, -}); +}) // Handle the stream for await (const chunk of stream) { - if (chunk.type === "content") { - process.stdout.write(chunk.delta); - } else if (chunk.type === "tool_call") { - console.log(`\nšŸ”§ Calling: ${chunk.toolCall.function.name}`); - } else if (chunk.type === "tool_result") { - console.log(`āœ“ Result: ${chunk.content}\n`); - } else if (chunk.type === "done") { - console.log(`\nDone! (${chunk.finishReason})`); + if (chunk.type === 'content') { + process.stdout.write(chunk.delta) + } else if (chunk.type === 'tool_call') { + console.log(`\nšŸ”§ Calling: ${chunk.toolCall.function.name}`) + } else if (chunk.type === 'tool_result') { + console.log(`āœ“ Result: ${chunk.content}\n`) + } else if (chunk.type === 'done') { + console.log(`\nDone! (${chunk.finishReason})`) } } ``` @@ -287,12 +287,12 @@ This is equivalent to `agentLoopStrategy: maxIterations(3)`. 
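A custom strategy is simply a predicate over the loop state documented here. For example, the following standalone sketch combines an iteration cap with an early stop once the model reports a terminal finish reason (the exact stop condition and the simplified `Message` type are assumptions for illustration):

```typescript
// Standalone sketch of a custom agent-loop strategy (illustrative).
interface Message {
  role: string
  content: string
}

interface AgentLoopState {
  iterationCount: number // Current iteration (0-indexed)
  messages: Array<Message> // Current conversation messages
  finishReason: string | null // Last finish reason from model
}

type AgentLoopStrategy = (state: AgentLoopState) => boolean

// Continue while under 10 iterations and the model has not finished normally
const keepLooping: AgentLoopStrategy = (state) =>
  state.iterationCount < 10 && state.finishReason !== 'stop'
```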
```typescript export interface AgentLoopState { - iterationCount: number; // Current iteration (0-indexed) - messages: Message[]; // Current conversation messages - finishReason: string | null; // Last finish reason from model + iterationCount: number // Current iteration (0-indexed) + messages: Message[] // Current conversation messages + finishReason: string | null // Last finish reason from model } -export type AgentLoopStrategy = (state: AgentLoopState) => boolean; +export type AgentLoopStrategy = (state: AgentLoopState) => boolean ``` ### `toolChoice` @@ -366,35 +366,35 @@ Emitted after the SDK executes a tool: Perfect for API endpoints - tool execution happens on server, results stream to client: ```typescript -import { chat } from "@tanstack/ai"; -import { openai } from "@tanstack/ai-openai"; -import { toStreamResponse } from "@tanstack/ai"; +import { chat } from '@tanstack/ai' +import { openai } from '@tanstack/ai-openai' +import { toStreamResponse } from '@tanstack/ai' export async function POST(request: Request) { - const { messages } = await request.json(); + const { messages } = await request.json() const stream = chat({ adapter: openai(), - model: "gpt-4o", + model: 'gpt-4o', messages, tools: [weatherTool, calculateTool], maxIterations: 5, - }); + }) // Client receives tool_call and tool_result chunks - return toStreamResponse(stream); + return toStreamResponse(stream) } ``` **Client-side:** ```typescript -const response = await fetch("/api/chat", { - method: "POST", +const response = await fetch('/api/chat', { + method: 'POST', body: JSON.stringify({ messages }), -}); +}) -const reader = response.body.getReader(); +const reader = response.body.getReader() // Receives: content chunks, tool_call chunks, tool_result chunks, done chunk ``` @@ -461,24 +461,24 @@ The tool execution logic is implemented in the `ToolCallManager` class for bette ```typescript class ToolCallManager { - constructor(tools: ReadonlyArray); + constructor(tools: ReadonlyArray) // 
Add a streaming tool call chunk - addToolCallChunk(chunk: ToolCallChunk): void; + addToolCallChunk(chunk: ToolCallChunk): void // Check if there are complete tool calls - hasToolCalls(): boolean; + hasToolCalls(): boolean // Get all validated tool calls - getToolCalls(): ToolCall[]; + getToolCalls(): ToolCall[] // Execute tools and yield tool_result chunks async *executeTools( - doneChunk - ): AsyncGenerator; + doneChunk, + ): AsyncGenerator // Clear for next iteration - clear(): void; + clear(): void } ``` diff --git a/ai-docs/TOOL_REGISTRY.md b/ai-docs/TOOL_REGISTRY.md index 419258893..efe8ca834 100644 --- a/ai-docs/TOOL_REGISTRY.md +++ b/ai-docs/TOOL_REGISTRY.md @@ -1,463 +1,474 @@ -# Tool Registry API - -> **šŸ”„ Automatic Tool Execution Loop:** The `chat()` method automatically executes tools in a loop. When the model decides to call a tool, the SDK: -> 1. Executes the tool's `execute` function -> 2. Emits `tool_result` chunks with the result -> 3. Adds tool results to messages automatically -> 4. Continues the conversation with the model -> 5. Repeats until no more tools are needed (up to `maxIterations`, default: 5) -> -> **You don't need to manually handle tool execution** - just provide tools with `execute` functions and the SDK handles everything! -> -> **šŸ“š See also:** [Complete Tool Execution Loop Documentation](TOOL_EXECUTION_LOOP.md) - -## Overview - -The Tool Registry API allows you to define tools once in the AI constructor and then reference them by name throughout your application. This provides better organization, type safety, and reusability. - -## Key Benefits - -āœ… **Define Once, Use Everywhere** - Register tools in one place -āœ… **Type-Safe Tool Names** - TypeScript autocomplete and validation -āœ… **Better Organization** - Centralized tool management -āœ… **No Duplication** - Reuse tools across different chats -āœ… **Runtime Validation** - Errors if referencing non-existent tools - -## Basic Usage - -### 1. 
Define Tools Registry - -```typescript -import { AI } from "@ts-poc/ai"; -import { OpenAIAdapter } from "@ts-poc/ai-openai"; - -// Define all your tools in a registry -const tools = { - get_weather: { - type: "function" as const, - function: { - name: "get_weather", - description: "Get current weather for a location", - parameters: { - type: "object", - properties: { - location: { type: "string", description: "City name" }, - }, - required: ["location"], - }, - }, - execute: async (args: { location: string }) => { - // Your implementation - return JSON.stringify({ temp: 72, condition: "sunny" }); - }, - }, - - calculate: { - type: "function" as const, - function: { - name: "calculate", - description: "Perform mathematical calculations", - parameters: { - type: "object", - properties: { - expression: { type: "string" }, - }, - required: ["expression"], - }, - }, - execute: async (args: { expression: string }) => { - const result = eval(args.expression); // Use safe math parser in production! - return JSON.stringify({ result }); - }, - }, -} as const; // ← Important: use "as const" for type safety! -``` - -### 2. Initialize AI with Tools - -```typescript -const ai = new AI({ - adapters: { - openai: new OpenAIAdapter({ - apiKey: process.env.OPENAI_API_KEY, - }), - }, - tools, // ← Register tools here! -}); -``` - -### 3. Use Tools by Name (Type-Safe!) - -```typescript -// Use specific tools -const result = await ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [ - { role: "user", content: "What's the weather in SF?" }, - ], - tools: ["get_weather"], // ← Type-safe! Only registered tool names - toolChoice: "auto", - maxIterations: 5, -}); - -// Use multiple tools -const result2 = await ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [ - { role: "user", content: "What's the weather in SF and what's 2+2?" }, - ], - tools: ["get_weather", "calculate"], // ← Multiple tools, all type-safe! 
- toolChoice: "auto", - maxIterations: 5, -}); - -// No tools (regular chat) -const result3 = await ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [ - { role: "user", content: "Tell me a joke" }, - ], - // No tools specified -}); -``` - -## Type Safety - -TypeScript provides full autocomplete and validation: - -```typescript -const ai = new AI({ - adapters: { /* ... */ }, - tools: { - get_weather: { /* ... */ }, - calculate: { /* ... */ }, - }, -}); - -// āœ… Valid - TypeScript knows these tool names exist -ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - tools: ["get_weather"], // ← Autocomplete works! -}); - -// āœ… Valid - multiple tools -ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - tools: ["get_weather", "calculate"], // ← Both validated! -}); - -// āŒ TypeScript Error - invalid tool name -ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - tools: ["invalid_tool"], // ← Compile error! -}); -``` - -## Migration from Old API - -### Before (Tools Inline) - -```typescript -const result = await ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - tools: [ - { - type: "function", - function: { - name: "get_weather", - description: "Get weather", - parameters: { /* ... */ }, - }, - execute: async (args) => { /* ... */ }, - }, - { - type: "function", - function: { - name: "calculate", - description: "Calculate", - parameters: { /* ... */ }, - }, - execute: async (args) => { /* ... */ }, - }, - ], - toolChoice: "auto", -}); -``` - -### After (Tool Registry) - -```typescript -// Define once in constructor -const ai = new AI({ - adapters: { /* ... */ }, - tools: { - get_weather: { - type: "function", - function: { - name: "get_weather", - description: "Get weather", - parameters: { /* ... */ }, - }, - execute: async (args) => { /* ... */ }, - }, - calculate: { - type: "function", - function: { - name: "calculate", - description: "Calculate", - parameters: { /* ... 
*/ }, - }, - execute: async (args) => { /* ... */ }, - }, - }, -}); - -// Use by name (type-safe!) -const result = await ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - tools: ["get_weather", "calculate"], // ← Much cleaner! - toolChoice: "auto", -}); -``` - -## Working with Tools - -### Get Tool by Name - -```typescript -const weatherTool = ai.getTool("get_weather"); -console.log(weatherTool.function.description); -``` - -### List All Tool Names - -```typescript -const toolNames = ai.toolNames; -console.log("Available tools:", toolNames); -// Output: ["get_weather", "calculate"] -``` - -### Check if Tool Exists - -```typescript -try { - const tool = ai.getTool("some_tool"); - console.log("Tool exists!"); -} catch (error) { - console.log("Tool not found"); -} -``` - -## Streaming with Tools - -Tools work seamlessly with streaming: - -```typescript -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [ - { role: "user", content: "What's the weather in Paris and what's 100*5?" 
}, - ], - tools: ["get_weather", "calculate"], - toolChoice: "auto", - maxIterations: 5, -}); - -for await (const chunk of stream) { - if (chunk.type === "content") { - process.stdout.write(chunk.delta); - } else if (chunk.type === "tool_call") { - console.log(`\n→ Calling: ${chunk.toolCall.function.name}`); - } else if (chunk.type === "done") { - console.log("\nāœ“ Done"); - } -} -``` - -## HTTP Streaming with Tools - -Perfect for API endpoints: - -```typescript -import { toStreamResponse } from "@tanstack/ai"; - -export const Route = createAPIFileRoute("/api/chat")({ - POST: async ({ request }): Promise => { - const { messages } = await request.json(); - - const stream = ai.chat({ - adapter: "openai", - model: "gpt-4o", - messages, - tools: ["get_weather", "search_database", "send_email"], - toolChoice: "auto", - maxIterations: 5, - }); - - return toStreamResponse(stream); - } -}); -``` - -## Real-World Example: E-commerce Assistant - -```typescript -const tools = { - search_products: { - type: "function" as const, - function: { - name: "search_products", - description: "Search for products in the catalog", - parameters: { - type: "object", - properties: { - query: { type: "string" }, - category: { type: "string" }, - maxPrice: { type: "number" }, - }, - required: ["query"], - }, - }, - execute: async (args: { query: string; category?: string; maxPrice?: number }) => { - const results = await db.products.search(args); - return JSON.stringify(results); - }, - }, - - get_product_details: { - type: "function" as const, - function: { - name: "get_product_details", - description: "Get detailed information about a product", - parameters: { - type: "object", - properties: { - productId: { type: "string" }, - }, - required: ["productId"], - }, - }, - execute: async (args: { productId: string }) => { - const product = await db.products.findById(args.productId); - return JSON.stringify(product); - }, - }, - - check_inventory: { - type: "function" as const, - function: { - 
name: "check_inventory", - description: "Check if a product is in stock", - parameters: { - type: "object", - properties: { - productId: { type: "string" }, - quantity: { type: "number", default: 1 }, - }, - required: ["productId"], - }, - }, - execute: async (args: { productId: string; quantity?: number }) => { - const available = await inventory.check(args.productId, args.quantity || 1); - return JSON.stringify({ available, productId: args.productId }); - }, - }, - - add_to_cart: { - type: "function" as const, - function: { - name: "add_to_cart", - description: "Add a product to the shopping cart", - parameters: { - type: "object", - properties: { - productId: { type: "string" }, - quantity: { type: "number", default: 1 }, - }, - required: ["productId"], - }, - }, - execute: async (args: { productId: string; quantity?: number }) => { - await cart.add(args.productId, args.quantity || 1); - return JSON.stringify({ success: true, productId: args.productId }); - }, - }, -} as const; - -const ai = new AI({ - adapters: { openai: new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY }) }, - tools, -}); - -// Now any chat can use these tools by name! 
-const result = await ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [ - { role: "user", content: "I'm looking for a red guitar under $500" }, - ], - tools: ["search_products", "get_product_details", "check_inventory", "add_to_cart"], - toolChoice: "auto", - maxIterations: 10, -}); -``` - -## Advanced: Dynamic Tool Selection - -You can dynamically select which tools to use: - -```typescript -function getChatTools(userRole: string): string[] { - if (userRole === "admin") { - return ["search_products", "get_product_details", "check_inventory", "add_to_cart", "update_prices"]; - } else if (userRole === "customer") { - return ["search_products", "get_product_details", "add_to_cart"]; - } else { - return ["search_products"]; // Guest users - } -} - -const result = await ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - tools: getChatTools(user.role) as any, // Type assertion needed for dynamic arrays - toolChoice: "auto", -}); -``` - -## Best Practices - -1. **Use `as const`** when defining tools for maximum type safety -2. **Descriptive names** - Use clear, verb-based names like `get_weather`, `search_products` -3. **Comprehensive descriptions** - Help the AI understand when to use each tool -4. **Required parameters** - Mark parameters as required when appropriate -5. **Error handling** - Return error information in execute functions -6. **Validation** - Validate parameters in execute functions -7. 
**Centralize** - Keep all tool definitions in one place for maintainability - -## Summary - -The Tool Registry API provides: - -āœ… **Type-Safe Tool References** - Autocomplete and validation -āœ… **Centralized Management** - Define once, use everywhere -āœ… **Cleaner Code** - Reference by name instead of inline definitions -āœ… **Better Reusability** - Share tools across different chats -āœ… **Runtime Validation** - Catch errors early - -**Migration Path**: Move inline tool definitions to the constructor registry, then reference them by name in your chat calls! +# Tool Registry API + +> **šŸ”„ Automatic Tool Execution Loop:** The `chat()` method automatically executes tools in a loop. When the model decides to call a tool, the SDK: +> +> 1. Executes the tool's `execute` function +> 2. Emits `tool_result` chunks with the result +> 3. Adds tool results to messages automatically +> 4. Continues the conversation with the model +> 5. Repeats until no more tools are needed (up to `maxIterations`, default: 5) +> +> **You don't need to manually handle tool execution** - just provide tools with `execute` functions and the SDK handles everything! +> +> **šŸ“š See also:** [Complete Tool Execution Loop Documentation](TOOL_EXECUTION_LOOP.md) + +## Overview + +The Tool Registry API allows you to define tools once in the AI constructor and then reference them by name throughout your application. This provides better organization, type safety, and reusability. + +## Key Benefits + +āœ… **Define Once, Use Everywhere** - Register tools in one place +āœ… **Type-Safe Tool Names** - TypeScript autocomplete and validation +āœ… **Better Organization** - Centralized tool management +āœ… **No Duplication** - Reuse tools across different chats +āœ… **Runtime Validation** - Errors if referencing non-existent tools + +## Basic Usage + +### 1. 
Define Tools Registry + +```typescript +import { AI } from '@ts-poc/ai' +import { OpenAIAdapter } from '@ts-poc/ai-openai' + +// Define all your tools in a registry +const tools = { + get_weather: { + type: 'function' as const, + function: { + name: 'get_weather', + description: 'Get current weather for a location', + parameters: { + type: 'object', + properties: { + location: { type: 'string', description: 'City name' }, + }, + required: ['location'], + }, + }, + execute: async (args: { location: string }) => { + // Your implementation + return JSON.stringify({ temp: 72, condition: 'sunny' }) + }, + }, + + calculate: { + type: 'function' as const, + function: { + name: 'calculate', + description: 'Perform mathematical calculations', + parameters: { + type: 'object', + properties: { + expression: { type: 'string' }, + }, + required: ['expression'], + }, + }, + execute: async (args: { expression: string }) => { + const result = eval(args.expression) // Use safe math parser in production! + return JSON.stringify({ result }) + }, + }, +} as const // ← Important: use "as const" for type safety! +``` + +### 2. Initialize AI with Tools + +```typescript +const ai = new AI({ + adapters: { + openai: new OpenAIAdapter({ + apiKey: process.env.OPENAI_API_KEY, + }), + }, + tools, // ← Register tools here! +}) +``` + +### 3. Use Tools by Name (Type-Safe!) + +```typescript +// Use specific tools +const result = await ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: "What's the weather in SF?" }], + tools: ['get_weather'], // ← Type-safe! Only registered tool names + toolChoice: 'auto', + maxIterations: 5, +}) + +// Use multiple tools +const result2 = await ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [ + { role: 'user', content: "What's the weather in SF and what's 2+2?" }, + ], + tools: ['get_weather', 'calculate'], // ← Multiple tools, all type-safe! 
+ toolChoice: 'auto', + maxIterations: 5, +}) + +// No tools (regular chat) +const result3 = await ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Tell me a joke' }], + // No tools specified +}) +``` + +## Type Safety + +TypeScript provides full autocomplete and validation: + +```typescript +const ai = new AI({ + adapters: { /* ... */ }, + tools: { + get_weather: { /* ... */ }, + calculate: { /* ... */ }, + }, +}); + +// āœ… Valid - TypeScript knows these tool names exist +ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + tools: ["get_weather"], // ← Autocomplete works! +}); + +// āœ… Valid - multiple tools +ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + tools: ["get_weather", "calculate"], // ← Both validated! +}); + +// āŒ TypeScript Error - invalid tool name +ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + tools: ["invalid_tool"], // ← Compile error! +}); +``` + +## Migration from Old API + +### Before (Tools Inline) + +```typescript +const result = await ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + tools: [ + { + type: "function", + function: { + name: "get_weather", + description: "Get weather", + parameters: { /* ... */ }, + }, + execute: async (args) => { /* ... */ }, + }, + { + type: "function", + function: { + name: "calculate", + description: "Calculate", + parameters: { /* ... */ }, + }, + execute: async (args) => { /* ... */ }, + }, + ], + toolChoice: "auto", +}); +``` + +### After (Tool Registry) + +```typescript +// Define once in constructor +const ai = new AI({ + adapters: { /* ... */ }, + tools: { + get_weather: { + type: "function", + function: { + name: "get_weather", + description: "Get weather", + parameters: { /* ... */ }, + }, + execute: async (args) => { /* ... */ }, + }, + calculate: { + type: "function", + function: { + name: "calculate", + description: "Calculate", + parameters: { /* ... 
*/ }, + }, + execute: async (args) => { /* ... */ }, + }, + }, +}); + +// Use by name (type-safe!) +const result = await ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + tools: ["get_weather", "calculate"], // ← Much cleaner! + toolChoice: "auto", +}); +``` + +## Working with Tools + +### Get Tool by Name + +```typescript +const weatherTool = ai.getTool('get_weather') +console.log(weatherTool.function.description) +``` + +### List All Tool Names + +```typescript +const toolNames = ai.toolNames +console.log('Available tools:', toolNames) +// Output: ["get_weather", "calculate"] +``` + +### Check if Tool Exists + +```typescript +try { + const tool = ai.getTool('some_tool') + console.log('Tool exists!') +} catch (error) { + console.log('Tool not found') +} +``` + +## Streaming with Tools + +Tools work seamlessly with streaming: + +```typescript +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [ + { role: 'user', content: "What's the weather in Paris and what's 100*5?" 
}, + ], + tools: ['get_weather', 'calculate'], + toolChoice: 'auto', + maxIterations: 5, +}) + +for await (const chunk of stream) { + if (chunk.type === 'content') { + process.stdout.write(chunk.delta) + } else if (chunk.type === 'tool_call') { + console.log(`\n→ Calling: ${chunk.toolCall.function.name}`) + } else if (chunk.type === 'done') { + console.log('\nāœ“ Done') + } +} +``` + +## HTTP Streaming with Tools + +Perfect for API endpoints: + +```typescript +import { toStreamResponse } from '@tanstack/ai' + +export const Route = createAPIFileRoute('/api/chat')({ + POST: async ({ request }): Promise<Response> => { + const { messages } = await request.json() + + const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4o', + messages, + tools: ['get_weather', 'search_database', 'send_email'], + toolChoice: 'auto', + maxIterations: 5, + }) + + return toStreamResponse(stream) + }, +}) +``` + +## Real-World Example: E-commerce Assistant + +```typescript +const tools = { + search_products: { + type: 'function' as const, + function: { + name: 'search_products', + description: 'Search for products in the catalog', + parameters: { + type: 'object', + properties: { + query: { type: 'string' }, + category: { type: 'string' }, + maxPrice: { type: 'number' }, + }, + required: ['query'], + }, + }, + execute: async (args: { + query: string + category?: string + maxPrice?: number + }) => { + const results = await db.products.search(args) + return JSON.stringify(results) + }, + }, + + get_product_details: { + type: 'function' as const, + function: { + name: 'get_product_details', + description: 'Get detailed information about a product', + parameters: { + type: 'object', + properties: { + productId: { type: 'string' }, + }, + required: ['productId'], + }, + }, + execute: async (args: { productId: string }) => { + const product = await db.products.findById(args.productId) + return JSON.stringify(product) + }, + }, + + check_inventory: { + type: 'function' as const, + function: { + name: 
'check_inventory', + description: 'Check if a product is in stock', + parameters: { + type: 'object', + properties: { + productId: { type: 'string' }, + quantity: { type: 'number', default: 1 }, + }, + required: ['productId'], + }, + }, + execute: async (args: { productId: string; quantity?: number }) => { + const available = await inventory.check( + args.productId, + args.quantity || 1, + ) + return JSON.stringify({ available, productId: args.productId }) + }, + }, + + add_to_cart: { + type: 'function' as const, + function: { + name: 'add_to_cart', + description: 'Add a product to the shopping cart', + parameters: { + type: 'object', + properties: { + productId: { type: 'string' }, + quantity: { type: 'number', default: 1 }, + }, + required: ['productId'], + }, + }, + execute: async (args: { productId: string; quantity?: number }) => { + await cart.add(args.productId, args.quantity || 1) + return JSON.stringify({ success: true, productId: args.productId }) + }, + }, +} as const + +const ai = new AI({ + adapters: { + openai: new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY }), + }, + tools, +}) + +// Now any chat can use these tools by name! 
+const result = await ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [ + { role: 'user', content: "I'm looking for a red guitar under $500" }, + ], + tools: [ + 'search_products', + 'get_product_details', + 'check_inventory', + 'add_to_cart', + ], + toolChoice: 'auto', + maxIterations: 10, +}) +``` + +## Advanced: Dynamic Tool Selection + +You can dynamically select which tools to use: + +```typescript +function getChatTools(userRole: string): string[] { + if (userRole === "admin") { + return ["search_products", "get_product_details", "check_inventory", "add_to_cart", "update_prices"]; + } else if (userRole === "customer") { + return ["search_products", "get_product_details", "add_to_cart"]; + } else { + return ["search_products"]; // Guest users + } +} + +const result = await ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + tools: getChatTools(user.role) as any, // Type assertion needed for dynamic arrays + toolChoice: "auto", +}); +``` + +## Best Practices + +1. **Use `as const`** when defining tools for maximum type safety +2. **Descriptive names** - Use clear, verb-based names like `get_weather`, `search_products` +3. **Comprehensive descriptions** - Help the AI understand when to use each tool +4. **Required parameters** - Mark parameters as required when appropriate +5. **Error handling** - Return error information in execute functions +6. **Validation** - Validate parameters in execute functions +7. 
**Centralize** - Keep all tool definitions in one place for maintainability + +## Summary + +The Tool Registry API provides: + +āœ… **Type-Safe Tool References** - Autocomplete and validation +āœ… **Centralized Management** - Define once, use everywhere +āœ… **Cleaner Code** - Reference by name instead of inline definitions +āœ… **Better Reusability** - Share tools across different chats +āœ… **Runtime Validation** - Catch errors early + +**Migration Path**: Move inline tool definitions to the constructor registry, then reference them by name in your chat calls! diff --git a/ai-docs/TOOL_REGISTRY_IMPLEMENTATION.md b/ai-docs/TOOL_REGISTRY_IMPLEMENTATION.md index bd52e8693..6f741a6e4 100644 --- a/ai-docs/TOOL_REGISTRY_IMPLEMENTATION.md +++ b/ai-docs/TOOL_REGISTRY_IMPLEMENTATION.md @@ -1,405 +1,439 @@ -# Tool Registry API - Implementation Summary - -> **šŸ”„ Automatic Tool Execution Loop:** This document describes how tools are registered and referenced. Remember that the `chat()` method automatically executes tools in a loop - when the model calls a tool, the SDK executes it, adds the result to messages, and continues the conversation automatically (up to `maxIterations`, default: 5). - -## Overview - -Successfully refactored the AI API to support a **tool registry** where tools are defined once in the constructor and then referenced by name in a type-safe manner throughout the application. - -## Key Changes - -### 1. Tool Registry in Constructor - -**Before:** -```typescript -const ai = new AI({ - adapters: { /* ... */ } -}); - -// Had to pass full tool definitions every time -ai.chat({ - messages: [...], - tools: [ - { type: "function", function: { name: "get_weather", ... }, execute: ... }, - { type: "function", function: { name: "calculate", ... }, execute: ... }, - ], -}); -``` - -**After:** -```typescript -const ai = new AI({ - adapters: { /* ... */ }, - tools: { - get_weather: { - type: "function" as const, - function: { name: "get_weather", ... 
}, - execute: async (args) => { ... }, - }, - calculate: { - type: "function" as const, - function: { name: "calculate", ... }, - execute: async (args) => { ... }, - }, - }, -}); - -// Reference by name - type-safe! -ai.chat({ - messages: [...], - tools: ["get_weather", "calculate"], // ← Type-safe string array! -}); -``` - -### 2. Type System Updates - -Added new generic parameter to `AI` class: -```typescript -class AI<T, TTools> -``` - -Where: -- `T` - Adapter map (existing) -- `TTools` - Tool registry (new!) - -### 3. Type-Safe Tool Names - -Tool names are extracted from the registry type: -```typescript -type ToolNames<TTools> = keyof TTools & string; -``` - -TypeScript provides: -- āœ… Autocomplete for tool names -- āœ… Compile-time validation -- āœ… Refactoring safety - -### 4. Updated Method Signatures - -All chat methods now accept tool names instead of full tool objects: - -```typescript -// ChatOptionsWithAdapter -type ChatOptionsWithAdapter<TAdapters, TTools> = { - // ... other options - tools?: ReadonlyArray<ToolNames<TTools>>; // ← Type-safe tool names! -}; -``` - -### 5. Internal Tool Resolution - -New helper methods: -```typescript -class AI<T, TTools> { - getTool(name: ToolNames<TTools>): Tool; // Get single tool - get toolNames(): Array<ToolNames<TTools>>; // List all tool names - private getToolsByNames(names: ToolNames<TTools>[]): Tool[]; // Convert names to objects -} -``` - -## API Examples - -### Basic Usage - -```typescript -const ai = new AI({ - adapters: { - openai: new OpenAIAdapter({ apiKey: "..." }), - }, - tools: { - get_weather: { /* ... */ }, - calculate: { /* ... */ }, - }, -}); - -// Use specific tools -await ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - tools: ["get_weather"], // ← Type-safe! -}); - -// Use multiple tools -await ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - tools: ["get_weather", "calculate"], // ← Both validated! 
-}); - -// No tools -await ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - // No tools specified -}); -``` - -### With Streaming - -```typescript -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - tools: ["get_weather", "calculate"], -}); - -for await (const chunk of stream) { - // Handle chunks -} -``` - -### With HTTP Response - -```typescript -import { toStreamResponse } from "@tanstack/ai"; - -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - tools: ["get_weather", "search_products"], -}); - -return toStreamResponse(stream); -``` - -## Real-World Example: api.tanchat.ts - -**Before:** -```typescript -const tools: Tool[] = [ - { type: "function", function: { name: "getGuitars", ... }, execute: ... }, - { type: "function", function: { name: "recommendGuitar", ... }, execute: ... }, -]; - -const ai = new AI({ adapters: { /* ... */ } }); - -ai.chat({ - messages: [...], - tools, // ← Pass array of Tool objects -}); -``` - -**After:** -```typescript -const tools = { - getGuitars: { - type: "function" as const, - function: { name: "getGuitars", ... }, - execute: async () => { ... }, - }, - recommendGuitar: { - type: "function" as const, - function: { name: "recommendGuitar", ... }, - execute: async (args) => { ... }, - }, -} as const; - -const ai = new AI({ - adapters: { /* ... */ }, - tools, // ← Register once! -}); - -ai.chat({ - messages: [...], - tools: ["getGuitars", "recommendGuitar"], // ← Type-safe names! -}); -``` - -## Benefits - -### 1. Type Safety -- āœ… Autocomplete for tool names in IDE -- āœ… Compile-time errors for invalid tool names -- āœ… Refactoring support (rename tools safely) - -### 2. Better Organization -- āœ… Centralized tool definitions -- āœ… Single source of truth -- āœ… Easy to maintain and update - -### 3. Code Reusability -- āœ… Define tools once, use everywhere -- āœ… Share tools across different chat calls -- āœ… No duplication - -### 4. 
Developer Experience -- āœ… Cleaner code (tool names vs full objects) -- āœ… Less typing (just reference by name) -- āœ… Better readability - -### 5. Runtime Safety -- āœ… Validation that tools exist -- āœ… Clear error messages -- āœ… No silent failures - -## Implementation Details - -### Type System - -```typescript -// Tool registry type -type ToolRegistry = Record<string, Tool>; - -// Extract tool names -type ToolNames<TTools> = keyof TTools & string; - -// AI class with tool registry -class AI<T, TTools extends ToolRegistry> { - private tools: TTools; - - constructor(config: AIConfig<T, TTools>) { - this.tools = config.tools || {} as TTools; - } - - private getToolsByNames(names: ReadonlyArray<ToolNames<TTools>>): Tool[] { - return names.map(name => this.getTool(name)); - } -} -``` - -### Chat Options - -```typescript -type ChatOptionsWithAdapter<TAdapters, TTools> = { - adapter: keyof TAdapters; - model: ExtractModels<TAdapters>; - messages: Message[]; - tools?: ReadonlyArray<ToolNames<TTools>>; // ← Tool names, not objects - // ... other options -}; -``` - -### Internal Resolution - -When `chat()` is called: -1. Extract tool names from options -2. Convert tool names to Tool objects using `getToolsByNames()` -3. Pass Tool objects to adapter methods -4. 
Adapters work with full Tool objects (no changes needed) - -## Files Changed - -### Core Implementation -- āœ… `packages/ai/src/ai.ts` - - Added `TTools` generic parameter to `AI` class - - Added `ToolRegistry` and `ToolNames` types - - Updated `ChatOptionsWithAdapter` and `ChatOptionsWithFallback` - - Added `getTool()`, `toolNames`, and `getToolsByNames()` methods - - Updated `chatPromise()` and `chatStream()` to convert tool names - -### Documentation -- āœ… `docs/TOOL_REGISTRY.md` - Comprehensive guide -- āœ… `docs/TOOL_REGISTRY_QUICK_START.md` - Quick reference -- āœ… `examples/tool-registry-example.ts` - Full examples - -### Example Updates -- āœ… `examples/ts-chat/src/routes/demo/api.tanchat.ts` - Updated to use tool registry - -## Migration Guide - -### Step 1: Convert Tool Array to Registry - -```typescript -// Before -const tools: Tool[] = [ - { type: "function", function: { name: "tool1", ... } }, - { type: "function", function: { name: "tool2", ... } }, -]; - -// After -const tools = { - tool1: { type: "function" as const, function: { name: "tool1", ... } }, - tool2: { type: "function" as const, function: { name: "tool2", ... } }, -} as const; // ← Important! -``` - -### Step 2: Register Tools in Constructor - -```typescript -// Before -const ai = new AI({ adapters: { /* ... */ } }); - -// After -const ai = new AI({ - adapters: { /* ... */ }, - tools, // ← Add tools here -}); -``` - -### Step 3: Use Tool Names in Chat Calls - -```typescript -// Before -ai.chat({ - messages: [...], - tools: tools, // ← Full array -}); - -// After -ai.chat({ - messages: [...], - tools: ["tool1", "tool2"], // ← Just names! -}); -``` - -## Testing - -Verify type safety: -```typescript -const ai = new AI({ - adapters: { /* ... */ }, - tools: { - get_weather: { /* ... */ }, - calculate: { /* ... 
*/ }, - }, -}); - -// āœ… Should work -ai.chat({ messages: [], tools: ["get_weather"] }); - -// āŒ Should show TypeScript error -ai.chat({ messages: [], tools: ["invalid_tool"] }); -``` - -## Performance - -No performance impact: -- Tool name resolution happens once per chat call -- Minimal overhead (simple object lookup) -- Tool execution unchanged - -## Backward Compatibility - -**Breaking Change**: This is a breaking change. Users must: -1. Convert tool arrays to registries -2. Register tools in constructor -3. Use tool names instead of objects - -However, the migration path is straightforward and provides significant benefits. - -## Future Enhancements - -Potential improvements: -- Tool namespaces (e.g., `weather.get`, `weather.forecast`) -- Tool permissions/access control -- Tool versioning -- Dynamic tool registration -- Tool composition/chaining - -## Summary - -The Tool Registry API provides: - -āœ… **Type-Safe Tool References** - Autocomplete and validation -āœ… **Centralized Management** - Define once, use everywhere -āœ… **Cleaner Code** - Reference by name instead of objects -āœ… **Better Reusability** - Share tools across chats -āœ… **Runtime Validation** - Clear error messages -āœ… **Developer Experience** - Improved DX with less code - -**Result**: More maintainable, type-safe, and developer-friendly tool management! šŸŽ‰ +# Tool Registry API - Implementation Summary + +> **šŸ”„ Automatic Tool Execution Loop:** This document describes how tools are registered and referenced. Remember that the `chat()` method automatically executes tools in a loop - when the model calls a tool, the SDK executes it, adds the result to messages, and continues the conversation automatically (up to `maxIterations`, default: 5). + +## Overview + +Successfully refactored the AI API to support a **tool registry** where tools are defined once in the constructor and then referenced by name in a type-safe manner throughout the application. + +## Key Changes + +### 1. 
Tool Registry in Constructor + +**Before:** + +```typescript +const ai = new AI({ + adapters: { /* ... */ } +}); + +// Had to pass full tool definitions every time +ai.chat({ + messages: [...], + tools: [ + { type: "function", function: { name: "get_weather", ... }, execute: ... }, + { type: "function", function: { name: "calculate", ... }, execute: ... }, + ], +}); +``` + +**After:** + +```typescript +const ai = new AI({ + adapters: { /* ... */ }, + tools: { + get_weather: { + type: "function" as const, + function: { name: "get_weather", ... }, + execute: async (args) => { ... }, + }, + calculate: { + type: "function" as const, + function: { name: "calculate", ... }, + execute: async (args) => { ... }, + }, + }, +}); + +// Reference by name - type-safe! +ai.chat({ + messages: [...], + tools: ["get_weather", "calculate"], // ← Type-safe string array! +}); +``` + +### 2. Type System Updates + +Added new generic parameter to `AI` class: + +```typescript +class AI<T, TTools> +``` + +Where: + +- `T` - Adapter map (existing) +- `TTools` - Tool registry (new!) + +### 3. Type-Safe Tool Names + +Tool names are extracted from the registry type: + +```typescript +type ToolNames<TTools> = keyof TTools & string +``` + +TypeScript provides: + +- āœ… Autocomplete for tool names +- āœ… Compile-time validation +- āœ… Refactoring safety + +### 4. Updated Method Signatures + +All chat methods now accept tool names instead of full tool objects: + +```typescript +// ChatOptionsWithAdapter +type ChatOptionsWithAdapter<TAdapters, TTools> = { + // ... other options + tools?: ReadonlyArray<ToolNames<TTools>> // ← Type-safe tool names! +} +``` + +### 5. 
Internal Tool Resolution + +New helper methods: + +```typescript +class AI<T, TTools> { + getTool(name: ToolNames<TTools>): Tool // Get single tool + get toolNames(): Array<ToolNames<TTools>> // List all tool names + private getToolsByNames(names: ToolNames<TTools>[]): Tool[] // Convert names to objects +} +``` + +## API Examples + +### Basic Usage + +```typescript +const ai = new AI({ + adapters: { + openai: new OpenAIAdapter({ apiKey: "..." }), + }, + tools: { + get_weather: { /* ... */ }, + calculate: { /* ... */ }, + }, +}); + +// Use specific tools +await ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + tools: ["get_weather"], // ← Type-safe! +}); + +// Use multiple tools +await ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + tools: ["get_weather", "calculate"], // ← Both validated! +}); + +// No tools +await ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + // No tools specified +}); +``` + +### With Streaming + +```typescript +const stream = ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + tools: ["get_weather", "calculate"], +}); + +for await (const chunk of stream) { + // Handle chunks +} +``` + +### With HTTP Response + +```typescript +import { toStreamResponse } from "@tanstack/ai"; + +const stream = ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + tools: ["get_weather", "search_products"], +}); + +return toStreamResponse(stream); +``` + +## Real-World Example: api.tanchat.ts + +**Before:** + +```typescript +const tools: Tool[] = [ + { type: "function", function: { name: "getGuitars", ... }, execute: ... }, + { type: "function", function: { name: "recommendGuitar", ... }, execute: ... }, +]; + +const ai = new AI({ adapters: { /* ... */ } }); + +ai.chat({ + messages: [...], + tools, // ← Pass array of Tool objects +}); +``` + +**After:** + +```typescript +const tools = { + getGuitars: { + type: "function" as const, + function: { name: "getGuitars", ... }, + execute: async () => { ... 
}, + }, + recommendGuitar: { + type: "function" as const, + function: { name: "recommendGuitar", ... }, + execute: async (args) => { ... }, + }, +} as const; + +const ai = new AI({ + adapters: { /* ... */ }, + tools, // ← Register once! +}); + +ai.chat({ + messages: [...], + tools: ["getGuitars", "recommendGuitar"], // ← Type-safe names! +}); +``` + +## Benefits + +### 1. Type Safety + +- āœ… Autocomplete for tool names in IDE +- āœ… Compile-time errors for invalid tool names +- āœ… Refactoring support (rename tools safely) + +### 2. Better Organization + +- āœ… Centralized tool definitions +- āœ… Single source of truth +- āœ… Easy to maintain and update + +### 3. Code Reusability + +- āœ… Define tools once, use everywhere +- āœ… Share tools across different chat calls +- āœ… No duplication + +### 4. Developer Experience + +- āœ… Cleaner code (tool names vs full objects) +- āœ… Less typing (just reference by name) +- āœ… Better readability + +### 5. Runtime Safety + +- āœ… Validation that tools exist +- āœ… Clear error messages +- āœ… No silent failures + +## Implementation Details + +### Type System + +```typescript +// Tool registry type +type ToolRegistry = Record<string, Tool> + +// Extract tool names +type ToolNames<TTools> = keyof TTools & string + +// AI class with tool registry +class AI<T, TTools extends ToolRegistry> { + private tools: TTools + + constructor(config: AIConfig<T, TTools>) { + this.tools = config.tools || ({} as TTools) + } + + private getToolsByNames(names: ReadonlyArray<ToolNames<TTools>>): Tool[] { + return names.map((name) => this.getTool(name)) + } +} +``` + +### Chat Options + +```typescript +type ChatOptionsWithAdapter<TAdapters, TTools> = { + adapter: keyof TAdapters + model: ExtractModels<TAdapters> + messages: Message[] + tools?: ReadonlyArray<ToolNames<TTools>> // ← Tool names, not objects + // ... other options +} +``` + +### Internal Resolution + +When `chat()` is called: + +1. Extract tool names from options +2. Convert tool names to Tool objects using `getToolsByNames()` +3. Pass Tool objects to adapter methods +4. 
Adapters work with full Tool objects (no changes needed) + +## Files Changed + +### Core Implementation + +- āœ… `packages/ai/src/ai.ts` + - Added `TTools` generic parameter to `AI` class + - Added `ToolRegistry` and `ToolNames` types + - Updated `ChatOptionsWithAdapter` and `ChatOptionsWithFallback` + - Added `getTool()`, `toolNames`, and `getToolsByNames()` methods + - Updated `chatPromise()` and `chatStream()` to convert tool names + +### Documentation + +- āœ… `docs/TOOL_REGISTRY.md` - Comprehensive guide +- āœ… `docs/TOOL_REGISTRY_QUICK_START.md` - Quick reference +- āœ… `examples/tool-registry-example.ts` - Full examples + +### Example Updates + +- āœ… `examples/ts-chat/src/routes/demo/api.tanchat.ts` - Updated to use tool registry + +## Migration Guide + +### Step 1: Convert Tool Array to Registry + +```typescript +// Before +const tools: Tool[] = [ + { type: "function", function: { name: "tool1", ... } }, + { type: "function", function: { name: "tool2", ... } }, +]; + +// After +const tools = { + tool1: { type: "function" as const, function: { name: "tool1", ... } }, + tool2: { type: "function" as const, function: { name: "tool2", ... } }, +} as const; // ← Important! +``` + +### Step 2: Register Tools in Constructor + +```typescript +// Before +const ai = new AI({ + adapters: { + /* ... */ + }, +}) + +// After +const ai = new AI({ + adapters: { + /* ... */ + }, + tools, // ← Add tools here +}) +``` + +### Step 3: Use Tool Names in Chat Calls + +```typescript +// Before +ai.chat({ + messages: [...], + tools: tools, // ← Full array +}); + +// After +ai.chat({ + messages: [...], + tools: ["tool1", "tool2"], // ← Just names! +}); +``` + +## Testing + +Verify type safety: + +```typescript +const ai = new AI({ + adapters: { + /* ... */ + }, + tools: { + get_weather: { + /* ... */ + }, + calculate: { + /* ... 
*/ + }, + }, +}) + +// āœ… Should work +ai.chat({ messages: [], tools: ['get_weather'] }) + +// āŒ Should show TypeScript error +ai.chat({ messages: [], tools: ['invalid_tool'] }) +``` + +## Performance + +No performance impact: + +- Tool name resolution happens once per chat call +- Minimal overhead (simple object lookup) +- Tool execution unchanged + +## Backward Compatibility + +**Breaking Change**: This is a breaking change. Users must: + +1. Convert tool arrays to registries +2. Register tools in constructor +3. Use tool names instead of objects + +However, the migration path is straightforward and provides significant benefits. + +## Future Enhancements + +Potential improvements: + +- Tool namespaces (e.g., `weather.get`, `weather.forecast`) +- Tool permissions/access control +- Tool versioning +- Dynamic tool registration +- Tool composition/chaining + +## Summary + +The Tool Registry API provides: + +āœ… **Type-Safe Tool References** - Autocomplete and validation +āœ… **Centralized Management** - Define once, use everywhere +āœ… **Cleaner Code** - Reference by name instead of objects +āœ… **Better Reusability** - Share tools across chats +āœ… **Runtime Validation** - Clear error messages +āœ… **Developer Experience** - Improved DX with less code + +**Result**: More maintainable, type-safe, and developer-friendly tool management! šŸŽ‰ diff --git a/ai-docs/TOOL_REGISTRY_QUICK_START.md b/ai-docs/TOOL_REGISTRY_QUICK_START.md index 996032e4e..2ea166e59 100644 --- a/ai-docs/TOOL_REGISTRY_QUICK_START.md +++ b/ai-docs/TOOL_REGISTRY_QUICK_START.md @@ -1,208 +1,208 @@ -# Tool Registry API - Quick Start - -> **šŸ”„ Automatic Tool Execution:** The `chat()` method automatically executes tools in a loop. When the model calls a tool, the SDK executes it, adds the result to messages, and continues the conversation automatically (controlled by `agentLoopStrategy`, default: `maxIterations(5)`). You don't need to manually handle tool execution! 
-> -> **šŸ“š See also:** [Complete Tool Execution Loop Documentation](TOOL_EXECUTION_LOOP.md) - -## In 3 Steps - -### 1. Define Tools in Constructor - -```typescript -const ai = new AI({ - adapters: { - openai: new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY }), - }, - tools: { - // ← Define all tools here! - get_weather: { - type: "function" as const, - function: { - name: "get_weather", - description: "Get weather for a location", - parameters: { - type: "object", - properties: { - location: { type: "string" }, - }, - required: ["location"], - }, - }, - execute: async (args: { location: string }) => { - return JSON.stringify({ temp: 72, condition: "sunny" }); - }, - }, - calculate: { - type: "function" as const, - function: { - name: "calculate", - description: "Perform calculations", - parameters: { - type: "object", - properties: { - expression: { type: "string" }, - }, - required: ["expression"], - }, - }, - execute: async (args: { expression: string }) => { - return JSON.stringify({ result: eval(args.expression) }); - }, - }, - }, -}); -``` - -### 2. Reference Tools by Name (Type-Safe!) - -```typescript -const result = await ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "What's the weather in SF?" }], - tools: ["get_weather"], // ← Type-safe! Autocomplete works! - toolChoice: "auto", -}); -``` - -### 3. Use Multiple Tools - -```typescript -const result = await ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Weather in NYC and calculate 5*20" }], - tools: ["get_weather", "calculate"], // ← Both tools! 
- toolChoice: "auto", -}); -``` - -## Type Safety - -āœ… **Autocomplete** - IDE suggests available tool names -āœ… **Validation** - TypeScript catches typos at compile time -āœ… **Runtime checks** - Errors if tool doesn't exist - -```typescript -// āœ… Valid -tools: ["get_weather", "calculate"]; - -// āŒ TypeScript Error -tools: ["invalid_tool"]; -``` - -## Benefits vs Old API - -### Before (Inline Tools) - -```typescript -// Had to define tools every time! -ai.chat({ - messages: [...], - tools: [ - { type: "function", function: { name: "get_weather", ... }, execute: ... }, - { type: "function", function: { name: "calculate", ... }, execute: ... }, - ], -}); -``` - -### After (Tool Registry) - -```typescript -// Define once in constructor, use by name everywhere! -ai.chat({ - messages: [...], - tools: ["get_weather", "calculate"], // ← Much cleaner! -}); -``` - -## With Streaming - -The `chat()` method automatically executes tools and emits chunks for each step: - -```typescript -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - tools: ["get_weather", "calculate"], - toolChoice: "auto", - agentLoopStrategy: maxIterations(5), // Optional: control loop -}); - -for await (const chunk of stream) { - if (chunk.type === "content") { - process.stdout.write(chunk.delta); // Stream text content - } else if (chunk.type === "tool_call") { - console.log(`→ Calling: ${chunk.toolCall.function.name}`); - } else if (chunk.type === "tool_result") { - console.log(`āœ“ Tool result: ${chunk.content}`); - } -} -``` - -**What happens internally:** - -1. Model decides to call a tool → `tool_call` chunk emitted -2. SDK executes the tool's `execute` function automatically -3. SDK emits `tool_result` chunk with the result -4. SDK adds tool result to messages and continues conversation -5. 
Model responds with final answer based on tool results - -## With HTTP Response - -```typescript -import { toStreamResponse } from "@tanstack/ai"; - -// Perfect for API endpoints! -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - tools: ["get_weather", "search_products", "send_email"], - toolChoice: "auto", -}); - -return toStreamResponse(stream); -``` - -## Pro Tips - -1. **Use `as const`** when defining tools for type safety -2. **Descriptive names** like `get_weather`, `search_products` -3. **Keep tools in one place** for easy maintenance -4. **List available tools**: `ai.toolNames` -5. **Get a tool**: `ai.getTool("get_weather")` - -## Common Pattern: Separate File - -```typescript -// tools.ts -export const tools = { - get_weather: { /* ... */ }, - calculate: { /* ... */ }, - search_products: { /* ... */ }, -} as const; - -// ai-client.ts -import { tools } from "./tools"; - -export const ai = new AI({ - adapters: { /* ... */ }, - tools, // ← Import from separate file! -}); - -// api.ts -import { ai } from "./ai-client"; - -ai.chat({ - messages: [...], - tools: ["get_weather"], // ← Type-safe across files! -}); -``` - -## Summary - -**Define once, use everywhere with full type safety!** šŸŽ‰ - -See full documentation: `docs/TOOL_REGISTRY.md` +# Tool Registry API - Quick Start + +> **šŸ”„ Automatic Tool Execution:** The `chat()` method automatically executes tools in a loop. When the model calls a tool, the SDK executes it, adds the result to messages, and continues the conversation automatically (controlled by `agentLoopStrategy`, default: `maxIterations(5)`). You don't need to manually handle tool execution! +> +> **šŸ“š See also:** [Complete Tool Execution Loop Documentation](TOOL_EXECUTION_LOOP.md) + +## In 3 Steps + +### 1. Define Tools in Constructor + +```typescript +const ai = new AI({ + adapters: { + openai: new OpenAIAdapter({ apiKey: process.env.OPENAI_API_KEY }), + }, + tools: { + // ← Define all tools here! 
+ get_weather: { + type: 'function' as const, + function: { + name: 'get_weather', + description: 'Get weather for a location', + parameters: { + type: 'object', + properties: { + location: { type: 'string' }, + }, + required: ['location'], + }, + }, + execute: async (args: { location: string }) => { + return JSON.stringify({ temp: 72, condition: 'sunny' }) + }, + }, + calculate: { + type: 'function' as const, + function: { + name: 'calculate', + description: 'Perform calculations', + parameters: { + type: 'object', + properties: { + expression: { type: 'string' }, + }, + required: ['expression'], + }, + }, + execute: async (args: { expression: string }) => { + return JSON.stringify({ result: eval(args.expression) }) + }, + }, + }, +}) +``` + +### 2. Reference Tools by Name (Type-Safe!) + +```typescript +const result = await ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: "What's the weather in SF?" }], + tools: ['get_weather'], // ← Type-safe! Autocomplete works! + toolChoice: 'auto', +}) +``` + +### 3. Use Multiple Tools + +```typescript +const result = await ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Weather in NYC and calculate 5*20' }], + tools: ['get_weather', 'calculate'], // ← Both tools! + toolChoice: 'auto', +}) +``` + +## Type Safety + +āœ… **Autocomplete** - IDE suggests available tool names +āœ… **Validation** - TypeScript catches typos at compile time +āœ… **Runtime checks** - Errors if tool doesn't exist + +```typescript +// āœ… Valid +tools: ['get_weather', 'calculate'] + +// āŒ TypeScript Error +tools: ['invalid_tool'] +``` + +## Benefits vs Old API + +### Before (Inline Tools) + +```typescript +// Had to define tools every time! +ai.chat({ + messages: [...], + tools: [ + { type: "function", function: { name: "get_weather", ... }, execute: ... }, + { type: "function", function: { name: "calculate", ... }, execute: ... 
}, + ], +}); +``` + +### After (Tool Registry) + +```typescript +// Define once in constructor, use by name everywhere! +ai.chat({ + messages: [...], + tools: ["get_weather", "calculate"], // ← Much cleaner! +}); +``` + +## With Streaming + +The `chat()` method automatically executes tools and emits chunks for each step: + +```typescript +const stream = ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + tools: ["get_weather", "calculate"], + toolChoice: "auto", + agentLoopStrategy: maxIterations(5), // Optional: control loop +}); + +for await (const chunk of stream) { + if (chunk.type === "content") { + process.stdout.write(chunk.delta); // Stream text content + } else if (chunk.type === "tool_call") { + console.log(`→ Calling: ${chunk.toolCall.function.name}`); + } else if (chunk.type === "tool_result") { + console.log(`āœ“ Tool result: ${chunk.content}`); + } +} +``` + +**What happens internally:** + +1. Model decides to call a tool → `tool_call` chunk emitted +2. SDK executes the tool's `execute` function automatically +3. SDK emits `tool_result` chunk with the result +4. SDK adds tool result to messages and continues conversation +5. Model responds with final answer based on tool results + +## With HTTP Response + +```typescript +import { toStreamResponse } from "@tanstack/ai"; + +// Perfect for API endpoints! +const stream = ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + tools: ["get_weather", "search_products", "send_email"], + toolChoice: "auto", +}); + +return toStreamResponse(stream); +``` + +## Pro Tips + +1. **Use `as const`** when defining tools for type safety +2. **Descriptive names** like `get_weather`, `search_products` +3. **Keep tools in one place** for easy maintenance +4. **List available tools**: `ai.toolNames` +5. **Get a tool**: `ai.getTool("get_weather")` + +## Common Pattern: Separate File + +```typescript +// tools.ts +export const tools = { + get_weather: { /* ... */ }, + calculate: { /* ... 
*/ }, + search_products: { /* ... */ }, +} as const; + +// ai-client.ts +import { tools } from "./tools"; + +export const ai = new AI({ + adapters: { /* ... */ }, + tools, // ← Import from separate file! +}); + +// api.ts +import { ai } from "./ai-client"; + +ai.chat({ + messages: [...], + tools: ["get_weather"], // ← Type-safe across files! +}); +``` + +## Summary + +**Define once, use everywhere with full type safety!** šŸŽ‰ + +See full documentation: `docs/TOOL_REGISTRY.md` diff --git a/ai-docs/TOOL_STATES_MIGRATION.md b/ai-docs/TOOL_STATES_MIGRATION.md index 4aee5bfef..7b180fd84 100644 --- a/ai-docs/TOOL_STATES_MIGRATION.md +++ b/ai-docs/TOOL_STATES_MIGRATION.md @@ -13,15 +13,17 @@ This migration introduces comprehensive tool state tracking and a parts-based me The `Message` interface has been renamed to `ModelMessage` to better reflect its purpose as the format used for LLM communication. **Before:** + ```typescript -import type { Message } from "@tanstack/ai"; -const messages: Message[] = []; +import type { Message } from '@tanstack/ai' +const messages: Message[] = [] ``` **After:** + ```typescript -import type { ModelMessage } from "@tanstack/ai"; -const messages: ModelMessage[] = []; +import type { ModelMessage } from '@tanstack/ai' +const messages: ModelMessage[] = [] ``` ### 2. New UIMessage Type with Parts @@ -31,41 +33,43 @@ const messages: ModelMessage[] = []; A new `UIMessage` type has been introduced for client-side UI rendering. Messages are now composed of parts (text, tool calls, tool results) instead of flat content and toolCalls properties. 
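The parts model can be sketched in isolation. Below is a minimal, self-contained version — the types are declared locally (simplified from the documented shapes) so the snippet runs without installing `@tanstack/ai-client`:

```typescript
// Local, simplified stand-ins for the documented UIMessage/MessagePart types.
type TextPart = { type: 'text'; content: string }
type ToolCallPart = { type: 'tool-call'; id: string; name: string; arguments: string }
type MessagePart = TextPart | ToolCallPart

interface UIMessage {
  id: string
  role: 'system' | 'user' | 'assistant'
  parts: MessagePart[]
}

// Collapse all text parts into one string — the typical first step when
// rendering a UIMessage in the UI.
function textOf(message: UIMessage): string {
  return message.parts
    .filter((p): p is TextPart => p.type === 'text')
    .map((p) => p.content)
    .join('')
}

const message: UIMessage = {
  id: '1',
  role: 'assistant',
  parts: [
    { type: 'text', content: 'Checking the weather' },
    {
      type: 'tool-call',
      id: 'tc-1',
      name: 'get_weather',
      arguments: '{"location":"SF"}',
    },
  ],
}

console.log(textOf(message)) // "Checking the weather"
```

Rendering code typically applies the same `filter` pattern to pull out `tool-call` parts alongside the text.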
**Structure:** + ```typescript interface UIMessage { - id: string; - role: "system" | "user" | "assistant"; - parts: MessagePart[]; - createdAt?: Date; + id: string + role: 'system' | 'user' | 'assistant' + parts: MessagePart[] + createdAt?: Date } -type MessagePart = TextPart | ToolCallPart | ToolResultPart; +type MessagePart = TextPart | ToolCallPart | ToolResultPart interface TextPart { - type: "text"; - content: string; + type: 'text' + content: string } interface ToolCallPart { - type: "tool-call"; - id: string; - name: string; - arguments: string; - state: ToolCallState; // "awaiting-input" | "input-streaming" | "input-complete" + type: 'tool-call' + id: string + name: string + arguments: string + state: ToolCallState // "awaiting-input" | "input-streaming" | "input-complete" } interface ToolResultPart { - type: "tool-result"; - toolCallId: string; - content: string; - state: ToolResultState; // "streaming" | "complete" | "error" - error?: string; + type: 'tool-result' + toolCallId: string + content: string + state: ToolResultState // "streaming" | "complete" | "error" + error?: string } ``` ### 3. Tool Call States Tool calls now track their lifecycle: + - **awaiting-input**: Tool call started but no arguments received yet - **input-streaming**: Partial arguments received (uses loose JSON parser) - **input-complete**: All arguments received @@ -75,10 +79,10 @@ Tool calls now track their lifecycle: A new loose JSON parser has been integrated to handle incomplete tool arguments during streaming: ```typescript -import { parsePartialJSON } from "@tanstack/ai-client"; +import { parsePartialJSON } from '@tanstack/ai-client' -const partialArgs = '{"name": "John", "ag'; -const parsed = parsePartialJSON(partialArgs); // { name: "John" } +const partialArgs = '{"name": "John", "ag' +const parsed = parsePartialJSON(partialArgs) // { name: "John" } ``` ### 5. 
Automatic Conversion @@ -89,15 +93,15 @@ Connection adapters automatically convert `UIMessage[]` to `ModelMessage[]` befo // Client code - works with UIMessages const messages: UIMessage[] = [ { - id: "1", - role: "user", - parts: [{ type: "text", content: "Hello" }] - } -]; + id: '1', + role: 'user', + parts: [{ type: 'text', content: 'Hello' }], + }, +] // Automatically converted to ModelMessages when sent -const connection = fetchServerSentEvents("/api/chat"); -const stream = connection.connect(messages); // Converts internally +const connection = fetchServerSentEvents('/api/chat') +const stream = connection.connect(messages) // Converts internally ``` ## Migration Steps @@ -105,12 +109,14 @@ const stream = connection.connect(messages); // Converts internally ### For CLI/Backend Code (using @tanstack/ai) **Step 1:** Update type imports + ```diff - import type { Message } from "@tanstack/ai"; + import type { ModelMessage } from "@tanstack/ai"; ``` **Step 2:** Update variable types + ```diff - const messages: Message[] = []; + const messages: ModelMessage[] = []; @@ -121,6 +127,7 @@ const stream = connection.connect(messages); // Converts internally **Step 1:** Update message rendering to use parts **Before:** + ```typescript {messages.map(({ id, role, content, toolCalls }) => (
@@ -131,15 +138,16 @@ const stream = connection.connect(messages); // Converts internally ``` **After:** + ```typescript {messages.map(({ id, role, parts }) => { const textContent = parts .filter(p => p.type === "text") .map(p => p.content) .join(""); - + const toolCallParts = parts.filter(p => p.type === "tool-call"); - + return (
{textContent &&

{textContent}

} @@ -150,6 +158,7 @@ const stream = connection.connect(messages); // Converts internally ``` **Step 2:** Access tool call state + ```typescript {toolCallParts.map(tc => (
@@ -174,13 +183,13 @@ Monitor tool call progress in real-time: const processor = new StreamProcessor({ handlers: { onToolCallStateChange: (index, id, name, state, args, parsedArgs) => { - console.log(`Tool ${name} is now ${state}`); + console.log(`Tool ${name} is now ${state}`) if (parsedArgs) { - console.log("Parsed arguments so far:", parsedArgs); + console.log('Parsed arguments so far:', parsedArgs) } - } - } -}); + }, + }, +}) ``` ### 2. Message Converters @@ -191,17 +200,17 @@ Convert between UIMessages and ModelMessages: import { uiMessageToModelMessages, modelMessageToUIMessage, - modelMessagesToUIMessages -} from "@tanstack/ai-client"; + modelMessagesToUIMessages, +} from '@tanstack/ai-client' // Convert UI message to model message(s) -const modelMessages = uiMessageToModelMessages(uiMessage); +const modelMessages = uiMessageToModelMessages(uiMessage) // Convert model message to UI message -const uiMessage = modelMessageToUIMessage(modelMessage, "msg-123"); +const uiMessage = modelMessageToUIMessage(modelMessage, 'msg-123') // Convert array of model messages to UI messages -const uiMessages = modelMessagesToUIMessages(modelMessages); +const uiMessages = modelMessagesToUIMessages(modelMessages) ``` ### 3. Custom JSON Parser @@ -212,23 +221,27 @@ Provide your own parser for incomplete JSON: const customParser = { parse: (jsonString: string) => { // Your custom parsing logic - return myPartialJSONParser(jsonString); - } -}; + return myPartialJSONParser(jsonString) + }, +} const processor = new StreamProcessor({ jsonParser: customParser, - handlers: { /* ... */ } -}); + handlers: { + /* ... 
*/ + }, +}) ``` ## Updated Exports ### @tanstack/ai + - āœ… `ModelMessage` (renamed from `Message`) - All other exports unchanged ### @tanstack/ai-client + - āœ… `UIMessage` - New parts-based message type - āœ… `MessagePart`, `TextPart`, `ToolCallPart`, `ToolResultPart` - Part types - āœ… `ToolCallState`, `ToolResultState` - State types @@ -239,16 +252,19 @@ const processor = new StreamProcessor({ ## Breaking Changes ### ā—ļø Message Type Rename + - `Message` is now `ModelMessage` in `@tanstack/ai` - Update all type imports and variable declarations ### ā—ļø UIMessage Structure Change + - Messages now have `parts: MessagePart[]` instead of `content` and `toolCalls` - Update UI rendering code to iterate over parts - Access text via `parts.filter(p => p.type === "text")` - Access tool calls via `parts.filter(p => p.type === "tool-call")` ### āœ… No Breaking Changes For + - Server-side code (Python, PHP) - continues to work as-is - Connection adapters - automatically convert UIMessages to ModelMessages - Core AI functionality - ModelMessage has same structure as old Message @@ -264,6 +280,7 @@ const processor = new StreamProcessor({ ## Examples See the updated examples: + - **CLI Example**: `/examples/cli/src/index.ts` - Uses ModelMessage - **React Chat Example**: `/examples/ts-chat/src/routes/demo/tanchat.tsx` - Uses UIMessage with parts - **AI Assistant Component**: `/examples/ts-chat/src/components/example-AIAssistant.tsx` - Uses UIMessage with parts @@ -271,4 +288,3 @@ See the updated examples: ## Support For questions or issues related to this migration, please refer to the TanStack AI documentation or open an issue on GitHub. 
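The repair-then-parse idea behind loose JSON parsing can be sketched in a few lines. This is a hypothetical standalone implementation, not the actual `parsePartialJSON` from `@tanstack/ai-client`: it closes unterminated strings and brackets, drops a dangling key, and returns `undefined` when the input still fails to parse:

```typescript
function parseLooseJSON(partial: string): unknown {
  // Track which closers are still owed, and whether we end inside a string.
  const closers: string[] = []
  let inString = false
  let escaped = false
  for (const ch of partial) {
    if (escaped) {
      escaped = false
      continue
    }
    if (inString) {
      if (ch === '\\') escaped = true
      else if (ch === '"') inString = false
      continue
    }
    if (ch === '"') inString = true
    else if (ch === '{') closers.push('}')
    else if (ch === '[') closers.push(']')
    else if (ch === '}' || ch === ']') closers.pop()
  }

  let repaired = partial
  if (inString) repaired += '"'
  // Drop a dangling key or trailing comma, e.g. `{"a": 1, "b` -> `{"a": 1`
  repaired = repaired.replace(/,\s*("[^"]*")?\s*:?\s*$/, '')
  repaired += closers.reverse().join('')

  try {
    return JSON.parse(repaired)
  } catch {
    return undefined
  }
}

console.log(parseLooseJSON('{"name": "John", "ag')) // → { name: 'John' }
```

The real parser handles more cases (truncated values, nested escapes), but the streaming contract is the same: every partial chunk of tool arguments yields either a usable object or `undefined`, never a thrown error.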
- diff --git a/ai-docs/TYPE_NARROWING_SOLUTION.md b/ai-docs/TYPE_NARROWING_SOLUTION.md index 6526226f0..2f618e06a 100644 --- a/ai-docs/TYPE_NARROWING_SOLUTION.md +++ b/ai-docs/TYPE_NARROWING_SOLUTION.md @@ -1,156 +1,165 @@ -# Type Narrowing with Separate Methods āœ… - -> **Note**: This document describes type narrowing with the current API. The previous `as` option approach has been replaced with separate methods. - -## The Solution - -With separate methods, type narrowing is automatic and simple: - -```typescript -// Streaming - returns AsyncIterable -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], -}); -// Type: AsyncIterable āœ… - -// Promise-based - returns Promise -const result = ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - messages: [...], -}); -// Type: Promise āœ… -``` - -No need for `as const` assertions or discriminated unions - TypeScript automatically knows the return type! - -## How to Use - -### āœ… Correct Usage - Type is Automatically Narrowed - -```typescript -// Returns AsyncIterable -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], -}); - -for await (const chunk of stream) { - // TypeScript knows chunk is StreamChunk āœ… - console.log(chunk.type); -} - -// Returns Promise -const result = await ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - messages: [...], -}); - -// TypeScript knows result is ChatCompletionResult āœ… -console.log(result.content); -console.log(result.usage.totalTokens); -``` - -### Type Inference Examples - -```typescript -// 1. Stream mode - returns AsyncIterable -const stream = ai.chat({ adapter: "openai", model: "gpt-4", messages: [] }); -// Type: AsyncIterable āœ… - -// 2. Promise mode - returns Promise -const promise = ai.chatCompletion({ adapter: "openai", model: "gpt-4", messages: [] }); -// Type: Promise āœ… - -// 3. 
After await - ChatCompletionResult -const result = await ai.chatCompletion({ adapter: "openai", model: "gpt-4", messages: [] }); -// Type: ChatCompletionResult āœ… -``` - -## Real-World Example: API Handler - -```typescript -import { toStreamResponse } from "@tanstack/ai"; - -export const Route = createAPIFileRoute("/api/chat")({ - POST: async ({ request }): Promise => { - const { messages } = await request.json(); - - // TypeScript knows this returns AsyncIterable āœ… - const stream = ai.chat({ - adapter: "openAi", - model: "gpt-4o", - messages, - fallbacks: [ - { adapter: "ollama", model: "llama2" } - ] - }); - - // Convert to Response - return toStreamResponse(stream); - } -}); -``` - -## Why Separate Methods Are Better - -With the old `as` option approach: -```typescript -const as = "response"; // Type: string -const result = ai.chat({ adapter: "openai", model: "gpt-4", messages: [], as }); -// Return type: Promise | AsyncIterable | Response -// āŒ TypeScript doesn't know which specific type -// Need: as: "response" as const -``` - -With separate methods: -```typescript -const stream = ai.chat({ adapter: "openai", model: "gpt-4", messages: [] }); -// Return type: AsyncIterable -// āœ… TypeScript knows exact type automatically! -``` - -## Technical Explanation - -The separate methods approach is simpler: - -```typescript -class AI { - chat(options: ChatOptions): AsyncIterable { - // Implementation... - } - - async chatCompletion(options: ChatOptions): Promise { - // Implementation... - } -} -``` - -TypeScript's type inference: -1. Call `chat()` → method signature says it returns `AsyncIterable` -2. Call `chatCompletion()` → method signature says it returns `Promise` -3. No conditional types needed - just straightforward method signatures! 
- -## Benefits - -āœ… **Type Safety**: TypeScript knows exact return type at compile time -āœ… **IntelliSense**: Autocomplete shows correct properties for each method -āœ… **Compile-Time Errors**: Catch type mismatches before runtime -āœ… **Refactoring Safety**: Changes are caught automatically -āœ… **Self-Documenting**: Methods serve as inline documentation -āœ… **Simpler**: No `as const` needed, no overloads needed - -## Summary - -The separate methods API provides perfect type narrowing without any special syntax: - -| Method | Return Type | -|--------|-------------| -| `chat()` | `AsyncIterable` | -| `chatCompletion()` | `Promise` | - -**Pro Tip**: Just call the method you need - TypeScript handles the rest! šŸŽ‰ +# Type Narrowing with Separate Methods āœ… + +> **Note**: This document describes type narrowing with the current API. The previous `as` option approach has been replaced with separate methods. + +## The Solution + +With separate methods, type narrowing is automatic and simple: + +```typescript +// Streaming - returns AsyncIterable +const stream = ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], +}); +// Type: AsyncIterable āœ… + +// Promise-based - returns Promise +const result = ai.chatCompletion({ + adapter: "openai", + model: "gpt-4", + messages: [...], +}); +// Type: Promise āœ… +``` + +No need for `as const` assertions or discriminated unions - TypeScript automatically knows the return type! 
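Here is a self-contained sketch of the principle, using stand-in types and a stubbed model (the real adapter calls are out of scope): each method's signature alone is enough for TypeScript to infer the right type at every call site.

```typescript
interface StreamChunk {
  type: 'content'
  delta: string
}
interface ChatCompletionResult {
  content: string
}

class MiniAI {
  // The AsyncIterable<StreamChunk> annotation is all TypeScript needs to
  // narrow the streaming call sites — no overloads, no conditional types.
  async *chat(prompt: string): AsyncIterable<StreamChunk> {
    for (const word of prompt.split(' ')) {
      yield { type: 'content', delta: word + ' ' }
    }
  }

  async chatCompletion(prompt: string): Promise<ChatCompletionResult> {
    return { content: `echo: ${prompt}` }
  }
}

async function main() {
  const ai = new MiniAI()

  // Inferred: AsyncIterable<StreamChunk>
  let streamed = ''
  for await (const chunk of ai.chat('hello world')) {
    streamed += chunk.delta
  }

  // Inferred: ChatCompletionResult (after await)
  const result = await ai.chatCompletion('hello')
  return { streamed, content: result.content }
}

main().then(console.log)
```

Swapping a call from `chatCompletion()` to `chat()` immediately changes the inferred type, so any code that assumed `.content` on the result becomes a compile error rather than a runtime surprise.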
+ +## How to Use + +### āœ… Correct Usage - Type is Automatically Narrowed + +```typescript +// Returns AsyncIterable +const stream = ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], +}); + +for await (const chunk of stream) { + // TypeScript knows chunk is StreamChunk āœ… + console.log(chunk.type); +} + +// Returns Promise +const result = await ai.chatCompletion({ + adapter: "openai", + model: "gpt-4", + messages: [...], +}); + +// TypeScript knows result is ChatCompletionResult āœ… +console.log(result.content); +console.log(result.usage.totalTokens); +``` + +### Type Inference Examples + +```typescript +// 1. Stream mode - returns AsyncIterable +const stream = ai.chat({ adapter: 'openai', model: 'gpt-4', messages: [] }) +// Type: AsyncIterable āœ… + +// 2. Promise mode - returns Promise +const promise = ai.chatCompletion({ + adapter: 'openai', + model: 'gpt-4', + messages: [], +}) +// Type: Promise āœ… + +// 3. After await - ChatCompletionResult +const result = await ai.chatCompletion({ + adapter: 'openai', + model: 'gpt-4', + messages: [], +}) +// Type: ChatCompletionResult āœ… +``` + +## Real-World Example: API Handler + +```typescript +import { toStreamResponse } from '@tanstack/ai' + +export const Route = createAPIFileRoute('/api/chat')({ + POST: async ({ request }): Promise => { + const { messages } = await request.json() + + // TypeScript knows this returns AsyncIterable āœ… + const stream = ai.chat({ + adapter: 'openAi', + model: 'gpt-4o', + messages, + fallbacks: [{ adapter: 'ollama', model: 'llama2' }], + }) + + // Convert to Response + return toStreamResponse(stream) + }, +}) +``` + +## Why Separate Methods Are Better + +With the old `as` option approach: + +```typescript +const as = 'response' // Type: string +const result = ai.chat({ adapter: 'openai', model: 'gpt-4', messages: [], as }) +// Return type: Promise | AsyncIterable | Response +// āŒ TypeScript doesn't know which specific type +// Need: as: "response" as const +``` + 
+With separate methods: + +```typescript +const stream = ai.chat({ adapter: 'openai', model: 'gpt-4', messages: [] }) +// Return type: AsyncIterable +// āœ… TypeScript knows exact type automatically! +``` + +## Technical Explanation + +The separate methods approach is simpler: + +```typescript +class AI { + chat(options: ChatOptions): AsyncIterable { + // Implementation... + } + + async chatCompletion(options: ChatOptions): Promise { + // Implementation... + } +} +``` + +TypeScript's type inference: + +1. Call `chat()` → method signature says it returns `AsyncIterable` +2. Call `chatCompletion()` → method signature says it returns `Promise` +3. No conditional types needed - just straightforward method signatures! + +## Benefits + +āœ… **Type Safety**: TypeScript knows exact return type at compile time +āœ… **IntelliSense**: Autocomplete shows correct properties for each method +āœ… **Compile-Time Errors**: Catch type mismatches before runtime +āœ… **Refactoring Safety**: Changes are caught automatically +āœ… **Self-Documenting**: Methods serve as inline documentation +āœ… **Simpler**: No `as const` needed, no overloads needed + +## Summary + +The separate methods API provides perfect type narrowing without any special syntax: + +| Method | Return Type | +| ------------------ | ------------------------------- | +| `chat()` | `AsyncIterable` | +| `chatCompletion()` | `Promise` | + +**Pro Tip**: Just call the method you need - TypeScript handles the rest! šŸŽ‰ diff --git a/ai-docs/TYPE_NARROWING_UNIFIED_CHAT.md b/ai-docs/TYPE_NARROWING_UNIFIED_CHAT.md index 879bf9d7d..adc0554d7 100644 --- a/ai-docs/TYPE_NARROWING_UNIFIED_CHAT.md +++ b/ai-docs/TYPE_NARROWING_UNIFIED_CHAT.md @@ -1,224 +1,225 @@ -# Type Narrowing in Chat API - -> **Note**: This document describes type narrowing with the current API using separate methods. The previous `as` option approach has been replaced with `chat()` for streaming and `chatCompletion()` for promise-based completion. 
- -## Overview - -The chat API uses separate methods, which provides automatic type narrowing without needing discriminated unions or const assertions: - -- **`chat()`** - Always returns `AsyncIterable` -- **`chatCompletion()`** - Always returns `Promise` - -TypeScript automatically knows the exact return type based on which method you call! - -## Type Narrowing Rules - -| Method | Return Type | Usage | -|--------|-------------|-------| -| `chat()` | `AsyncIterable` | Can use `for await...of`, iterate chunks | -| `chatCompletion()` | `Promise` | Can `await`, access `.content`, `.usage`, etc. | - -## Examples with Type Checking - -### 1. Promise Mode (chatCompletion) - Type is `Promise` - -```typescript -const result = ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Hello" }], -}); - -// TypeScript knows result is Promise -const resolved = await result; - -// āœ… These work - properties exist on ChatCompletionResult -console.log(resolved.content); -console.log(resolved.role); -console.log(resolved.usage.totalTokens); - -// āŒ TypeScript error - headers doesn't exist on ChatCompletionResult -console.log(resolved.headers); // Type error! -``` - -### 2. Stream Mode (chat) - Type is `AsyncIterable` - -```typescript -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Hello" }], -}); - -// TypeScript knows stream is AsyncIterable -// āœ… This works - can iterate async iterable -for await (const chunk of stream) { - console.log(chunk.type); - console.log(chunk.id); - console.log(chunk.model); -} - -// āŒ TypeScript error - content doesn't exist on AsyncIterable -console.log(stream.content); // Type error! - -// āŒ TypeScript error - headers doesn't exist on AsyncIterable -console.log(stream.headers); // Type error! -``` - -### 3. 
HTTP Response Mode - -```typescript -import { toStreamResponse } from "@tanstack/ai"; - -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Hello" }], -}); - -const response = toStreamResponse(stream); - -// TypeScript knows response is Response -// āœ… These work - properties exist on Response -console.log(response.headers); -console.log(response.body); -console.log(response.status); -console.log(response.ok); - -const contentType = response.headers.get("Content-Type"); - -// āŒ TypeScript error - content doesn't exist on Response -console.log(response.content); // Type error! -``` - -## Function Return Type Inference - -TypeScript correctly infers return types in functions: - -### API Handler - Returns `Response` - -```typescript -import { toStreamResponse } from "@tanstack/ai"; - -function apiHandler() { - const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - }); - - return toStreamResponse(stream); - // TypeScript infers: function apiHandler(): Response āœ… -} -``` - -### Type-safe API Handler - -```typescript -import { toStreamResponse } from "@tanstack/ai"; - -function apiHandler(): Response { - const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - }); - - return toStreamResponse(stream); // āœ… Correct - returns Response -} - -function wrongApiHandler(): Response { - const result = ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - messages: [...], - }); - - return result; // āŒ TypeScript error - returns Promise, not Response -} -``` - -### Streaming Handler - -```typescript -async function* streamHandler() { - const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - }); - - // TypeScript knows stream is AsyncIterable - for await (const chunk of stream) { - yield chunk; // āœ… Works perfectly - } -} -``` - -## With Fallbacks - Type Narrowing Still Works - -```typescript -// Promise with fallbacks - Type: 
Promise -const promise = ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - messages: [...], - fallbacks: [{ adapter: "ollama", model: "llama2" }] -}); -const resolved = await promise; -console.log(resolved.content); // āœ… Works - -// Stream with fallbacks - Type: AsyncIterable -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - fallbacks: [{ adapter: "ollama", model: "llama2" }] -}); -for await (const chunk of stream) { - console.log(chunk.type); // āœ… Works -} -``` - -## How It Works (Technical Details) - -With separate methods, TypeScript doesn't need function overloads or conditional types: - -```typescript -class AI { - // Simple method signatures - no overloads needed! - chat(options: ChatOptions): AsyncIterable { - return this.adapter.chatStream(options); - } - - async chatCompletion(options: ChatOptions): Promise { - return this.adapter.chatCompletion(options); - } -} -``` - -TypeScript's type inference is straightforward: -- Call `chat()` → get `AsyncIterable` -- Call `chatCompletion()` → get `Promise` - -No need for `as const` assertions or discriminated unions! - -## Benefits - -āœ… **Type Safety**: TypeScript knows exact return type at compile time -āœ… **IntelliSense**: Autocomplete shows correct properties for each method -āœ… **Compile-Time Errors**: Catch type mismatches before runtime -āœ… **Refactoring Safety**: Changes are caught automatically -āœ… **Self-Documenting**: Methods serve as inline documentation -āœ… **Simpler**: No need for const assertions or overloads - -## Summary - -The separate methods API provides perfect type narrowing automatically: - -| Code | Return Type | -|------|-------------| -| `chat()` | `AsyncIterable` | -| `chatCompletion()` | `Promise` | - -TypeScript enforces these types at compile time, providing complete type safety without any special syntax! 
šŸŽ‰ +# Type Narrowing in Chat API + +> **Note**: This document describes type narrowing with the current API using separate methods. The previous `as` option approach has been replaced with `chat()` for streaming and `chatCompletion()` for promise-based completion. + +## Overview + +The chat API uses separate methods, which provides automatic type narrowing without needing discriminated unions or const assertions: + +- **`chat()`** - Always returns `AsyncIterable` +- **`chatCompletion()`** - Always returns `Promise` + +TypeScript automatically knows the exact return type based on which method you call! + +## Type Narrowing Rules + +| Method | Return Type | Usage | +| ------------------ | ------------------------------- | ---------------------------------------------- | +| `chat()` | `AsyncIterable` | Can use `for await...of`, iterate chunks | +| `chatCompletion()` | `Promise` | Can `await`, access `.content`, `.usage`, etc. | + +## Examples with Type Checking + +### 1. Promise Mode (chatCompletion) - Type is `Promise` + +```typescript +const result = ai.chatCompletion({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Hello' }], +}) + +// TypeScript knows result is Promise +const resolved = await result + +// āœ… These work - properties exist on ChatCompletionResult +console.log(resolved.content) +console.log(resolved.role) +console.log(resolved.usage.totalTokens) + +// āŒ TypeScript error - headers doesn't exist on ChatCompletionResult +console.log(resolved.headers) // Type error! +``` + +### 2. 
Stream Mode (chat) - Type is `AsyncIterable` + +```typescript +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Hello' }], +}) + +// TypeScript knows stream is AsyncIterable +// āœ… This works - can iterate async iterable +for await (const chunk of stream) { + console.log(chunk.type) + console.log(chunk.id) + console.log(chunk.model) +} + +// āŒ TypeScript error - content doesn't exist on AsyncIterable +console.log(stream.content) // Type error! + +// āŒ TypeScript error - headers doesn't exist on AsyncIterable +console.log(stream.headers) // Type error! +``` + +### 3. HTTP Response Mode + +```typescript +import { toStreamResponse } from '@tanstack/ai' + +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Hello' }], +}) + +const response = toStreamResponse(stream) + +// TypeScript knows response is Response +// āœ… These work - properties exist on Response +console.log(response.headers) +console.log(response.body) +console.log(response.status) +console.log(response.ok) + +const contentType = response.headers.get('Content-Type') + +// āŒ TypeScript error - content doesn't exist on Response +console.log(response.content) // Type error! 
+``` + +## Function Return Type Inference + +TypeScript correctly infers return types in functions: + +### API Handler - Returns `Response` + +```typescript +import { toStreamResponse } from "@tanstack/ai"; + +function apiHandler() { + const stream = ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + }); + + return toStreamResponse(stream); + // TypeScript infers: function apiHandler(): Response āœ… +} +``` + +### Type-safe API Handler + +```typescript +import { toStreamResponse } from "@tanstack/ai"; + +function apiHandler(): Response { + const stream = ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + }); + + return toStreamResponse(stream); // āœ… Correct - returns Response +} + +function wrongApiHandler(): Response { + const result = ai.chatCompletion({ + adapter: "openai", + model: "gpt-4", + messages: [...], + }); + + return result; // āŒ TypeScript error - returns Promise, not Response +} +``` + +### Streaming Handler + +```typescript +async function* streamHandler() { + const stream = ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + }); + + // TypeScript knows stream is AsyncIterable + for await (const chunk of stream) { + yield chunk; // āœ… Works perfectly + } +} +``` + +## With Fallbacks - Type Narrowing Still Works + +```typescript +// Promise with fallbacks - Type: Promise +const promise = ai.chatCompletion({ + adapter: "openai", + model: "gpt-4", + messages: [...], + fallbacks: [{ adapter: "ollama", model: "llama2" }] +}); +const resolved = await promise; +console.log(resolved.content); // āœ… Works + +// Stream with fallbacks - Type: AsyncIterable +const stream = ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + fallbacks: [{ adapter: "ollama", model: "llama2" }] +}); +for await (const chunk of stream) { + console.log(chunk.type); // āœ… Works +} +``` + +## How It Works (Technical Details) + +With separate methods, TypeScript doesn't need function overloads or 
conditional types: + +```typescript +class AI { + // Simple method signatures - no overloads needed! + chat(options: ChatOptions): AsyncIterable { + return this.adapter.chatStream(options) + } + + async chatCompletion(options: ChatOptions): Promise { + return this.adapter.chatCompletion(options) + } +} +``` + +TypeScript's type inference is straightforward: + +- Call `chat()` → get `AsyncIterable` +- Call `chatCompletion()` → get `Promise` + +No need for `as const` assertions or discriminated unions! + +## Benefits + +āœ… **Type Safety**: TypeScript knows exact return type at compile time +āœ… **IntelliSense**: Autocomplete shows correct properties for each method +āœ… **Compile-Time Errors**: Catch type mismatches before runtime +āœ… **Refactoring Safety**: Changes are caught automatically +āœ… **Self-Documenting**: Methods serve as inline documentation +āœ… **Simpler**: No need for const assertions or overloads + +## Summary + +The separate methods API provides perfect type narrowing automatically: + +| Code | Return Type | +| ------------------ | ------------------------------- | +| `chat()` | `AsyncIterable` | +| `chatCompletion()` | `Promise` | + +TypeScript enforces these types at compile time, providing complete type safety without any special syntax! šŸŽ‰ diff --git a/ai-docs/TYPE_SAFETY.md b/ai-docs/TYPE_SAFETY.md index 08d44be8a..48cf95eeb 100644 --- a/ai-docs/TYPE_SAFETY.md +++ b/ai-docs/TYPE_SAFETY.md @@ -1,303 +1,305 @@ -# Type-Safe Multi-Adapter AI API - -This package provides complete TypeScript type safety for working with multiple AI providers, ensuring that you can only use models that are supported by each adapter. 
- -## Features - -- āœ… **Adapter-specific model validation** - TypeScript prevents using GPT models with Anthropic and vice versa -- āœ… **Full autocomplete support** - Your IDE suggests only valid models for the selected adapter -- āœ… **Compile-time safety** - Catch model incompatibilities before runtime -- āœ… **Multi-adapter support** - Use multiple AI providers in a single application -- āœ… **Type inference** - Model types are automatically inferred from adapter configuration - -## Installation - -```bash -npm install @tanstack/ai @tanstack/ai-openai @tanstack/ai-anthropic -``` - -## Basic Usage - -### Creating an AI instance with multiple adapters - -```typescript -import { AI } from "@tanstack/ai"; -import { OpenAIAdapter } from "@tanstack/ai-openai"; -import { AnthropicAdapter } from "@tanstack/ai-anthropic"; - -const ai = new AI({ - adapters: { - "openai": new OpenAIAdapter({ - apiKey: process.env.OPENAI_API_KEY!, - }), - "anthropic": new AnthropicAdapter({ - apiKey: process.env.ANTHROPIC_API_KEY!, - }), - }, -}); -``` - -### Type-safe model selection - -```typescript -// āœ… VALID - OpenAI with GPT model -await ai.chat({ - adapter: "openai", - model: "gpt-4", // TypeScript knows this is valid - messages: [{ role: "user", content: "Hello!" }], -}); - -// āœ… VALID - Anthropic with Claude model -await ai.chat({ - adapter: "anthropic", - model: "claude-3-5-sonnet-20241022", // TypeScript knows this is valid - messages: [{ role: "user", content: "Hello!" }], -}); - -// āŒ COMPILE ERROR - Wrong model for adapter -await ai.chat({ - adapter: "anthropic", - model: "gpt-4", // TypeScript error: "gpt-4" not valid for Anthropic! - messages: [{ role: "user", content: "Hello!" }], -}); - -// āŒ COMPILE ERROR - Wrong model for adapter -await ai.chat({ - adapter: "openai", - model: "claude-3-5-sonnet-20241022", // TypeScript error: Claude not valid for OpenAI! - messages: [{ role: "user", content: "Hello!" 
}], -}); -``` - -## Available Models - -### OpenAI Models - -```typescript -type OpenAIModel = - | "gpt-4" - | "gpt-4-turbo" - | "gpt-4-turbo-preview" - | "gpt-4o" - | "gpt-4o-mini" - | "gpt-3.5-turbo" - | "gpt-3.5-turbo-16k" - | "gpt-3.5-turbo-instruct" - | "text-embedding-ada-002" - | "text-embedding-3-small" - | "text-embedding-3-large"; -``` - -### Anthropic Models - -```typescript -type AnthropicModel = - | "claude-3-5-sonnet-20241022" - | "claude-3-5-sonnet-20240620" - | "claude-3-opus-20240229" - | "claude-3-sonnet-20240229" - | "claude-3-haiku-20240307" - | "claude-2.1" - | "claude-2.0" - | "claude-instant-1.2"; -``` - -## API Methods - -All methods support the same type-safe adapter and model selection: - -### Chat Completion - -```typescript -const result = await ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [ - { role: "system", content: "You are a helpful assistant." }, - { role: "user", content: "What is TypeScript?" }, - ], - temperature: 0.7, - maxTokens: 500, -}); -``` - -### Streaming Chat - -```typescript -for await (const chunk of ai.streamChat({ - adapter: "anthropic", - model: "claude-3-5-sonnet-20241022", - messages: [{ role: "user", content: "Count from 1 to 5" }], -})) { - if (chunk.type === "content") { - process.stdout.write(chunk.delta); - } -} -``` - -### Text Generation - -```typescript -const result = await ai.generateText({ - adapter: "openai", - model: "gpt-3.5-turbo-instruct", - prompt: "Write a haiku about TypeScript", - maxTokens: 100, -}); -``` - -### Summarization - -```typescript -const result = await ai.summarize({ - adapter: "anthropic", - model: "claude-3-haiku-20240307", - text: "Long text to summarize...", - style: "bullet-points", - maxLength: 200, -}); -``` - -### Embeddings - -```typescript -const result = await ai.embed({ - adapter: "openai", - model: "text-embedding-3-small", - input: "Text to embed", -}); -``` - -## Advanced Features - -### Dynamic Adapter Addition - -```typescript -const aiWithGemini 
= ai.addAdapter( - "gemini", - new GeminiAdapter({ apiKey: "..." }) -); - -// Now "gemini" is available with full type safety -await aiWithGemini.chat({ - adapter: "gemini", - model: "gemini-pro", // Types updated automatically - messages: [{ role: "user", content: "Hello!" }], -}); -``` - -### Getting Available Adapters - -```typescript -console.log(ai.adapterNames); // ["openai", "anthropic"] -``` - -### Direct Adapter Access - -```typescript -const openai = ai.getAdapter("openai"); -console.log(openai.models); // Array of OpenAI models -``` - -## Benefits - -### 1. Compile-Time Safety - -**Before:** -```typescript -// Runtime error when deployed -await ai.chat({ - provider: "anthropic", - model: "gpt-4", // Oops! Wrong model -}); -// Error: Model 'gpt-4' not found for provider 'anthropic' -``` - -**After:** -```typescript -// Compile-time error in your editor -await ai.chat({ - adapter: "anthropic", - model: "gpt-4", // TypeScript error immediately -}); -// Error: Type '"gpt-4"' is not assignable to type 'claude-...' -``` - -### 2. IDE Autocomplete - -When you type `model:`, your IDE will show you **only** the models available for the selected adapter: - -- Select `openai` → See GPT models -- Select `anthropic` → See Claude models - -### 3. Refactoring Safety - -If you switch adapters, TypeScript will immediately flag any incompatible models: - -```typescript -// Change from OpenAI to Anthropic -await ai.chat({ - adapter: "anthropic", // Changed this - model: "gpt-4", // TypeScript immediately flags this as an error - messages: [], -}); -``` - -### 4. 
Self-Documenting Code - -The types serve as documentation - you can see all available models without checking docs: - -```typescript -// Hover over "model" to see all valid options -ai.chat({ adapter: "openai", model: /* hover here */ }); -``` - -## Creating Custom Adapters - -To create a custom adapter with type safety: - -```typescript -import { BaseAdapter } from "@tanstack/ai"; - -const MY_MODELS = ["my-model-1", "my-model-2", "my-model-3"] as const; - -export class MyAdapter extends BaseAdapter { - name = "my-adapter"; - models = MY_MODELS; - - // Implement required methods... -} -``` - -Then use it with full type safety: - -```typescript -const ai = new AI({ - adapters: { - "my-adapter": new MyAdapter({ apiKey: "..." }), - }, -}); - -// TypeScript now knows about "my-model-1", "my-model-2", etc. -await ai.chat({ - adapter: "my-adapter", - model: "my-model-1", // Autocomplete works! - messages: [], -}); -``` - -## Examples - -See the `/examples` directory for complete working examples: - -- `model-safety-demo.ts` - Comprehensive demonstration of type safety -- `type-safety-demo.ts` - Quick reference showing valid and invalid usage -- `multi-adapter-example.ts` - Real-world multi-adapter usage - -## TypeScript Configuration - -This package requires TypeScript 4.5 or higher for full type inference support. - -## License - -MIT +# Type-Safe Multi-Adapter AI API + +This package provides complete TypeScript type safety for working with multiple AI providers, ensuring that you can only use models that are supported by each adapter. 
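To illustrate the mechanism, per-adapter model validation can be modeled with generics along these lines (a simplified sketch with stand-in class names and trimmed model lists; the package's real types are richer):

```typescript
// Stand-in adapter whose model list is captured as a tuple type.
class StubAdapter<TModels extends readonly string[]> {
  constructor(public models: TModels) {}
}

// The model union for an adapter is derived from its tuple.
type ModelOf<A> = A extends StubAdapter<infer M extends readonly string[]>
  ? M[number]
  : never

class StubAI<TAdapters extends Record<string, StubAdapter<readonly string[]>>> {
  constructor(public adapters: TAdapters) {}
  // `model` is constrained by whichever adapter key is chosen.
  chat<K extends keyof TAdapters & string>(options: {
    adapter: K
    model: ModelOf<TAdapters[K]>
  }): string {
    return `${options.adapter} -> ${options.model}`
  }
}

const ai = new StubAI({
  openai: new StubAdapter(['gpt-4', 'gpt-4o'] as const),
  anthropic: new StubAdapter(['claude-3-5-sonnet-20241022'] as const),
})

ai.chat({ adapter: 'openai', model: 'gpt-4' }) // OK
// @ts-expect-error - a Claude model is not assignable when adapter is 'openai'
ai.chat({ adapter: 'openai', model: 'claude-3-5-sonnet-20241022' })
```

With this shape, narrowing `adapter` to a key narrows `model` to that adapter's tuple, which is what drives the validation and autocomplete behavior described below.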
+ +## Features + +- āœ… **Adapter-specific model validation** - TypeScript prevents using GPT models with Anthropic and vice versa +- āœ… **Full autocomplete support** - Your IDE suggests only valid models for the selected adapter +- āœ… **Compile-time safety** - Catch model incompatibilities before runtime +- āœ… **Multi-adapter support** - Use multiple AI providers in a single application +- āœ… **Type inference** - Model types are automatically inferred from adapter configuration + +## Installation + +```bash +npm install @tanstack/ai @tanstack/ai-openai @tanstack/ai-anthropic +``` + +## Basic Usage + +### Creating an AI instance with multiple adapters + +```typescript +import { AI } from '@tanstack/ai' +import { OpenAIAdapter } from '@tanstack/ai-openai' +import { AnthropicAdapter } from '@tanstack/ai-anthropic' + +const ai = new AI({ + adapters: { + openai: new OpenAIAdapter({ + apiKey: process.env.OPENAI_API_KEY!, + }), + anthropic: new AnthropicAdapter({ + apiKey: process.env.ANTHROPIC_API_KEY!, + }), + }, +}) +``` + +### Type-safe model selection + +```typescript +// āœ… VALID - OpenAI with GPT model +await ai.chat({ + adapter: 'openai', + model: 'gpt-4', // TypeScript knows this is valid + messages: [{ role: 'user', content: 'Hello!' }], +}) + +// āœ… VALID - Anthropic with Claude model +await ai.chat({ + adapter: 'anthropic', + model: 'claude-3-5-sonnet-20241022', // TypeScript knows this is valid + messages: [{ role: 'user', content: 'Hello!' }], +}) + +// āŒ COMPILE ERROR - Wrong model for adapter +await ai.chat({ + adapter: 'anthropic', + model: 'gpt-4', // TypeScript error: "gpt-4" not valid for Anthropic! + messages: [{ role: 'user', content: 'Hello!' }], +}) + +// āŒ COMPILE ERROR - Wrong model for adapter +await ai.chat({ + adapter: 'openai', + model: 'claude-3-5-sonnet-20241022', // TypeScript error: Claude not valid for OpenAI! + messages: [{ role: 'user', content: 'Hello!' 
}], +}) +``` + +## Available Models + +### OpenAI Models + +```typescript +type OpenAIModel = + | 'gpt-4' + | 'gpt-4-turbo' + | 'gpt-4-turbo-preview' + | 'gpt-4o' + | 'gpt-4o-mini' + | 'gpt-3.5-turbo' + | 'gpt-3.5-turbo-16k' + | 'gpt-3.5-turbo-instruct' + | 'text-embedding-ada-002' + | 'text-embedding-3-small' + | 'text-embedding-3-large' +``` + +### Anthropic Models + +```typescript +type AnthropicModel = + | 'claude-3-5-sonnet-20241022' + | 'claude-3-5-sonnet-20240620' + | 'claude-3-opus-20240229' + | 'claude-3-sonnet-20240229' + | 'claude-3-haiku-20240307' + | 'claude-2.1' + | 'claude-2.0' + | 'claude-instant-1.2' +``` + +## API Methods + +All methods support the same type-safe adapter and model selection: + +### Chat Completion + +```typescript +const result = await ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [ + { role: 'system', content: 'You are a helpful assistant.' }, + { role: 'user', content: 'What is TypeScript?' }, + ], + temperature: 0.7, + maxTokens: 500, +}) +``` + +### Streaming Chat + +```typescript +for await (const chunk of ai.streamChat({ + adapter: 'anthropic', + model: 'claude-3-5-sonnet-20241022', + messages: [{ role: 'user', content: 'Count from 1 to 5' }], +})) { + if (chunk.type === 'content') { + process.stdout.write(chunk.delta) + } +} +``` + +### Text Generation + +```typescript +const result = await ai.generateText({ + adapter: 'openai', + model: 'gpt-3.5-turbo-instruct', + prompt: 'Write a haiku about TypeScript', + maxTokens: 100, +}) +``` + +### Summarization + +```typescript +const result = await ai.summarize({ + adapter: 'anthropic', + model: 'claude-3-haiku-20240307', + text: 'Long text to summarize...', + style: 'bullet-points', + maxLength: 200, +}) +``` + +### Embeddings + +```typescript +const result = await ai.embed({ + adapter: 'openai', + model: 'text-embedding-3-small', + input: 'Text to embed', +}) +``` + +## Advanced Features + +### Dynamic Adapter Addition + +```typescript +const aiWithGemini = 
ai.addAdapter( + 'gemini', + new GeminiAdapter({ apiKey: '...' }), +) + +// Now "gemini" is available with full type safety +await aiWithGemini.chat({ + adapter: 'gemini', + model: 'gemini-pro', // Types updated automatically + messages: [{ role: 'user', content: 'Hello!' }], +}) +``` + +### Getting Available Adapters + +```typescript +console.log(ai.adapterNames) // ["openai", "anthropic"] +``` + +### Direct Adapter Access + +```typescript +const openai = ai.getAdapter('openai') +console.log(openai.models) // Array of OpenAI models +``` + +## Benefits + +### 1. Compile-Time Safety + +**Before:** + +```typescript +// Runtime error when deployed +await ai.chat({ + provider: 'anthropic', + model: 'gpt-4', // Oops! Wrong model +}) +// Error: Model 'gpt-4' not found for provider 'anthropic' +``` + +**After:** + +```typescript +// Compile-time error in your editor +await ai.chat({ + adapter: 'anthropic', + model: 'gpt-4', // TypeScript error immediately +}) +// Error: Type '"gpt-4"' is not assignable to type 'claude-...' +``` + +### 2. IDE Autocomplete + +When you type `model:`, your IDE will show you **only** the models available for the selected adapter: + +- Select `openai` → See GPT models +- Select `anthropic` → See Claude models + +### 3. Refactoring Safety + +If you switch adapters, TypeScript will immediately flag any incompatible models: + +```typescript +// Change from OpenAI to Anthropic +await ai.chat({ + adapter: 'anthropic', // Changed this + model: 'gpt-4', // TypeScript immediately flags this as an error + messages: [], +}) +``` + +### 4. 
Self-Documenting Code + +The types serve as documentation - you can see all available models without checking docs: + +```typescript +// Hover over "model" to see all valid options +ai.chat({ adapter: "openai", model: /* hover here */ }); +``` + +## Creating Custom Adapters + +To create a custom adapter with type safety: + +```typescript +import { BaseAdapter } from '@tanstack/ai' + +const MY_MODELS = ['my-model-1', 'my-model-2', 'my-model-3'] as const + +export class MyAdapter extends BaseAdapter { + name = 'my-adapter' + models = MY_MODELS + + // Implement required methods... +} +``` + +Then use it with full type safety: + +```typescript +const ai = new AI({ + adapters: { + 'my-adapter': new MyAdapter({ apiKey: '...' }), + }, +}) + +// TypeScript now knows about "my-model-1", "my-model-2", etc. +await ai.chat({ + adapter: 'my-adapter', + model: 'my-model-1', // Autocomplete works! + messages: [], +}) +``` + +## Examples + +See the `/examples` directory for complete working examples: + +- `model-safety-demo.ts` - Comprehensive demonstration of type safety +- `type-safety-demo.ts` - Quick reference showing valid and invalid usage +- `multi-adapter-example.ts` - Real-world multi-adapter usage + +## TypeScript Configuration + +This package requires TypeScript 4.5 or higher for full type inference support. + +## License + +MIT diff --git a/ai-docs/UNIFIED_CHAT_API.md b/ai-docs/UNIFIED_CHAT_API.md index cea718387..300fa93ea 100644 --- a/ai-docs/UNIFIED_CHAT_API.md +++ b/ai-docs/UNIFIED_CHAT_API.md @@ -1,389 +1,389 @@ -# Unified Chat API - -## Overview - -The chat API provides two methods for different use cases: - -- **`chat()`** - Returns `AsyncIterable` - streaming with **automatic tool execution loop** -- **`chatCompletion()`** - Returns `Promise` - standard non-streaming chat with optional structured output - -### šŸ”„ Automatic Tool Execution in `chat()` - -**IMPORTANT:** The `chat()` method runs an automatic tool execution loop. 
When you provide tools with `execute` functions: - -1. **Model calls a tool** → SDK executes it automatically -2. **SDK emits chunks** for tool calls and results (`tool_call`, `tool_result`) -3. **SDK adds results** to messages and continues conversation -4. **Loop repeats** until stopped by `agentLoopStrategy` (default: `maxIterations(5)`) - -**You don't need to manually execute tools or manage conversation state** - the SDK handles everything internally! - -**šŸ“š See also:** [Complete Tool Execution Loop Documentation](TOOL_EXECUTION_LOOP.md) - -## Migration Guide - -### Before (Using `as` option) - -```typescript -// For non-streaming -const result = await ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Hello" }], - as: "promise", -}); - -// For streaming -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Hello" }], - as: "stream", -}); -for await (const chunk of stream) { - console.log(chunk); -} - -// For HTTP response -const response = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Hello" }], - as: "response", -}); -return response; -``` - -### After (Separate Methods) - -```typescript -// For non-streaming - use chatCompletion() -const result = await ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Hello" }], -}); - -// For streaming - use chat() -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Hello" }], -}); -for await (const chunk of stream) { - console.log(chunk); -} - -// For HTTP response - use chat() + toStreamResponse() -import { toStreamResponse } from "@tanstack/ai"; - -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Hello" }], -}); -return toStreamResponse(stream); -``` - -## Usage Examples - -### 1. 
Promise Mode (chatCompletion) - -Standard non-streaming chat completion: - -```typescript -const result = await ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - messages: [ - { role: "system", content: "You are a helpful assistant." }, - { role: "user", content: "What is TypeScript?" }, - ], - temperature: 0.7, -}); - -console.log(result.content); -console.log(`Tokens used: ${result.usage.totalTokens}`); -``` - -### 2. Stream Mode (chat) - -Streaming with automatic tool execution loop: - -```typescript -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Write a story" }], - tools: [weatherTool], // Optional: tools are auto-executed - agentLoopStrategy: maxIterations(5), // Optional: control loop -}); - -for await (const chunk of stream) { - if (chunk.type === "content") { - process.stdout.write(chunk.delta); // Stream text response - } else if (chunk.type === "tool_call") { - console.log(`→ Calling tool: ${chunk.toolCall.function.name}`); - } else if (chunk.type === "tool_result") { - console.log(`āœ“ Tool result: ${chunk.content}`); - } else if (chunk.type === "done") { - console.log(`\nFinished: ${chunk.finishReason}`); - console.log(`Tokens: ${chunk.usage?.totalTokens}`); - } -} -``` - -**Chunk Types:** - -- `content` - Text content from the model (use `chunk.delta` for streaming) -- `tool_call` - Model is calling a tool (emitted by model, auto-executed by SDK) -- `tool_result` - Tool execution result (emitted after SDK executes tool) -- `done` - Stream complete (includes `finishReason` and token usage) -- `error` - An error occurred - -### 3. 
HTTP Response Mode - -Perfect for API endpoints: - -```typescript -import { toStreamResponse } from "@tanstack/ai"; - -// TanStack Start API Route -export const POST = async ({ request }: { request: Request }) => { - const { messages } = await request.json(); - - const stream = ai.chat({ - adapter: "openai", - model: "gpt-4o", - messages, - temperature: 0.7, - }); - - // Convert stream to Response with SSE headers - return toStreamResponse(stream); -}; -``` - -## With Fallbacks - -Both methods support fallbacks: - -```typescript -// Promise mode with fallbacks -const result = await ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Hello" }], - fallbacks: [ - { adapter: "anthropic", model: "claude-3-sonnet-20240229" }, - { adapter: "ollama", model: "llama2" }, - ], -}); - -// Stream mode with fallbacks -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Hello" }], - fallbacks: [{ adapter: "anthropic", model: "claude-3-sonnet-20240229" }], -}); - -// HTTP response with fallbacks (seamless failover in HTTP streaming!) -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Hello" }], - fallbacks: [{ adapter: "ollama", model: "llama2" }], -}); -return toStreamResponse(stream); -``` - -## Tool Execution with Automatic Loop - -**The `chat()` method automatically executes tools in a loop** - no manual management needed! 
- -```typescript -const tools = [ - { - type: "function" as const, - function: { - name: "get_weather", - description: "Get weather for a location", - parameters: { - type: "object", - properties: { - location: { type: "string" }, - }, - required: ["location"], - }, - }, - execute: async (args: { location: string }) => { - // This function is automatically called by the SDK - const weather = await fetchWeatherAPI(args.location); - return JSON.stringify(weather); - }, - }, -]; - -// Streaming chat with automatic tool execution -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "What's the weather in SF?" }], - tools, // Tools with execute functions are auto-executed - toolChoice: "auto", - agentLoopStrategy: maxIterations(5), // Control loop behavior -}); - -for await (const chunk of stream) { - if (chunk.type === "content") { - process.stdout.write(chunk.delta); // Stream text response - } else if (chunk.type === "tool_call") { - // Model decided to call a tool - SDK will execute it automatically - console.log(`→ Calling: ${chunk.toolCall.function.name}`); - } else if (chunk.type === "tool_result") { - // SDK executed the tool and got a result - console.log(`āœ“ Result: ${chunk.content}`); - } else if (chunk.type === "done") { - console.log(`Finished: ${chunk.finishReason}`); - } -} -``` - -**šŸ”„ What Happens Internally:** - -1. User asks: "What's the weather in SF?" -2. Model decides to call `get_weather` tool - - SDK emits `tool_call` chunk -3. **SDK automatically executes** `tools[0].execute({ location: "SF" })` - - SDK emits `tool_result` chunk -4. SDK adds assistant message (with tool call) + tool result to messages -5. **SDK automatically continues** conversation by calling model again -6. Model responds: "The weather in SF is sunny, 72°F" - - SDK emits `content` chunks -7. 
SDK emits `done` chunk - -**Key Points:** - -- āœ… Tools are **automatically executed** by the SDK (you don't call `execute`) -- āœ… Tool results are **automatically added** to messages -- āœ… Conversation **automatically continues** after tool execution -- āœ… Loop controlled by `agentLoopStrategy` (default: `maxIterations(5)`) -- āœ… All you do is handle chunks for display -- āœ… Custom strategies available for advanced control - -**Promise Mode (No Tool Execution):** - -The `chatCompletion()` method does NOT execute tools - it returns the model's response immediately: - -```typescript -// chatCompletion does not execute tools -const result = await ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "What's the weather in SF?" }], - tools, -}); - -// If model wanted to call a tool, result.toolCalls will contain the calls -// but they won't be executed. This is useful if you want manual control. -if (result.toolCalls) { - console.log("Model wants to call:", result.toolCalls); - // You would execute manually and call chatCompletion again -} -``` - -## Type Safety - -TypeScript automatically infers the correct return type: - -```typescript -// Type: Promise -const promise = ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - messages: [], -}); - -// Type: AsyncIterable -const stream = ai.chat({ adapter: "openai", model: "gpt-4", messages: [] }); -``` - -## Benefits - -1. **Clearer API**: Separate methods for different use cases -2. **Consistent Interface**: Same options across both methods -3. **HTTP Streaming Made Easy**: Use `toStreamResponse()` helper -4. **Fallbacks Everywhere**: Both methods support the same fallback mechanism -5. **Type Safety**: TypeScript infers the correct return type -6. 
**Structured Outputs**: Available in `chatCompletion()` method - -## Real-World Example: TanStack Start API - -```typescript -import { createAPIFileRoute } from "@tanstack/start/api"; -import { ai } from "~/lib/ai-client"; -import { toStreamResponse } from "@tanstack/ai"; - -export const Route = createAPIFileRoute("/api/chat")({ - POST: async ({ request }) => { - const { messages, tools } = await request.json(); - - const stream = ai.chat({ - adapter: "openAi", - model: "gpt-4o", - messages, - tools, - toolChoice: "auto", - maxIterations: 5, - temperature: 0.7, - fallbacks: [{ adapter: "ollama", model: "llama2" }], - }); - - return toStreamResponse(stream); - }, -}); -``` - -Client-side consumption: - -```typescript -const response = await fetch("/api/chat", { - method: "POST", - body: JSON.stringify({ messages, tools }), -}); - -const reader = response.body!.getReader(); -const decoder = new TextDecoder(); - -while (true) { - const { done, value } = await reader.read(); - if (done) break; - - const text = decoder.decode(value); - const lines = text.split("\n\n"); - - for (const line of lines) { - if (line.startsWith("data: ")) { - const data = line.slice(6); - if (data === "[DONE]") continue; - - const chunk = JSON.parse(data); - if (chunk.type === "content") { - console.log(chunk.delta); // Stream content to UI - } - } - } -} -``` - -## Summary - -The unified chat API provides: - -- **Two methods**: `chat()` for streaming, `chatCompletion()` for promises -- **Same options** across both methods -- **Built-in HTTP streaming** helper (`toStreamResponse`) -- **Full fallback support** in both methods -- **Type-safe** return types -- **Simpler code** for common patterns +# Unified Chat API + +## Overview + +The chat API provides two methods for different use cases: + +- **`chat()`** - Returns `AsyncIterable` - streaming with **automatic tool execution loop** +- **`chatCompletion()`** - Returns `Promise` - standard non-streaming chat with optional structured output + 
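Because `chat()` yields a standard `AsyncIterable`, its chunks compose with ordinary web APIs. For instance, a server-sent-events `Response` can be built from a chunk stream roughly like this (an illustrative sketch with an assumed `chunksToSSEResponse` name, not the package's actual helper):

```typescript
// Sketch: serialize each chunk as an SSE `data:` line, then signal completion.
function chunksToSSEResponse(chunks: AsyncIterable<unknown>): Response {
  const encoder = new TextEncoder()
  const body = new ReadableStream<Uint8Array>({
    async start(controller) {
      for await (const chunk of chunks) {
        controller.enqueue(encoder.encode(`data: ${JSON.stringify(chunk)}\n\n`))
      }
      controller.enqueue(encoder.encode('data: [DONE]\n\n'))
      controller.close()
    },
  })
  return new Response(body, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      Connection: 'keep-alive',
    },
  })
}
```

The client-side parsing shown later in this document (`data:` lines separated by blank lines, with a `[DONE]` sentinel) matches this framing.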
+### šŸ”„ Automatic Tool Execution in `chat()` + +**IMPORTANT:** The `chat()` method runs an automatic tool execution loop. When you provide tools with `execute` functions: + +1. **Model calls a tool** → SDK executes it automatically +2. **SDK emits chunks** for tool calls and results (`tool_call`, `tool_result`) +3. **SDK adds results** to messages and continues conversation +4. **Loop repeats** until stopped by `agentLoopStrategy` (default: `maxIterations(5)`) + +**You don't need to manually execute tools or manage conversation state** - the SDK handles everything internally! + +**šŸ“š See also:** [Complete Tool Execution Loop Documentation](TOOL_EXECUTION_LOOP.md) + +## Migration Guide + +### Before (Using `as` option) + +```typescript +// For non-streaming +const result = await ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Hello' }], + as: 'promise', +}) + +// For streaming +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Hello' }], + as: 'stream', +}) +for await (const chunk of stream) { + console.log(chunk) +} + +// For HTTP response +const response = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Hello' }], + as: 'response', +}) +return response +``` + +### After (Separate Methods) + +```typescript +// For non-streaming - use chatCompletion() +const result = await ai.chatCompletion({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Hello' }], +}) + +// For streaming - use chat() +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Hello' }], +}) +for await (const chunk of stream) { + console.log(chunk) +} + +// For HTTP response - use chat() + toStreamResponse() +import { toStreamResponse } from '@tanstack/ai' + +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Hello' }], +}) +return 
toStreamResponse(stream) +``` + +## Usage Examples + +### 1. Promise Mode (chatCompletion) + +Standard non-streaming chat completion: + +```typescript +const result = await ai.chatCompletion({ + adapter: 'openai', + model: 'gpt-4', + messages: [ + { role: 'system', content: 'You are a helpful assistant.' }, + { role: 'user', content: 'What is TypeScript?' }, + ], + temperature: 0.7, +}) + +console.log(result.content) +console.log(`Tokens used: ${result.usage.totalTokens}`) +``` + +### 2. Stream Mode (chat) + +Streaming with automatic tool execution loop: + +```typescript +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Write a story' }], + tools: [weatherTool], // Optional: tools are auto-executed + agentLoopStrategy: maxIterations(5), // Optional: control loop +}) + +for await (const chunk of stream) { + if (chunk.type === 'content') { + process.stdout.write(chunk.delta) // Stream text response + } else if (chunk.type === 'tool_call') { + console.log(`→ Calling tool: ${chunk.toolCall.function.name}`) + } else if (chunk.type === 'tool_result') { + console.log(`āœ“ Tool result: ${chunk.content}`) + } else if (chunk.type === 'done') { + console.log(`\nFinished: ${chunk.finishReason}`) + console.log(`Tokens: ${chunk.usage?.totalTokens}`) + } +} +``` + +**Chunk Types:** + +- `content` - Text content from the model (use `chunk.delta` for streaming) +- `tool_call` - Model is calling a tool (emitted by model, auto-executed by SDK) +- `tool_result` - Tool execution result (emitted after SDK executes tool) +- `done` - Stream complete (includes `finishReason` and token usage) +- `error` - An error occurred + +### 3. 
HTTP Response Mode + +Perfect for API endpoints: + +```typescript +import { toStreamResponse } from '@tanstack/ai' + +// TanStack Start API Route +export const POST = async ({ request }: { request: Request }) => { + const { messages } = await request.json() + + const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4o', + messages, + temperature: 0.7, + }) + + // Convert stream to Response with SSE headers + return toStreamResponse(stream) +} +``` + +## With Fallbacks + +Both methods support fallbacks: + +```typescript +// Promise mode with fallbacks +const result = await ai.chatCompletion({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Hello' }], + fallbacks: [ + { adapter: 'anthropic', model: 'claude-3-sonnet-20240229' }, + { adapter: 'ollama', model: 'llama2' }, + ], +}) + +// Stream mode with fallbacks +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Hello' }], + fallbacks: [{ adapter: 'anthropic', model: 'claude-3-sonnet-20240229' }], +}) + +// HTTP response with fallbacks (seamless failover in HTTP streaming!) +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Hello' }], + fallbacks: [{ adapter: 'ollama', model: 'llama2' }], +}) +return toStreamResponse(stream) +``` + +## Tool Execution with Automatic Loop + +**The `chat()` method automatically executes tools in a loop** - no manual management needed! 
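Conceptually, the loop that `chat()` runs can be sketched as plain TypeScript (hypothetical helper and type names throughout; the SDK's real internals differ):

```typescript
// Hypothetical names - a conceptual sketch of the agent loop, not the SDK's internals.
type ToolCall = { id: string; name: string; args: unknown }
type ModelTurn = { content: string; toolCalls: ToolCall[] }
type Tool = { name: string; execute: (args: unknown) => Promise<string> }
type Message = {
  role: string
  content?: string
  toolCalls?: ToolCall[]
  toolCallId?: string
}

async function runAgentLoop(
  callModel: (messages: Message[]) => Promise<ModelTurn>,
  tools: Tool[],
  messages: Message[],
  maxIterations = 5,
): Promise<string> {
  for (let i = 0; i < maxIterations; i++) {
    const turn = await callModel(messages)
    // No tool calls: the model answered directly, so the loop ends.
    if (turn.toolCalls.length === 0) return turn.content
    // Record the assistant turn that requested the tools.
    messages.push({ role: 'assistant', toolCalls: turn.toolCalls })
    // Execute each requested tool and append its result to the conversation.
    for (const call of turn.toolCalls) {
      const tool = tools.find((t) => t.name === call.name)
      const result = tool
        ? await tool.execute(call.args)
        : `unknown tool: ${call.name}`
      messages.push({ role: 'tool', toolCallId: call.id, content: result })
    }
    // Next iteration: call the model again with the tool results included.
  }
  throw new Error('agent loop exceeded maxIterations')
}
```

The example below shows the same flow from the consumer's side.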
+ +```typescript +const tools = [ + { + type: 'function' as const, + function: { + name: 'get_weather', + description: 'Get weather for a location', + parameters: { + type: 'object', + properties: { + location: { type: 'string' }, + }, + required: ['location'], + }, + }, + execute: async (args: { location: string }) => { + // This function is automatically called by the SDK + const weather = await fetchWeatherAPI(args.location) + return JSON.stringify(weather) + }, + }, +] + +// Streaming chat with automatic tool execution +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: "What's the weather in SF?" }], + tools, // Tools with execute functions are auto-executed + toolChoice: 'auto', + agentLoopStrategy: maxIterations(5), // Control loop behavior +}) + +for await (const chunk of stream) { + if (chunk.type === 'content') { + process.stdout.write(chunk.delta) // Stream text response + } else if (chunk.type === 'tool_call') { + // Model decided to call a tool - SDK will execute it automatically + console.log(`→ Calling: ${chunk.toolCall.function.name}`) + } else if (chunk.type === 'tool_result') { + // SDK executed the tool and got a result + console.log(`āœ“ Result: ${chunk.content}`) + } else if (chunk.type === 'done') { + console.log(`Finished: ${chunk.finishReason}`) + } +} +``` + +**šŸ”„ What Happens Internally:** + +1. User asks: "What's the weather in SF?" +2. Model decides to call `get_weather` tool + - SDK emits `tool_call` chunk +3. **SDK automatically executes** `tools[0].execute({ location: "SF" })` + - SDK emits `tool_result` chunk +4. SDK adds assistant message (with tool call) + tool result to messages +5. **SDK automatically continues** conversation by calling model again +6. Model responds: "The weather in SF is sunny, 72°F" + - SDK emits `content` chunks +7. 
SDK emits `done` chunk + +**Key Points:** + +- āœ… Tools are **automatically executed** by the SDK (you don't call `execute`) +- āœ… Tool results are **automatically added** to messages +- āœ… Conversation **automatically continues** after tool execution +- āœ… Loop controlled by `agentLoopStrategy` (default: `maxIterations(5)`) +- āœ… All you do is handle chunks for display +- āœ… Custom strategies available for advanced control + +**Promise Mode (No Tool Execution):** + +The `chatCompletion()` method does NOT execute tools - it returns the model's response immediately: + +```typescript +// chatCompletion does not execute tools +const result = await ai.chatCompletion({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: "What's the weather in SF?" }], + tools, +}) + +// If model wanted to call a tool, result.toolCalls will contain the calls +// but they won't be executed. This is useful if you want manual control. +if (result.toolCalls) { + console.log('Model wants to call:', result.toolCalls) + // You would execute manually and call chatCompletion again +} +``` + +## Type Safety + +TypeScript automatically infers the correct return type: + +```typescript +// Type: Promise +const promise = ai.chatCompletion({ + adapter: 'openai', + model: 'gpt-4', + messages: [], +}) + +// Type: AsyncIterable +const stream = ai.chat({ adapter: 'openai', model: 'gpt-4', messages: [] }) +``` + +## Benefits + +1. **Clearer API**: Separate methods for different use cases +2. **Consistent Interface**: Same options across both methods +3. **HTTP Streaming Made Easy**: Use `toStreamResponse()` helper +4. **Fallbacks Everywhere**: Both methods support the same fallback mechanism +5. **Type Safety**: TypeScript infers the correct return type +6. 
**Structured Outputs**: Available in `chatCompletion()` method
+
+## Real-World Example: TanStack Start API
+
+```typescript
+import { createAPIFileRoute } from '@tanstack/start/api'
+import { ai } from '~/lib/ai-client'
+import { toStreamResponse, maxIterations } from '@tanstack/ai'
+
+export const Route = createAPIFileRoute('/api/chat')({
+  POST: async ({ request }) => {
+    const { messages, tools } = await request.json()
+
+    const stream = ai.chat({
+      adapter: 'openAi',
+      model: 'gpt-4o',
+      messages,
+      tools,
+      toolChoice: 'auto',
+      agentLoopStrategy: maxIterations(5),
+      temperature: 0.7,
+      fallbacks: [{ adapter: 'ollama', model: 'llama2' }],
+    })
+
+    return toStreamResponse(stream)
+  },
+})
+```
+
+Client-side consumption:
+
+```typescript
+const response = await fetch('/api/chat', {
+  method: 'POST',
+  body: JSON.stringify({ messages, tools }),
+})
+
+const reader = response.body!.getReader()
+const decoder = new TextDecoder()
+
+while (true) {
+  const { done, value } = await reader.read()
+  if (done) break
+
+  const text = decoder.decode(value)
+  const lines = text.split('\n\n')
+
+  for (const line of lines) {
+    if (line.startsWith('data: ')) {
+      const data = line.slice(6)
+      if (data === '[DONE]') continue
+
+      const chunk = JSON.parse(data)
+      if (chunk.type === 'content') {
+        console.log(chunk.delta) // Stream content to UI
+      }
+    }
+  }
+}
+```
+
+## Summary
+
+The unified chat API provides:
+
+- **Two methods**: `chat()` for streaming, `chatCompletion()` for promises
+- **Same options** across both methods
+- **Built-in HTTP streaming** helper (`toStreamResponse`)
+- **Full fallback support** in both methods
+- **Type-safe** return types
+- **Simpler code** for common patterns
diff --git a/ai-docs/UNIFIED_CHAT_IMPLEMENTATION.md b/ai-docs/UNIFIED_CHAT_IMPLEMENTATION.md
index 440894e41..a75ad3c2b 100644
--- a/ai-docs/UNIFIED_CHAT_IMPLEMENTATION.md
+++ b/ai-docs/UNIFIED_CHAT_IMPLEMENTATION.md
@@ -1,246 +1,257 @@
-# Unified Chat API - Implementation Summary
-
-> **Note**: This document describes
the historical implementation with the `as` option. The current API uses separate methods: `chat()` for streaming and `chatCompletion()` for promise-based completion. See `docs/UNIFIED_CHAT_API.md` for current API documentation. - -## Overview - -The chat API was previously unified using an `as` configuration option. The current implementation separates streaming and promise-based completion into distinct methods: - -- **`chat()`** - Always returns `AsyncIterable` (streaming) -- **`chatCompletion()`** - Always returns `Promise` (promise-based) - -## Current API Design - -### Method Separation - -```typescript -class AI { - // Streaming method with automatic tool execution loop - async *chat(options): AsyncIterable { - // Manages tool execution internally using ToolCallManager - const toolCallManager = new ToolCallManager(options.tools || []); - - while (iterationCount < maxIterations) { - // Stream from adapter - for await (const chunk of this.adapter.chatStream(options)) { - yield chunk; - - // Track tool calls - if (chunk.type === "tool_call") { - toolCallManager.addToolCallChunk(chunk); - } - } - - // Execute tools if needed - if (shouldExecuteTools && toolCallManager.hasToolCalls()) { - const toolResults = yield* toolCallManager.executeTools(doneChunk); - messages = [...messages, ...toolResults]; - continue; // Next iteration - } - - break; // Done - } - } - - // Promise-based method (no tool execution loop) - async chatCompletion(options): Promise { - return this.adapter.chatCompletion(options); - } -} -``` - -### ToolCallManager Class - -The tool execution logic is extracted into a dedicated `ToolCallManager` class: - -```typescript -class ToolCallManager { - // Accumulate tool calls from streaming chunks - addToolCallChunk(chunk): void; - - // Check if there are tool calls to execute - hasToolCalls(): boolean; - - // Get all complete tool calls - getToolCalls(): ToolCall[]; - - // Execute tools and yield tool_result chunks - async *executeTools(doneChunk): 
AsyncGenerator; - - // Clear for next iteration - clear(): void; -} -``` - -**Benefits:** -- āœ… **Separation of concerns** - tool logic isolated from chat logic -- āœ… **Testable** - ToolCallManager can be unit tested independently -- āœ… **Maintainable** - changes to tool execution don't affect chat method -- āœ… **Reusable** - can be used in other contexts if needed - -### Benefits of Separate Methods - -āœ… **Clearer API**: Method names indicate return type -āœ… **Better Type Inference**: TypeScript knows exact return type without overloads -āœ… **Simpler Implementation**: No need for discriminated unions -āœ… **Easier to Use**: Less cognitive overhead - -## Usage Examples - -### 1. Promise Mode (chatCompletion) - -```typescript -const result = await ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Hello" }], -}); -``` - -### 2. Stream Mode (chat) - -```typescript -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Hello" }], -}); - -for await (const chunk of stream) { - console.log(chunk); -} -``` - -### 3. HTTP Response Mode - -```typescript -import { toStreamResponse } from "@tanstack/ai"; - -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Hello" }], -}); - -return toStreamResponse(stream); -``` - -## Historical Context - -The `as` option approach was implemented to unify `chat()` and `streamChat()` methods. However, separate methods provide better developer experience and type safety. - -### Migration Path - -See `docs/MIGRATION_UNIFIED_CHAT.md` for migration guide from the `as` option API to the current separate methods API. 
- -## Features Preserved - -āœ… **All features still supported**: -- Discriminated union types for adapter-model pairs -- Fallback mechanism (single-with-fallbacks or fallbacks-only) -- **Automatic tool execution loop** (via `ToolCallManager`) -- Error chunk detection for streaming -- Type-safe model selection - -āœ… **No breaking changes** to core functionality: -- Streaming behavior matches old `streamChat()` method -- Promise behavior matches old `chat()` method -- Error handling and fallbacks work identically -- **Tool execution now handled by `ToolCallManager` class** - -## Files Changed - -### Core Implementation -- āœ… `packages/ai/src/ai.ts` - - Removed `as` option from `chat()` method - - Made `chat()` streaming-only with automatic tool execution loop - - Added `chatCompletion()` method for promise-based calls - - Removed `streamToResponse()` private method (use `toStreamResponse()` from `stream-to-response.ts`) - - Refactored to use `ToolCallManager` for tool execution - -- āœ… `packages/ai/src/tool-call-manager.ts` (NEW) - - Encapsulates tool call accumulation, validation, and execution - - Independently testable - - Yields `tool_result` chunks during execution - - Returns tool result messages for conversation history - -- āœ… `packages/ai/src/types.ts` - - Added `ToolResultStreamChunk` type - - Added `"tool_result"` to `StreamChunkType` union - - Updated `StreamChunk` union to include `ToolResultStreamChunk` - -### Documentation -- āœ… `docs/UNIFIED_CHAT_API.md` - Updated API documentation with tool execution details -- āœ… `docs/MIGRATION_UNIFIED_CHAT.md` - Migration guide -- āœ… `docs/UNIFIED_CHAT_QUICK_REFERENCE.md` - Quick reference updated -- āœ… `docs/TOOL_EXECUTION_LOOP.md` (NEW) - Comprehensive tool execution guide -- āœ… `README.md` - Updated with tool execution loop documentation -- āœ… `examples/cli/README.md` - Updated with automatic tool execution details -- āœ… `packages/ai-react/README.md` - Updated backend examples with tool execution -- 
āœ… `packages/ai-client/README.md` - Added backend example with tool execution - -## Benefits of Current Approach - -1. **Simpler API Surface** - Two clear methods instead of one with options -2. **Consistent Interface** - Same options across both methods -3. **HTTP Streaming Made Easy** - Use `toStreamResponse()` helper -4. **Better Developer Experience** - Clear intent with method names -5. **Type Safety Maintained** - All discriminated unions still work -6. **Backward Compatible Migration** - Easy to migrate from old API -7. **Fallbacks Everywhere** - Both methods support same fallback mechanism -8. **Automatic Tool Execution** - `chat()` handles tool calling in a loop via `ToolCallManager` -9. **Testable Architecture** - Tool execution logic isolated in separate class -10. **Clean Separation** - `chat()` for streaming+tools, `chatCompletion()` for promises+structured output - -## Testing Recommendations - -Test scenarios: -1. āœ… Promise mode with primary adapter -2. āœ… Promise mode with fallbacks -3. āœ… Stream mode with primary adapter -4. āœ… Stream mode with fallbacks -5. āœ… HTTP response mode with primary adapter -6. āœ… HTTP response mode with fallbacks -7. āœ… Automatic tool execution in `chat()` (via `ToolCallManager`) -8. āœ… Manual tool handling in `chatCompletion()` -9. āœ… Error chunk detection triggers fallbacks -10. āœ… Type inference for both methods -11. āœ… Fallback-only mode (no primary adapter) -12. āœ… `ToolCallManager` unit tests (accumulation, validation, execution) -13. āœ… Multi-round tool execution (up to `maxIterations`) -14. āœ… Tool execution error handling - -## Next Steps - -### For Users -1. **Update method calls**: - - `chat({ as: "promise" })` → `chatCompletion()` - - `chat({ as: "stream" })` → `chat()` - - `chat({ as: "response" })` → `chat()` + `toStreamResponse()` -2. **Update imports**: Add `toStreamResponse` import if needed -3. 
**Test fallback behavior**: Verify seamless failover in all modes - -### Testing ToolCallManager - -The `ToolCallManager` class is independently testable. See `packages/ai/src/tool-call-manager.test.ts` for unit tests. - -Test scenarios: -- āœ… Accumulating streaming tool call chunks -- āœ… Filtering incomplete tool calls -- āœ… Executing tools with valid arguments -- āœ… Handling tool execution errors -- āœ… Handling tools without execute functions -- āœ… Multiple tool calls in one iteration -- āœ… Clearing tool calls between iterations - -### Future Enhancements -- Consider adding structured output support to streaming -- Add streaming response mode to embeddings -- Document SSE format for client-side consumption -- Add examples for different frameworks (Express, Fastify, etc.) - -## Conclusion - -Separating `chat()` and `chatCompletion()` provides a cleaner, more intuitive interface while maintaining all existing functionality. The two-method design covers all common use cases with clear, type-safe APIs. - -**Key Achievement**: Clear separation of concerns with `chat()` for streaming and `chatCompletion()` for promises, eliminating the need for a configuration option. +# Unified Chat API - Implementation Summary + +> **Note**: This document describes the historical implementation with the `as` option. The current API uses separate methods: `chat()` for streaming and `chatCompletion()` for promise-based completion. See `docs/UNIFIED_CHAT_API.md` for current API documentation. + +## Overview + +The chat API was previously unified using an `as` configuration option. 
The current implementation separates streaming and promise-based completion into distinct methods: + +- **`chat()`** - Always returns `AsyncIterable` (streaming) +- **`chatCompletion()`** - Always returns `Promise` (promise-based) + +## Current API Design + +### Method Separation + +```typescript +class AI { + // Streaming method with automatic tool execution loop + async *chat(options): AsyncIterable { + // Manages tool execution internally using ToolCallManager + const toolCallManager = new ToolCallManager(options.tools || []) + + while (iterationCount < maxIterations) { + // Stream from adapter + for await (const chunk of this.adapter.chatStream(options)) { + yield chunk + + // Track tool calls + if (chunk.type === 'tool_call') { + toolCallManager.addToolCallChunk(chunk) + } + } + + // Execute tools if needed + if (shouldExecuteTools && toolCallManager.hasToolCalls()) { + const toolResults = yield* toolCallManager.executeTools(doneChunk) + messages = [...messages, ...toolResults] + continue // Next iteration + } + + break // Done + } + } + + // Promise-based method (no tool execution loop) + async chatCompletion(options): Promise { + return this.adapter.chatCompletion(options) + } +} +``` + +### ToolCallManager Class + +The tool execution logic is extracted into a dedicated `ToolCallManager` class: + +```typescript +class ToolCallManager { + // Accumulate tool calls from streaming chunks + addToolCallChunk(chunk): void + + // Check if there are tool calls to execute + hasToolCalls(): boolean + + // Get all complete tool calls + getToolCalls(): ToolCall[] + + // Execute tools and yield tool_result chunks + async *executeTools( + doneChunk, + ): AsyncGenerator + + // Clear for next iteration + clear(): void +} +``` + +**Benefits:** + +- āœ… **Separation of concerns** - tool logic isolated from chat logic +- āœ… **Testable** - ToolCallManager can be unit tested independently +- āœ… **Maintainable** - changes to tool execution don't affect chat method +- āœ… 
**Reusable** - can be used in other contexts if needed + +### Benefits of Separate Methods + +āœ… **Clearer API**: Method names indicate return type +āœ… **Better Type Inference**: TypeScript knows exact return type without overloads +āœ… **Simpler Implementation**: No need for discriminated unions +āœ… **Easier to Use**: Less cognitive overhead + +## Usage Examples + +### 1. Promise Mode (chatCompletion) + +```typescript +const result = await ai.chatCompletion({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Hello' }], +}) +``` + +### 2. Stream Mode (chat) + +```typescript +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Hello' }], +}) + +for await (const chunk of stream) { + console.log(chunk) +} +``` + +### 3. HTTP Response Mode + +```typescript +import { toStreamResponse } from '@tanstack/ai' + +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Hello' }], +}) + +return toStreamResponse(stream) +``` + +## Historical Context + +The `as` option approach was implemented to unify `chat()` and `streamChat()` methods. However, separate methods provide better developer experience and type safety. + +### Migration Path + +See `docs/MIGRATION_UNIFIED_CHAT.md` for migration guide from the `as` option API to the current separate methods API. 
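
As a purely illustrative sketch of that mapping (the `legacyChat` helper and its option shapes are hypothetical, not exports of `@tanstack/ai`), old `as`-style calls can be routed onto the new separate methods like this:

```typescript
// Hypothetical compatibility shim (not part of @tanstack/ai): routes
// legacy `as`-style options to the new separate methods.
type LegacyAs = 'promise' | 'stream'

interface ChatLike {
  chat: (opts: Record<string, unknown>) => AsyncIterable<unknown>
  chatCompletion: (opts: Record<string, unknown>) => Promise<unknown>
}

function legacyChat(
  ai: ChatLike,
  options: { as?: LegacyAs } & Record<string, unknown>,
): Promise<unknown> | AsyncIterable<unknown> {
  // Strip the legacy discriminator before forwarding the options.
  const { as, ...rest } = options
  // `as: 'promise'` maps to chatCompletion(); everything else streams.
  return as === 'promise' ? ai.chatCompletion(rest) : ai.chat(rest)
}
```

The old `as: 'response'` case corresponds to `chat()` followed by `toStreamResponse()` on the resulting stream, as described in the migration guide.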
+ +## Features Preserved + +āœ… **All features still supported**: + +- Discriminated union types for adapter-model pairs +- Fallback mechanism (single-with-fallbacks or fallbacks-only) +- **Automatic tool execution loop** (via `ToolCallManager`) +- Error chunk detection for streaming +- Type-safe model selection + +āœ… **No breaking changes** to core functionality: + +- Streaming behavior matches old `streamChat()` method +- Promise behavior matches old `chat()` method +- Error handling and fallbacks work identically +- **Tool execution now handled by `ToolCallManager` class** + +## Files Changed + +### Core Implementation + +- āœ… `packages/ai/src/ai.ts` + - Removed `as` option from `chat()` method + - Made `chat()` streaming-only with automatic tool execution loop + - Added `chatCompletion()` method for promise-based calls + - Removed `streamToResponse()` private method (use `toStreamResponse()` from `stream-to-response.ts`) + - Refactored to use `ToolCallManager` for tool execution + +- āœ… `packages/ai/src/tool-call-manager.ts` (NEW) + - Encapsulates tool call accumulation, validation, and execution + - Independently testable + - Yields `tool_result` chunks during execution + - Returns tool result messages for conversation history + +- āœ… `packages/ai/src/types.ts` + - Added `ToolResultStreamChunk` type + - Added `"tool_result"` to `StreamChunkType` union + - Updated `StreamChunk` union to include `ToolResultStreamChunk` + +### Documentation + +- āœ… `docs/UNIFIED_CHAT_API.md` - Updated API documentation with tool execution details +- āœ… `docs/MIGRATION_UNIFIED_CHAT.md` - Migration guide +- āœ… `docs/UNIFIED_CHAT_QUICK_REFERENCE.md` - Quick reference updated +- āœ… `docs/TOOL_EXECUTION_LOOP.md` (NEW) - Comprehensive tool execution guide +- āœ… `README.md` - Updated with tool execution loop documentation +- āœ… `examples/cli/README.md` - Updated with automatic tool execution details +- āœ… `packages/ai-react/README.md` - Updated backend examples with tool 
execution +- āœ… `packages/ai-client/README.md` - Added backend example with tool execution + +## Benefits of Current Approach + +1. **Simpler API Surface** - Two clear methods instead of one with options +2. **Consistent Interface** - Same options across both methods +3. **HTTP Streaming Made Easy** - Use `toStreamResponse()` helper +4. **Better Developer Experience** - Clear intent with method names +5. **Type Safety Maintained** - All discriminated unions still work +6. **Backward Compatible Migration** - Easy to migrate from old API +7. **Fallbacks Everywhere** - Both methods support same fallback mechanism +8. **Automatic Tool Execution** - `chat()` handles tool calling in a loop via `ToolCallManager` +9. **Testable Architecture** - Tool execution logic isolated in separate class +10. **Clean Separation** - `chat()` for streaming+tools, `chatCompletion()` for promises+structured output + +## Testing Recommendations + +Test scenarios: + +1. āœ… Promise mode with primary adapter +2. āœ… Promise mode with fallbacks +3. āœ… Stream mode with primary adapter +4. āœ… Stream mode with fallbacks +5. āœ… HTTP response mode with primary adapter +6. āœ… HTTP response mode with fallbacks +7. āœ… Automatic tool execution in `chat()` (via `ToolCallManager`) +8. āœ… Manual tool handling in `chatCompletion()` +9. āœ… Error chunk detection triggers fallbacks +10. āœ… Type inference for both methods +11. āœ… Fallback-only mode (no primary adapter) +12. āœ… `ToolCallManager` unit tests (accumulation, validation, execution) +13. āœ… Multi-round tool execution (up to `maxIterations`) +14. āœ… Tool execution error handling + +## Next Steps + +### For Users + +1. **Update method calls**: + - `chat({ as: "promise" })` → `chatCompletion()` + - `chat({ as: "stream" })` → `chat()` + - `chat({ as: "response" })` → `chat()` + `toStreamResponse()` +2. **Update imports**: Add `toStreamResponse` import if needed +3. 
**Test fallback behavior**: Verify seamless failover in all modes + +### Testing ToolCallManager + +The `ToolCallManager` class is independently testable. See `packages/ai/src/tool-call-manager.test.ts` for unit tests. + +Test scenarios: + +- āœ… Accumulating streaming tool call chunks +- āœ… Filtering incomplete tool calls +- āœ… Executing tools with valid arguments +- āœ… Handling tool execution errors +- āœ… Handling tools without execute functions +- āœ… Multiple tool calls in one iteration +- āœ… Clearing tool calls between iterations + +### Future Enhancements + +- Consider adding structured output support to streaming +- Add streaming response mode to embeddings +- Document SSE format for client-side consumption +- Add examples for different frameworks (Express, Fastify, etc.) + +## Conclusion + +Separating `chat()` and `chatCompletion()` provides a cleaner, more intuitive interface while maintaining all existing functionality. The two-method design covers all common use cases with clear, type-safe APIs. + +**Key Achievement**: Clear separation of concerns with `chat()` for streaming and `chatCompletion()` for promises, eliminating the need for a configuration option. diff --git a/ai-docs/UNIFIED_CHAT_QUICK_REFERENCE.md b/ai-docs/UNIFIED_CHAT_QUICK_REFERENCE.md index 6d9e0e85b..e0ec12357 100644 --- a/ai-docs/UNIFIED_CHAT_QUICK_REFERENCE.md +++ b/ai-docs/UNIFIED_CHAT_QUICK_REFERENCE.md @@ -1,329 +1,329 @@ -# Unified Chat API - Quick Reference - -> **šŸ”„ Automatic Tool Execution:** The `chat()` method runs an automatic tool execution loop. Tools with `execute` functions are automatically called, results are added to messages, and the conversation continues - all handled internally by the SDK! -> -> **šŸ“š See also:** [Complete Tool Execution Loop Documentation](TOOL_EXECUTION_LOOP.md) - -## Two Methods for Different Use Cases - -```typescript -// 1. 
CHATCOMPLETION - Returns Promise -const result = await ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Hello" }], -}); - -// 2. CHAT - Returns AsyncIterable with automatic tool execution loop -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "Hello" }], - tools: [weatherTool], // Optional: auto-executed when called - agentLoopStrategy: maxIterations(5), // Optional: control loop -}); -for await (const chunk of stream) { - if (chunk.type === "content") process.stdout.write(chunk.delta); - else if (chunk.type === "tool_call") console.log("Calling tool..."); - else if (chunk.type === "tool_result") console.log("Tool executed!"); -} -``` - -## Quick Comparison - -| Feature | chatCompletion | chat | -| --------------------- | ------------------------------- | ---------------------------- | -| **Return Type** | `Promise` | `AsyncIterable` | -| **When to Use** | Need complete response | Real-time streaming | -| **Async/Await** | āœ… Yes | āœ… Yes (for await) | -| **Fallbacks** | āœ… Yes | āœ… Yes | -| **Tool Execution** | āŒ No (manual) | āœ… **Automatic loop** | -| **Type-Safe Models** | āœ… Yes | āœ… Yes | -| **Structured Output** | āœ… Yes | āŒ No | - -## Common Patterns - -### API Endpoint (TanStack Start) - -```typescript -import { toStreamResponse } from "@tanstack/ai"; - -export const Route = createAPIFileRoute("/api/chat")({ - POST: async ({ request }) => { - const { messages } = await request.json(); - - const stream = ai.chat({ - adapter: "openAi", - model: "gpt-4o", - messages, - fallbacks: [{ adapter: "ollama", model: "llama2" }], - }); - - return toStreamResponse(stream); - }, -}); -``` - -### CLI Application - -```typescript -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: userInput }], -}); - -for await (const chunk of stream) { - if (chunk.type === "content") { - process.stdout.write(chunk.delta); - 
} -} -``` - -### Batch Processing - -```typescript -const result = await ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: document }], -}); - -await saveToDatabase(result.content); -``` - -## With Tools - -### Automatic Execution (chat) - -The `chat()` method **automatically executes tools in a loop**: - -```typescript -const tools = [ - { - type: "function" as const, - function: { - name: "get_weather", - description: "Get weather for a location", - parameters: { - /* ... */ - }, - }, - execute: async (args: any) => { - // SDK automatically calls this when model calls the tool - return JSON.stringify({ temp: 72, condition: "sunny" }); - }, - }, -]; - -// Stream mode with automatic tool execution -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "What's the weather in SF?" }], - tools, // Tools with execute functions are auto-executed - toolChoice: "auto", - agentLoopStrategy: maxIterations(5), // Control loop behavior -}); - -for await (const chunk of stream) { - if (chunk.type === "content") { - process.stdout.write(chunk.delta); - } else if (chunk.type === "tool_call") { - console.log(`→ Calling: ${chunk.toolCall.function.name}`); - } else if (chunk.type === "tool_result") { - console.log(`āœ“ Result: ${chunk.content}`); - } -} -``` - -**How it works:** - -1. Model decides to call a tool → `tool_call` chunk -2. SDK executes `tool.execute()` → `tool_result` chunk -3. SDK adds result to messages → continues conversation -4. Repeats until complete (up to `maxIterations`) - -### Manual Execution (chatCompletion) - -The `chatCompletion()` method does NOT execute tools automatically: - -```typescript -// chatCompletion returns tool calls but doesn't execute them -const result = await ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - messages: [{ role: "user", content: "What's the weather in SF?" 
}], - tools, -}); - -// Check if model wants to call tools -if (result.toolCalls) { - console.log("Model wants to call:", result.toolCalls); - // You must execute manually and call chatCompletion again -} -``` - -## With Fallbacks - -Both methods support the same fallback mechanism: - -```typescript -// Promise with fallbacks -const result = await ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - messages: [...], - fallbacks: [ - { adapter: "anthropic", model: "claude-3-sonnet-20240229" }, - { adapter: "ollama", model: "llama2" } - ] -}); - -// Stream with fallbacks -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - fallbacks: [ - { adapter: "ollama", model: "llama2" } - ] -}); - -// HTTP response with fallbacks (seamless HTTP failover!) -import { toStreamResponse } from "@tanstack/ai"; - -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [...], - fallbacks: [ - { adapter: "ollama", model: "llama2" } - ] -}); -return toStreamResponse(stream); -``` - -## Fallback-Only Mode - -No primary adapter, just try fallbacks in order: - -```typescript -const result = await ai.chatCompletion({ - messages: [...], - fallbacks: [ - { adapter: "openai", model: "gpt-4" }, - { adapter: "anthropic", model: "claude-3-sonnet-20240229" }, - { adapter: "ollama", model: "llama2" } - ], -}); -``` - -## Migration from Old API - -### Before (using `as` option) - -```typescript -// Non-streaming -const result = await ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [], - as: "promise", -}); - -// Streaming -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [], - as: "stream", -}); - -// HTTP Response -const response = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [], - as: "response", -}); -``` - -### After (separate methods) - -```typescript -// Non-streaming - use chatCompletion() -const result = await ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - 
messages: [], -}); - -// Streaming - use chat() -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [], -}); - -// HTTP Response - use chat() + toStreamResponse() -import { toStreamResponse } from "@tanstack/ai"; - -const stream = ai.chat({ - adapter: "openai", - model: "gpt-4", - messages: [], -}); -return toStreamResponse(stream); -``` - -## Type Inference - -TypeScript automatically infers the correct return type: - -```typescript -// Type: Promise -const promise = ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - messages: [], -}); - -// Type: AsyncIterable -const stream = ai.chat({ adapter: "openai", model: "gpt-4", messages: [] }); -``` - -## Error Handling - -Both methods throw errors if all adapters fail: - -```typescript -try { - const result = await ai.chatCompletion({ - adapter: "openai", - model: "gpt-4", - messages: [...], - fallbacks: [{ adapter: "ollama", model: "llama2" }] - }); -} catch (error: any) { - console.error("All adapters failed:", error.message); -} -``` - -## Cheat Sheet - -| What You Want | Use This | Example | -| ----------------- | ------------------------------------ | ----------------------------------------------------- | -| Complete response | `chatCompletion()` | `const result = await ai.chatCompletion({...})` | -| Custom streaming | `chat()` | `for await (const chunk of ai.chat({...}))` | -| API endpoint | `chat()` + `toStreamResponse()` | `return toStreamResponse(ai.chat({...}))` | -| With fallbacks | Add `fallbacks: [...]` | `fallbacks: [{ adapter: "ollama", model: "llama2" }]` | -| With tools | Add `tools: [...]` | `tools: [{...}, {...}], toolChoice: "auto"` | -| Multiple adapters | Use `fallbacks` only | `fallbacks: [{ adapter: "a", model: "m1" }, {...}]` | -| Structured output | Use `chatCompletion()` with `output` | `chatCompletion({..., output: schema })` | - -## Documentation - -- **Full API Docs**: `docs/UNIFIED_CHAT_API.md` -- **Migration Guide**: `docs/MIGRATION_UNIFIED_CHAT.md` -- 
**Implementation**: `docs/UNIFIED_CHAT_IMPLEMENTATION.md` +# Unified Chat API - Quick Reference + +> **šŸ”„ Automatic Tool Execution:** The `chat()` method runs an automatic tool execution loop. Tools with `execute` functions are automatically called, results are added to messages, and the conversation continues - all handled internally by the SDK! +> +> **šŸ“š See also:** [Complete Tool Execution Loop Documentation](TOOL_EXECUTION_LOOP.md) + +## Two Methods for Different Use Cases + +```typescript +// 1. CHATCOMPLETION - Returns Promise +const result = await ai.chatCompletion({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Hello' }], +}) + +// 2. CHAT - Returns AsyncIterable with automatic tool execution loop +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: 'Hello' }], + tools: [weatherTool], // Optional: auto-executed when called + agentLoopStrategy: maxIterations(5), // Optional: control loop +}) +for await (const chunk of stream) { + if (chunk.type === 'content') process.stdout.write(chunk.delta) + else if (chunk.type === 'tool_call') console.log('Calling tool...') + else if (chunk.type === 'tool_result') console.log('Tool executed!') +} +``` + +## Quick Comparison + +| Feature | chatCompletion | chat | +| --------------------- | ------------------------------- | ---------------------------- | +| **Return Type** | `Promise` | `AsyncIterable` | +| **When to Use** | Need complete response | Real-time streaming | +| **Async/Await** | āœ… Yes | āœ… Yes (for await) | +| **Fallbacks** | āœ… Yes | āœ… Yes | +| **Tool Execution** | āŒ No (manual) | āœ… **Automatic loop** | +| **Type-Safe Models** | āœ… Yes | āœ… Yes | +| **Structured Output** | āœ… Yes | āŒ No | + +## Common Patterns + +### API Endpoint (TanStack Start) + +```typescript +import { toStreamResponse } from '@tanstack/ai' + +export const Route = createAPIFileRoute('/api/chat')({ + POST: async ({ request }) => { + const 
{ messages } = await request.json() + + const stream = ai.chat({ + adapter: 'openAi', + model: 'gpt-4o', + messages, + fallbacks: [{ adapter: 'ollama', model: 'llama2' }], + }) + + return toStreamResponse(stream) + }, +}) +``` + +### CLI Application + +```typescript +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: userInput }], +}) + +for await (const chunk of stream) { + if (chunk.type === 'content') { + process.stdout.write(chunk.delta) + } +} +``` + +### Batch Processing + +```typescript +const result = await ai.chatCompletion({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: document }], +}) + +await saveToDatabase(result.content) +``` + +## With Tools + +### Automatic Execution (chat) + +The `chat()` method **automatically executes tools in a loop**: + +```typescript +const tools = [ + { + type: 'function' as const, + function: { + name: 'get_weather', + description: 'Get weather for a location', + parameters: { + /* ... */ + }, + }, + execute: async (args: any) => { + // SDK automatically calls this when model calls the tool + return JSON.stringify({ temp: 72, condition: 'sunny' }) + }, + }, +] + +// Stream mode with automatic tool execution +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: "What's the weather in SF?" }], + tools, // Tools with execute functions are auto-executed + toolChoice: 'auto', + agentLoopStrategy: maxIterations(5), // Control loop behavior +}) + +for await (const chunk of stream) { + if (chunk.type === 'content') { + process.stdout.write(chunk.delta) + } else if (chunk.type === 'tool_call') { + console.log(`→ Calling: ${chunk.toolCall.function.name}`) + } else if (chunk.type === 'tool_result') { + console.log(`āœ“ Result: ${chunk.content}`) + } +} +``` + +**How it works:** + +1. Model decides to call a tool → `tool_call` chunk +2. SDK executes `tool.execute()` → `tool_result` chunk +3. 
SDK adds result to messages → continues conversation +4. Repeats until complete (up to `maxIterations`) + +### Manual Execution (chatCompletion) + +The `chatCompletion()` method does NOT execute tools automatically: + +```typescript +// chatCompletion returns tool calls but doesn't execute them +const result = await ai.chatCompletion({ + adapter: 'openai', + model: 'gpt-4', + messages: [{ role: 'user', content: "What's the weather in SF?" }], + tools, +}) + +// Check if model wants to call tools +if (result.toolCalls) { + console.log('Model wants to call:', result.toolCalls) + // You must execute manually and call chatCompletion again +} +``` + +## With Fallbacks + +Both methods support the same fallback mechanism: + +```typescript +// Promise with fallbacks +const result = await ai.chatCompletion({ + adapter: "openai", + model: "gpt-4", + messages: [...], + fallbacks: [ + { adapter: "anthropic", model: "claude-3-sonnet-20240229" }, + { adapter: "ollama", model: "llama2" } + ] +}); + +// Stream with fallbacks +const stream = ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + fallbacks: [ + { adapter: "ollama", model: "llama2" } + ] +}); + +// HTTP response with fallbacks (seamless HTTP failover!) 
+import { toStreamResponse } from "@tanstack/ai"; + +const stream = ai.chat({ + adapter: "openai", + model: "gpt-4", + messages: [...], + fallbacks: [ + { adapter: "ollama", model: "llama2" } + ] +}); +return toStreamResponse(stream); +``` + +## Fallback-Only Mode + +No primary adapter, just try fallbacks in order: + +```typescript +const result = await ai.chatCompletion({ + messages: [...], + fallbacks: [ + { adapter: "openai", model: "gpt-4" }, + { adapter: "anthropic", model: "claude-3-sonnet-20240229" }, + { adapter: "ollama", model: "llama2" } + ], +}); +``` + +## Migration from Old API + +### Before (using `as` option) + +```typescript +// Non-streaming +const result = await ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [], + as: 'promise', +}) + +// Streaming +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [], + as: 'stream', +}) + +// HTTP Response +const response = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [], + as: 'response', +}) +``` + +### After (separate methods) + +```typescript +// Non-streaming - use chatCompletion() +const result = await ai.chatCompletion({ + adapter: 'openai', + model: 'gpt-4', + messages: [], +}) + +// Streaming - use chat() +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [], +}) + +// HTTP Response - use chat() + toStreamResponse() +import { toStreamResponse } from '@tanstack/ai' + +const stream = ai.chat({ + adapter: 'openai', + model: 'gpt-4', + messages: [], +}) +return toStreamResponse(stream) +``` + +## Type Inference + +TypeScript automatically infers the correct return type: + +```typescript +// Type: Promise +const promise = ai.chatCompletion({ + adapter: 'openai', + model: 'gpt-4', + messages: [], +}) + +// Type: AsyncIterable +const stream = ai.chat({ adapter: 'openai', model: 'gpt-4', messages: [] }) +``` + +## Error Handling + +Both methods throw errors if all adapters fail: + +```typescript +try { + const result = await 
ai.chatCompletion({ + adapter: "openai", + model: "gpt-4", + messages: [...], + fallbacks: [{ adapter: "ollama", model: "llama2" }] + }); +} catch (error: any) { + console.error("All adapters failed:", error.message); +} +``` + +## Cheat Sheet + +| What You Want | Use This | Example | +| ----------------- | ------------------------------------ | ----------------------------------------------------- | +| Complete response | `chatCompletion()` | `const result = await ai.chatCompletion({...})` | +| Custom streaming | `chat()` | `for await (const chunk of ai.chat({...}))` | +| API endpoint | `chat()` + `toStreamResponse()` | `return toStreamResponse(ai.chat({...}))` | +| With fallbacks | Add `fallbacks: [...]` | `fallbacks: [{ adapter: "ollama", model: "llama2" }]` | +| With tools | Add `tools: [...]` | `tools: [{...}, {...}], toolChoice: "auto"` | +| Multiple adapters | Use `fallbacks` only | `fallbacks: [{ adapter: "a", model: "m1" }, {...}]` | +| Structured output | Use `chatCompletion()` with `output` | `chatCompletion({..., output: schema })` | + +## Documentation + +- **Full API Docs**: `docs/UNIFIED_CHAT_API.md` +- **Migration Guide**: `docs/MIGRATION_UNIFIED_CHAT.md` +- **Implementation**: `docs/UNIFIED_CHAT_IMPLEMENTATION.md` diff --git a/config.json b/config.json new file mode 100644 index 000000000..174cf479e --- /dev/null +++ b/config.json @@ -0,0 +1,14 @@ +{ + "$schema": "https://unpkg.com/@changesets/config@3.1.1/schema.json", + "changelog": [ + "@svitejs/changesets-changelog-github-compact", + { "repo": "TanStack/ai" } + ], + "commit": false, + "access": "public", + "baseBranch": "main", + "updateInternalDependencies": "patch", + "fixed": [], + "linked": [], + "ignore": [] +} diff --git a/docs/reference/classes/BaseAdapter.md b/docs/reference/classes/BaseAdapter.md new file mode 100644 index 000000000..7ea3273e8 --- /dev/null +++ b/docs/reference/classes/BaseAdapter.md @@ -0,0 +1,265 @@ +--- +id: BaseAdapter +title: BaseAdapter +--- + +# Abstract Class: 
BaseAdapter\ + +Defined in: [base-adapter.ts:22](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/base-adapter.ts#L22) + +Base adapter class with support for endpoint-specific models and provider options. + +Generic parameters: +- TChatModels: Models that support chat/text completion +- TEmbeddingModels: Models that support embeddings +- TChatProviderOptions: Provider-specific options for chat endpoint +- TEmbeddingProviderOptions: Provider-specific options for embedding endpoint +- TModelProviderOptionsByName: Provider-specific options for model by name + +## Type Parameters + +### TChatModels + +`TChatModels` *extends* `ReadonlyArray`\<`string`\> = `ReadonlyArray`\<`string`\> + +### TEmbeddingModels + +`TEmbeddingModels` *extends* `ReadonlyArray`\<`string`\> = `ReadonlyArray`\<`string`\> + +### TChatProviderOptions + +`TChatProviderOptions` *extends* `Record`\<`string`, `any`\> = `Record`\<`string`, `any`\> + +### TEmbeddingProviderOptions + +`TEmbeddingProviderOptions` *extends* `Record`\<`string`, `any`\> = `Record`\<`string`, `any`\> + +### TModelProviderOptionsByName + +`TModelProviderOptionsByName` *extends* `Record`\<`string`, `any`\> = `Record`\<`string`, `any`\> + +## Implements + +- [`AIAdapter`](../../interfaces/AIAdapter.md)\<`TChatModels`, `TEmbeddingModels`, `TChatProviderOptions`, `TEmbeddingProviderOptions`, `TModelProviderOptionsByName`\> + +## Constructors + +### Constructor + +```ts +new BaseAdapter(config): BaseAdapter; +``` + +Defined in: [base-adapter.ts:49](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/base-adapter.ts#L49) + +#### Parameters + +##### config + +[`AIAdapterConfig`](../../interfaces/AIAdapterConfig.md) = `{}` + +#### Returns + +`BaseAdapter`\<`TChatModels`, `TEmbeddingModels`, `TChatProviderOptions`, `TEmbeddingProviderOptions`, `TModelProviderOptionsByName`\> + +## Properties + +### \_chatProviderOptions? 
+ +```ts +optional _chatProviderOptions: TChatProviderOptions; +``` + +Defined in: [base-adapter.ts:44](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/base-adapter.ts#L44) + +#### Implementation of + +[`AIAdapter`](../../interfaces/AIAdapter.md).[`_chatProviderOptions`](../../interfaces/AIAdapter.md#_chatprovideroptions) + +*** + +### \_embeddingProviderOptions? + +```ts +optional _embeddingProviderOptions: TEmbeddingProviderOptions; +``` + +Defined in: [base-adapter.ts:45](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/base-adapter.ts#L45) + +#### Implementation of + +[`AIAdapter`](../../interfaces/AIAdapter.md).[`_embeddingProviderOptions`](../../interfaces/AIAdapter.md#_embeddingprovideroptions) + +*** + +### \_modelProviderOptionsByName + +```ts +_modelProviderOptionsByName: TModelProviderOptionsByName; +``` + +Defined in: [base-adapter.ts:47](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/base-adapter.ts#L47) + +Type-only map from model name to its specific provider options. +Used by the core AI types to narrow providerOptions based on the selected model. +Must be provided by all adapters. + +#### Implementation of + +[`AIAdapter`](../../interfaces/AIAdapter.md).[`_modelProviderOptionsByName`](../../interfaces/AIAdapter.md#_modelprovideroptionsbyname) + +*** + +### \_providerOptions? + +```ts +optional _providerOptions: TChatProviderOptions; +``` + +Defined in: [base-adapter.ts:43](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/base-adapter.ts#L43) + +#### Implementation of + +[`AIAdapter`](../../interfaces/AIAdapter.md).[`_providerOptions`](../../interfaces/AIAdapter.md#_provideroptions) + +*** + +### config + +```ts +protected config: AIAdapterConfig; +``` + +Defined in: [base-adapter.ts:40](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/base-adapter.ts#L40) + +*** + +### embeddingModels? 
+ +```ts +optional embeddingModels: TEmbeddingModels; +``` + +Defined in: [base-adapter.ts:39](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/base-adapter.ts#L39) + +Models that support embeddings + +#### Implementation of + +[`AIAdapter`](../../interfaces/AIAdapter.md).[`embeddingModels`](../../interfaces/AIAdapter.md#embeddingmodels) + +*** + +### models + +```ts +abstract models: TChatModels; +``` + +Defined in: [base-adapter.ts:38](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/base-adapter.ts#L38) + +Models that support chat/text completion + +#### Implementation of + +[`AIAdapter`](../../interfaces/AIAdapter.md).[`models`](../../interfaces/AIAdapter.md#models) + +*** + +### name + +```ts +abstract name: string; +``` + +Defined in: [base-adapter.ts:37](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/base-adapter.ts#L37) + +#### Implementation of + +[`AIAdapter`](../../interfaces/AIAdapter.md).[`name`](../../interfaces/AIAdapter.md#name) + +## Methods + +### chatStream() + +```ts +abstract chatStream(options): AsyncIterable; +``` + +Defined in: [base-adapter.ts:53](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/base-adapter.ts#L53) + +#### Parameters + +##### options + +[`ChatOptions`](../../interfaces/ChatOptions.md) + +#### Returns + +`AsyncIterable`\<[`StreamChunk`](../../type-aliases/StreamChunk.md)\> + +#### Implementation of + +[`AIAdapter`](../../interfaces/AIAdapter.md).[`chatStream`](../../interfaces/AIAdapter.md#chatstream) + +*** + +### createEmbeddings() + +```ts +abstract createEmbeddings(options): Promise; +``` + +Defined in: [base-adapter.ts:58](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/base-adapter.ts#L58) + +#### Parameters + +##### options + +[`EmbeddingOptions`](../../interfaces/EmbeddingOptions.md) + +#### Returns + +`Promise`\<[`EmbeddingResult`](../../interfaces/EmbeddingResult.md)\> + +#### Implementation of + 
+[`AIAdapter`](../../interfaces/AIAdapter.md).[`createEmbeddings`](../../interfaces/AIAdapter.md#createembeddings) + +*** + +### generateId() + +```ts +protected generateId(): string; +``` + +Defined in: [base-adapter.ts:60](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/base-adapter.ts#L60) + +#### Returns + +`string` + +*** + +### summarize() + +```ts +abstract summarize(options): Promise; +``` + +Defined in: [base-adapter.ts:55](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/base-adapter.ts#L55) + +#### Parameters + +##### options + +[`SummarizationOptions`](../../interfaces/SummarizationOptions.md) + +#### Returns + +`Promise`\<[`SummarizationResult`](../../interfaces/SummarizationResult.md)\> + +#### Implementation of + +[`AIAdapter`](../../interfaces/AIAdapter.md).[`summarize`](../../interfaces/AIAdapter.md#summarize) diff --git a/docs/reference/classes/ToolCallManager.md b/docs/reference/classes/ToolCallManager.md new file mode 100644 index 000000000..1d713c081 --- /dev/null +++ b/docs/reference/classes/ToolCallManager.md @@ -0,0 +1,190 @@ +--- +id: ToolCallManager +title: ToolCallManager +--- + +# Class: ToolCallManager + +Defined in: [tools/tool-calls.ts:41](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/tools/tool-calls.ts#L41) + +Manages tool call accumulation and execution for the chat() method's automatic tool execution loop. + +Responsibilities: +- Accumulates streaming tool call chunks (ID, name, arguments) +- Validates tool calls (filters out incomplete ones) +- Executes tool `execute` functions with parsed arguments +- Emits `tool_result` chunks for client visibility +- Returns tool result messages for conversation history + +This class is used internally by the AI.chat() method to handle the automatic +tool execution loop. It can also be used independently for custom tool execution logic. 
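+To make the accumulate-then-execute flow concrete, here is a self-contained sketch of the pattern. `MiniToolCallManager` and its types are illustrative stand-ins, not the actual `@tanstack/ai` internals; the real `ToolCallManager` additionally emits `tool_result` chunks for streaming clients and returns proper `ModelMessage` objects.
+
+```typescript
+// Minimal sketch of the accumulate-then-execute pattern described above.
+// All names here are hypothetical, for illustration only.
+
+type ToolCallChunk = {
+  index: number
+  toolCall: {
+    id: string
+    type: 'function'
+    function: { name: string; arguments: string }
+  }
+}
+
+type SketchTool = {
+  type: 'function'
+  function: { name: string; description: string; parameters: Record<string, unknown> }
+  execute?: (args: any) => string | Promise<string>
+}
+
+class MiniToolCallManager {
+  private calls = new Map<number, { id: string; name: string; args: string }>()
+
+  constructor(private tools: ReadonlyArray<SketchTool>) {}
+
+  // Streaming providers send a call's ID and name once, then stream its JSON
+  // arguments in fragments, so fragments are merged by stream index.
+  addToolCallChunk(chunk: ToolCallChunk): void {
+    const existing = this.calls.get(chunk.index) ?? { id: '', name: '', args: '' }
+    existing.id ||= chunk.toolCall.id
+    existing.name ||= chunk.toolCall.function.name
+    existing.args += chunk.toolCall.function.arguments
+    this.calls.set(chunk.index, existing)
+  }
+
+  hasToolCalls(): boolean {
+    // Only calls with both an ID and a name are complete enough to execute.
+    return [...this.calls.values()].some((c) => c.id && c.name)
+  }
+
+  async executeTools(): Promise<Array<{ role: 'tool'; tool_call_id: string; content: string }>> {
+    const results: Array<{ role: 'tool'; tool_call_id: string; content: string }> = []
+    for (const call of this.calls.values()) {
+      const tool = this.tools.find((t) => t.function.name === call.name)
+      if (!tool?.execute) continue
+      const content = await tool.execute(JSON.parse(call.args))
+      results.push({ role: 'tool', tool_call_id: call.id, content })
+    }
+    return results
+  }
+}
+```
+
+The key detail is merging argument fragments by stream index: a complete JSON arguments string only exists after the stream finishes, which is why execution happens after accumulation rather than per chunk.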
+ +## Example + +```typescript +const manager = new ToolCallManager(tools); + +// During streaming, accumulate tool calls +for await (const chunk of stream) { + if (chunk.type === "tool_call") { + manager.addToolCallChunk(chunk); + } +} + +// After stream completes, execute tools +if (manager.hasToolCalls()) { + const toolResults = yield* manager.executeTools(doneChunk); + messages = [...messages, ...toolResults]; + manager.clear(); +} +``` + +## Constructors + +### Constructor + +```ts +new ToolCallManager(tools): ToolCallManager; +``` + +Defined in: [tools/tool-calls.ts:45](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/tools/tool-calls.ts#L45) + +#### Parameters + +##### tools + +readonly [`Tool`](../../interfaces/Tool.md)[] + +#### Returns + +`ToolCallManager` + +## Methods + +### addToolCallChunk() + +```ts +addToolCallChunk(chunk): void; +``` + +Defined in: [tools/tool-calls.ts:53](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/tools/tool-calls.ts#L53) + +Add a tool call chunk to the accumulator +Handles streaming tool calls by accumulating arguments + +#### Parameters + +##### chunk + +###### index + +`number` + +###### toolCall + +\{ + `function`: \{ + `arguments`: `string`; + `name`: `string`; + \}; + `id`: `string`; + `type`: `"function"`; +\} + +###### toolCall.function + +\{ + `arguments`: `string`; + `name`: `string`; +\} + +###### toolCall.function.arguments + +`string` + +###### toolCall.function.name + +`string` + +###### toolCall.id + +`string` + +###### toolCall.type + +`"function"` + +#### Returns + +`void` + +*** + +### clear() + +```ts +clear(): void; +``` + +Defined in: [tools/tool-calls.ts:171](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/tools/tool-calls.ts#L171) + +Clear the tool calls map for the next iteration + +#### Returns + +`void` + +*** + +### executeTools() + +```ts +executeTools(doneChunk): AsyncGenerator; +``` + +Defined in: 
[tools/tool-calls.ts:111](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/tools/tool-calls.ts#L111) + +Execute all tool calls and return tool result messages +Also yields tool_result chunks for streaming + +#### Parameters + +##### doneChunk + +[`DoneStreamChunk`](../../interfaces/DoneStreamChunk.md) + +#### Returns + +`AsyncGenerator`\<[`ToolResultStreamChunk`](../../interfaces/ToolResultStreamChunk.md), [`ModelMessage`](../../interfaces/ModelMessage.md)[], `void`\> + +*** + +### getToolCalls() + +```ts +getToolCalls(): ToolCall[]; +``` + +Defined in: [tools/tool-calls.ts:101](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/tools/tool-calls.ts#L101) + +Get all complete tool calls (filtered for valid ID and name) + +#### Returns + +[`ToolCall`](../../interfaces/ToolCall.md)[] + +*** + +### hasToolCalls() + +```ts +hasToolCalls(): boolean; +``` + +Defined in: [tools/tool-calls.ts:94](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/tools/tool-calls.ts#L94) + +Check if there are any complete tool calls to execute + +#### Returns + +`boolean` diff --git a/docs/reference/functions/chat.md b/docs/reference/functions/chat.md new file mode 100644 index 000000000..9dd56efa8 --- /dev/null +++ b/docs/reference/functions/chat.md @@ -0,0 +1,55 @@ +--- +id: chat +title: chat +--- + +# Function: chat() + +```ts +function chat(options): AsyncIterable; +``` + +Defined in: [core/chat.ts:738](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/core/chat.ts#L738) + +Standalone chat streaming function with type inference from adapter +Returns an async iterable of StreamChunks for streaming responses +Includes automatic tool execution loop + +## Type Parameters + +### TAdapter + +`TAdapter` *extends* [`AIAdapter`](../../interfaces/AIAdapter.md)\<`any`, `any`, `any`, `any`, `any`\> + +### TModel + +`TModel` *extends* `any` + +## Parameters + +### options + 
+`Omit`\<[`ChatStreamOptionsUnion`](../../type-aliases/ChatStreamOptionsUnion.md)\<`TAdapter`\>, `"model"` \| `"providerOptions"` \| `"adapter"`\> & `object` + +Chat options + +## Returns + +`AsyncIterable`\<[`StreamChunk`](../../type-aliases/StreamChunk.md)\> + +## Example + +```typescript +const stream = chat({ + adapter: openai(), + model: 'gpt-4o', + messages: [{ role: 'user', content: 'Hello!' }], + tools: [weatherTool], // Optional: auto-executed when called +}); + +for await (const chunk of stream) { + if (chunk.type === 'content') { + console.log(chunk.delta); + } +} +``` diff --git a/docs/reference/functions/chatOptions.md b/docs/reference/functions/chatOptions.md new file mode 100644 index 000000000..90385ac78 --- /dev/null +++ b/docs/reference/functions/chatOptions.md @@ -0,0 +1,32 @@ +--- +id: chatOptions +title: chatOptions +--- + +# Function: chatOptions() + +```ts +function chatOptions(options): Omit, "model" | "providerOptions" | "messages" | "abortController"> & object; +``` + +Defined in: [utilities/chat-options.ts:3](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/utilities/chat-options.ts#L3) + +## Type Parameters + +### TAdapter + +`TAdapter` *extends* [`AIAdapter`](../../interfaces/AIAdapter.md)\<`any`, `any`, `any`, `any`, `any`\> + +### TModel + +`TModel` *extends* `any` + +## Parameters + +### options + +`Omit`\<[`ChatStreamOptionsUnion`](../../type-aliases/ChatStreamOptionsUnion.md)\<`TAdapter`\>, `"model"` \| `"providerOptions"` \| `"messages"` \| `"abortController"`\> & `object` + +## Returns + +`Omit`\<[`ChatStreamOptionsUnion`](../../type-aliases/ChatStreamOptionsUnion.md)\<`TAdapter`\>, `"model"` \| `"providerOptions"` \| `"messages"` \| `"abortController"`\> & `object` diff --git a/docs/reference/functions/combineStrategies.md b/docs/reference/functions/combineStrategies.md new file mode 100644 index 000000000..4821bbef9 --- /dev/null +++ b/docs/reference/functions/combineStrategies.md @@ -0,0 +1,44 @@ +--- +id: 
combineStrategies +title: combineStrategies +--- + +# Function: combineStrategies() + +```ts +function combineStrategies(strategies): AgentLoopStrategy; +``` + +Defined in: [utilities/agent-loop-strategies.ts:79](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/utilities/agent-loop-strategies.ts#L79) + +Creates a strategy that combines multiple strategies with AND logic +All strategies must return true to continue + +## Parameters + +### strategies + +[`AgentLoopStrategy`](../../type-aliases/AgentLoopStrategy.md)[] + +Array of strategies to combine + +## Returns + +[`AgentLoopStrategy`](../../type-aliases/AgentLoopStrategy.md) + +AgentLoopStrategy that continues only if all strategies return true + +## Example + +```typescript +const stream = chat({ + adapter: openai(), + model: "gpt-4o", + messages: [...], + tools: [weatherTool], + agentLoopStrategy: combineStrategies([ + maxIterations(10), + ({ messages }) => messages.length < 100, + ]), +}); +``` diff --git a/docs/reference/functions/embedding.md b/docs/reference/functions/embedding.md new file mode 100644 index 000000000..37b8806f0 --- /dev/null +++ b/docs/reference/functions/embedding.md @@ -0,0 +1,30 @@ +--- +id: embedding +title: embedding +--- + +# Function: embedding() + +```ts +function embedding(options): Promise; +``` + +Defined in: [core/embedding.ts:11](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/core/embedding.ts#L11) + +Standalone embedding function with type inference from adapter + +## Type Parameters + +### TAdapter + +`TAdapter` *extends* [`AIAdapter`](../../interfaces/AIAdapter.md)\<`any`, `any`, `any`, `any`, `any`\> + +## Parameters + +### options + +`Omit`\<[`EmbeddingOptions`](../../interfaces/EmbeddingOptions.md), `"model"`\> & `object` + +## Returns + +`Promise`\<[`EmbeddingResult`](../../interfaces/EmbeddingResult.md)\> diff --git a/docs/reference/functions/maxIterations.md b/docs/reference/functions/maxIterations.md new file mode 100644 index 
000000000..696ccfad5 --- /dev/null +++ b/docs/reference/functions/maxIterations.md @@ -0,0 +1,40 @@ +--- +id: maxIterations +title: maxIterations +--- + +# Function: maxIterations() + +```ts +function maxIterations(max): AgentLoopStrategy; +``` + +Defined in: [utilities/agent-loop-strategies.ts:20](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/utilities/agent-loop-strategies.ts#L20) + +Creates a strategy that continues for a maximum number of iterations + +## Parameters + +### max + +`number` + +Maximum number of iterations to allow + +## Returns + +[`AgentLoopStrategy`](../../type-aliases/AgentLoopStrategy.md) + +AgentLoopStrategy that stops after max iterations + +## Example + +```typescript +const stream = chat({ + adapter: openai(), + model: "gpt-4o", + messages: [...], + tools: [weatherTool], + agentLoopStrategy: maxIterations(3), // Max 3 iterations +}); +``` diff --git a/docs/reference/functions/summarize.md b/docs/reference/functions/summarize.md new file mode 100644 index 000000000..2a0c35d25 --- /dev/null +++ b/docs/reference/functions/summarize.md @@ -0,0 +1,30 @@ +--- +id: summarize +title: summarize +--- + +# Function: summarize() + +```ts +function summarize(options): Promise; +``` + +Defined in: [core/summarize.ts:11](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/core/summarize.ts#L11) + +Standalone summarize function with type inference from adapter + +## Type Parameters + +### TAdapter + +`TAdapter` *extends* [`AIAdapter`](../../interfaces/AIAdapter.md)\<`any`, `any`, `any`, `any`, `any`\> + +## Parameters + +### options + +`Omit`\<[`SummarizationOptions`](../../interfaces/SummarizationOptions.md), `"model"`\> & `object` + +## Returns + +`Promise`\<[`SummarizationResult`](../../interfaces/SummarizationResult.md)\> diff --git a/docs/reference/functions/toServerSentEventsStream.md b/docs/reference/functions/toServerSentEventsStream.md new file mode 100644 index 000000000..7cbeffe57 --- /dev/null +++ 
b/docs/reference/functions/toServerSentEventsStream.md @@ -0,0 +1,47 @@ +--- +id: toServerSentEventsStream +title: toServerSentEventsStream +--- + +# Function: toServerSentEventsStream() + +```ts +function toServerSentEventsStream(stream, abortController?): ReadableStream>; +``` + +Defined in: [utilities/stream-to-response.ts:22](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/utilities/stream-to-response.ts#L22) + +Convert a StreamChunk async iterable to a ReadableStream in Server-Sent Events format + +This creates a ReadableStream that emits chunks in SSE format: +- Each chunk is prefixed with "data: " +- Each chunk is followed by "\n\n" +- Stream ends with "data: [DONE]\n\n" + +## Parameters + +### stream + +`AsyncIterable`\<[`StreamChunk`](../../type-aliases/StreamChunk.md)\> + +AsyncIterable of StreamChunks from chat() + +### abortController? + +`AbortController` + +Optional AbortController to abort when stream is cancelled + +## Returns + +`ReadableStream`\<`Uint8Array`\<`ArrayBufferLike`\>\> + +ReadableStream in Server-Sent Events format + +## Example + +```typescript +const stream = chat({ adapter: openai(), model: "gpt-4o", messages: [...] 
});
+const readableStream = toServerSentEventsStream(stream);
+// Use with Response, or any API that accepts ReadableStream
+```
diff --git a/docs/reference/functions/toStreamResponse.md b/docs/reference/functions/toStreamResponse.md
new file mode 100644
index 000000000..883a9e222
--- /dev/null
+++ b/docs/reference/functions/toStreamResponse.md
@@ -0,0 +1,51 @@
+---
+id: toStreamResponse
+title: toStreamResponse
+---
+
+# Function: toStreamResponse()
+
+```ts
+function toStreamResponse(stream, init?): Response;
+```
+
+Defined in: [utilities/stream-to-response.ts:102](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/utilities/stream-to-response.ts#L102)
+
+Create a streaming HTTP response from a StreamChunk async iterable
+Includes proper headers for Server-Sent Events
+
+## Parameters
+
+### stream
+
+`AsyncIterable`\<[`StreamChunk`](../../type-aliases/StreamChunk.md)\>
+
+AsyncIterable of StreamChunks from chat()
+
+### init?
+
+`ResponseInit` & `object`
+
+Optional Response initialization options
+
+## Returns
+
+`Response`
+
+Response object with SSE headers and streaming body
+
+## Example
+
+```typescript
+export async function POST(request: Request) {
+  const { messages } = await request.json();
+  const abortController = new AbortController();
+  const stream = chat({
+    adapter: openai(),
+    model: "gpt-4o",
+    messages,
+    options: { abortSignal: abortController.signal }
+  });
+  return toStreamResponse(stream);
+}
+```
diff --git a/docs/reference/functions/tool.md b/docs/reference/functions/tool.md
new file mode 100644
index 000000000..f5fb6c7be
--- /dev/null
+++ b/docs/reference/functions/tool.md
@@ -0,0 +1,108 @@
+---
+id: tool
+title: tool
+---
+
+# Function: tool()
+
+```ts
+function tool(config): Tool;
+```
+
+Defined in: [tools/tool-utils.ts:70](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/tools/tool-utils.ts#L70)
+
+Helper to define a tool with enforced type safety.
+Automatically infers the execute function argument types from the parameters schema. +User must provide the full Tool structure with type: "function" and function: {...} + +## Type Parameters + +### TProps + +`TProps` *extends* `Record`\<`string`, `any`\> + +### TRequired + +`TRequired` *extends* readonly `string`[] \| `undefined` + +## Parameters + +### config + +#### execute + +(`args`) => `string` \| `Promise`\<`string`\> + +#### function + +\{ + `description`: `string`; + `name`: `string`; + `parameters`: \{ + `properties`: `TProps`; + `required?`: `TRequired`; + `type`: `"object"`; + \}; +\} + +#### function.description + +`string` + +#### function.name + +`string` + +#### function.parameters + +\{ + `properties`: `TProps`; + `required?`: `TRequired`; + `type`: `"object"`; +\} + +#### function.parameters.properties + +`TProps` + +#### function.parameters.required? + +`TRequired` + +#### function.parameters.type + +`"object"` + +#### type + +`"function"` + +## Returns + +[`Tool`](../../interfaces/Tool.md) + +## Example + +```typescript +const tools = { + myTool: tool({ + type: "function", + function: { + name: "myTool", + description: "My tool description", + parameters: { + type: "object", + properties: { + id: { type: "string", description: "The ID" }, + optional: { type: "number", description: "Optional param" }, + }, + required: ["id"], + }, + }, + execute: async (args) => { + // āœ… args is automatically typed as { id: string; optional?: number } + return args.id; + }, + }), +}; +``` diff --git a/docs/reference/functions/untilFinishReason.md b/docs/reference/functions/untilFinishReason.md new file mode 100644 index 000000000..02f697ba9 --- /dev/null +++ b/docs/reference/functions/untilFinishReason.md @@ -0,0 +1,40 @@ +--- +id: untilFinishReason +title: untilFinishReason +--- + +# Function: untilFinishReason() + +```ts +function untilFinishReason(stopReasons): AgentLoopStrategy; +``` + +Defined in: 
[utilities/agent-loop-strategies.ts:41](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/utilities/agent-loop-strategies.ts#L41) + +Creates a strategy that continues until a specific finish reason is encountered + +## Parameters + +### stopReasons + +`string`[] + +Finish reasons that should stop the loop + +## Returns + +[`AgentLoopStrategy`](../../type-aliases/AgentLoopStrategy.md) + +AgentLoopStrategy that stops on specific finish reasons + +## Example + +```typescript +const stream = chat({ + adapter: openai(), + model: "gpt-4o", + messages: [...], + tools: [weatherTool], + agentLoopStrategy: untilFinishReason(["stop", "length"]), +}); +``` diff --git a/docs/reference/index.md b/docs/reference/index.md new file mode 100644 index 000000000..a1632d557 --- /dev/null +++ b/docs/reference/index.md @@ -0,0 +1,62 @@ +--- +id: "@tanstack/ai" +title: "@tanstack/ai" +--- + +# @tanstack/ai + +## Classes + +- [BaseAdapter](../classes/BaseAdapter.md) +- [ToolCallManager](../classes/ToolCallManager.md) + +## Interfaces + +- [AgentLoopState](../interfaces/AgentLoopState.md) +- [AIAdapter](../interfaces/AIAdapter.md) +- [AIAdapterConfig](../interfaces/AIAdapterConfig.md) +- [ApprovalRequestedStreamChunk](../interfaces/ApprovalRequestedStreamChunk.md) +- [BaseStreamChunk](../interfaces/BaseStreamChunk.md) +- [ChatCompletionChunk](../interfaces/ChatCompletionChunk.md) +- [ChatOptions](../interfaces/ChatOptions.md) +- [ContentStreamChunk](../interfaces/ContentStreamChunk.md) +- [DoneStreamChunk](../interfaces/DoneStreamChunk.md) +- [EmbeddingOptions](../interfaces/EmbeddingOptions.md) +- [EmbeddingResult](../interfaces/EmbeddingResult.md) +- [ErrorStreamChunk](../interfaces/ErrorStreamChunk.md) +- [ModelMessage](../interfaces/ModelMessage.md) +- [ResponseFormat](../interfaces/ResponseFormat.md) +- [SummarizationOptions](../interfaces/SummarizationOptions.md) +- [SummarizationResult](../interfaces/SummarizationResult.md) +- 
[ThinkingStreamChunk](../interfaces/ThinkingStreamChunk.md) +- [Tool](../interfaces/Tool.md) +- [ToolCall](../interfaces/ToolCall.md) +- [ToolCallStreamChunk](../interfaces/ToolCallStreamChunk.md) +- [ToolConfig](../interfaces/ToolConfig.md) +- [ToolInputAvailableStreamChunk](../interfaces/ToolInputAvailableStreamChunk.md) +- [ToolResultStreamChunk](../interfaces/ToolResultStreamChunk.md) + +## Type Aliases + +- [AgentLoopStrategy](../type-aliases/AgentLoopStrategy.md) +- [ChatStreamOptionsUnion](../type-aliases/ChatStreamOptionsUnion.md) +- [ExtractModelsFromAdapter](../type-aliases/ExtractModelsFromAdapter.md) +- [StreamChunk](../type-aliases/StreamChunk.md) +- [StreamChunkType](../type-aliases/StreamChunkType.md) + +## Variables + +- [aiEventClient](../variables/aiEventClient.md) + +## Functions + +- [chat](../functions/chat.md) +- [chatOptions](../functions/chatOptions.md) +- [combineStrategies](../functions/combineStrategies.md) +- [embedding](../functions/embedding.md) +- [maxIterations](../functions/maxIterations.md) +- [summarize](../functions/summarize.md) +- [tool](../functions/tool.md) +- [toServerSentEventsStream](../functions/toServerSentEventsStream.md) +- [toStreamResponse](../functions/toStreamResponse.md) +- [untilFinishReason](../functions/untilFinishReason.md) diff --git a/docs/reference/interfaces/AIAdapter.md b/docs/reference/interfaces/AIAdapter.md new file mode 100644 index 000000000..6595d6dc9 --- /dev/null +++ b/docs/reference/interfaces/AIAdapter.md @@ -0,0 +1,182 @@ +--- +id: AIAdapter +title: AIAdapter +--- + +# Interface: AIAdapter\ + +Defined in: [types.ts:425](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L425) + +AI adapter interface with support for endpoint-specific models and provider options. 
+
+Generic parameters:
+- TChatModels: Models that support chat/text completion
+- TEmbeddingModels: Models that support embeddings
+- TChatProviderOptions: Provider-specific options for chat endpoint
+- TEmbeddingProviderOptions: Provider-specific options for embedding endpoint
+- TModelProviderOptionsByName: Provider-specific options for model by name
+
+## Type Parameters
+
+### TChatModels
+
+`TChatModels` *extends* `ReadonlyArray`\<`string`\> = `ReadonlyArray`\<`string`\>
+
+### TEmbeddingModels
+
+`TEmbeddingModels` *extends* `ReadonlyArray`\<`string`\> = `ReadonlyArray`\<`string`\>
+
+### TChatProviderOptions
+
+`TChatProviderOptions` *extends* `Record`\<`string`, `any`\> = `Record`\<`string`, `any`\>
+
+### TEmbeddingProviderOptions
+
+`TEmbeddingProviderOptions` *extends* `Record`\<`string`, `any`\> = `Record`\<`string`, `any`\>
+
+### TModelProviderOptionsByName
+
+`TModelProviderOptionsByName` *extends* `Record`\<`string`, `any`\> = `Record`\<`string`, `any`\>
+
+## Properties
+
+### \_chatProviderOptions?
+
+```ts
+optional _chatProviderOptions: TChatProviderOptions;
+```
+
+Defined in: [types.ts:441](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L441)
+
+***
+
+### \_embeddingProviderOptions?
+ +```ts +optional _embeddingProviderOptions: TEmbeddingProviderOptions; +``` + +Defined in: [types.ts:442](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L442) + +*** + +### \_modelProviderOptionsByName + +```ts +_modelProviderOptionsByName: TModelProviderOptionsByName; +``` + +Defined in: [types.ts:448](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L448) + +Type-only map from model name to its specific provider options. +Used by the core AI types to narrow providerOptions based on the selected model. +Must be provided by all adapters. + +*** + +### \_providerOptions? + +```ts +optional _providerOptions: TChatProviderOptions; +``` + +Defined in: [types.ts:440](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L440) + +*** + +### chatStream() + +```ts +chatStream: (options) => AsyncIterable<StreamChunk>; +``` + +Defined in: [types.ts:451](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L451) + +#### Parameters + +##### options + +[`ChatOptions`](../ChatOptions.md)\<`string`, `TChatProviderOptions`\> + +#### Returns + +`AsyncIterable`\<[`StreamChunk`](../../type-aliases/StreamChunk.md)\> + +*** + +### createEmbeddings() + +```ts +createEmbeddings: (options) => Promise<EmbeddingResult>; +``` + +Defined in: [types.ts:459](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L459) + +#### Parameters + +##### options + +[`EmbeddingOptions`](../EmbeddingOptions.md) + +#### Returns + +`Promise`\<[`EmbeddingResult`](../EmbeddingResult.md)\> + +*** + +### embeddingModels?
+ +```ts +optional embeddingModels: TEmbeddingModels; +``` + +Defined in: [types.ts:437](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L437) + +Models that support embeddings + +*** + +### models + +```ts +models: TChatModels; +``` + +Defined in: [types.ts:434](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L434) + +Models that support chat/text completion + +*** + +### name + +```ts +name: string; +``` + +Defined in: [types.ts:432](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L432) + +*** + +### summarize() + +```ts +summarize: (options) => Promise<SummarizationResult>; +``` + +Defined in: [types.ts:456](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L456) + +#### Parameters + +##### options + +[`SummarizationOptions`](../SummarizationOptions.md) + +#### Returns + +`Promise`\<[`SummarizationResult`](../SummarizationResult.md)\> diff --git a/docs/reference/interfaces/AIAdapterConfig.md b/docs/reference/interfaces/AIAdapterConfig.md new file mode 100644 index 000000000..aed5908ba --- /dev/null +++ b/docs/reference/interfaces/AIAdapterConfig.md @@ -0,0 +1,58 @@ +--- +id: AIAdapterConfig +title: AIAdapterConfig +--- + +# Interface: AIAdapterConfig + +Defined in: [types.ts:462](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L462) + +## Properties + +### apiKey? + +```ts +optional apiKey: string; +``` + +Defined in: [types.ts:463](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L463) + +*** + +### baseUrl? + +```ts +optional baseUrl: string; +``` + +Defined in: [types.ts:464](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L464) + +*** + +### headers? + +```ts +optional headers: Record; +``` + +Defined in: [types.ts:467](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L467) + +*** + +### maxRetries?
+ +```ts +optional maxRetries: number; +``` + +Defined in: [types.ts:466](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L466) + +*** + +### timeout? + +```ts +optional timeout: number; +``` + +Defined in: [types.ts:465](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L465) diff --git a/docs/reference/interfaces/AgentLoopState.md b/docs/reference/interfaces/AgentLoopState.md new file mode 100644 index 000000000..14a78b4f6 --- /dev/null +++ b/docs/reference/interfaces/AgentLoopState.md @@ -0,0 +1,46 @@ +--- +id: AgentLoopState +title: AgentLoopState +--- + +# Interface: AgentLoopState + +Defined in: [types.ts:205](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L205) + +State passed to agent loop strategy for determining whether to continue + +## Properties + +### finishReason + +```ts +finishReason: string | null; +``` + +Defined in: [types.ts:211](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L211) + +Finish reason from the last response + +*** + +### iterationCount + +```ts +iterationCount: number; +``` + +Defined in: [types.ts:207](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L207) + +Current iteration count (0-indexed) + +*** + +### messages + +```ts +messages: ModelMessage[]; +``` + +Defined in: [types.ts:209](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L209) + +Current messages array diff --git a/docs/reference/interfaces/ApprovalRequestedStreamChunk.md b/docs/reference/interfaces/ApprovalRequestedStreamChunk.md new file mode 100644 index 000000000..0d8e25f04 --- /dev/null +++ b/docs/reference/interfaces/ApprovalRequestedStreamChunk.md @@ -0,0 +1,120 @@ +--- +id: ApprovalRequestedStreamChunk +title: ApprovalRequestedStreamChunk +--- + +# Interface: ApprovalRequestedStreamChunk + +Defined in: 
[types.ts:323](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L323) + +## Extends + +- [`BaseStreamChunk`](../BaseStreamChunk.md) + +## Properties + +### approval + +```ts +approval: object; +``` + +Defined in: [types.ts:328](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L328) + +#### id + +```ts +id: string; +``` + +#### needsApproval + +```ts +needsApproval: true; +``` + +*** + +### id + +```ts +id: string; +``` + +Defined in: [types.ts:274](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L274) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`id`](../BaseStreamChunk.md#id) + +*** + +### input + +```ts +input: any; +``` + +Defined in: [types.ts:327](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L327) + +*** + +### model + +```ts +model: string; +``` + +Defined in: [types.ts:275](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L275) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`model`](../BaseStreamChunk.md#model) + +*** + +### timestamp + +```ts +timestamp: number; +``` + +Defined in: [types.ts:276](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L276) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`timestamp`](../BaseStreamChunk.md#timestamp) + +*** + +### toolCallId + +```ts +toolCallId: string; +``` + +Defined in: [types.ts:325](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L325) + +*** + +### toolName + +```ts +toolName: string; +``` + +Defined in: [types.ts:326](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L326) + +*** + +### type + +```ts +type: "approval-requested"; +``` + +Defined in: [types.ts:324](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L324) + +#### Overrides + 
+[`BaseStreamChunk`](../BaseStreamChunk.md).[`type`](../BaseStreamChunk.md#type) diff --git a/docs/reference/interfaces/BaseStreamChunk.md b/docs/reference/interfaces/BaseStreamChunk.md new file mode 100644 index 000000000..f8fc91435 --- /dev/null +++ b/docs/reference/interfaces/BaseStreamChunk.md @@ -0,0 +1,59 @@ +--- +id: BaseStreamChunk +title: BaseStreamChunk +--- + +# Interface: BaseStreamChunk + +Defined in: [types.ts:272](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L272) + +## Extended by + +- [`ContentStreamChunk`](../ContentStreamChunk.md) +- [`ToolCallStreamChunk`](../ToolCallStreamChunk.md) +- [`ToolResultStreamChunk`](../ToolResultStreamChunk.md) +- [`DoneStreamChunk`](../DoneStreamChunk.md) +- [`ErrorStreamChunk`](../ErrorStreamChunk.md) +- [`ApprovalRequestedStreamChunk`](../ApprovalRequestedStreamChunk.md) +- [`ToolInputAvailableStreamChunk`](../ToolInputAvailableStreamChunk.md) +- [`ThinkingStreamChunk`](../ThinkingStreamChunk.md) + +## Properties + +### id + +```ts +id: string; +``` + +Defined in: [types.ts:274](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L274) + +*** + +### model + +```ts +model: string; +``` + +Defined in: [types.ts:275](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L275) + +*** + +### timestamp + +```ts +timestamp: number; +``` + +Defined in: [types.ts:276](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L276) + +*** + +### type + +```ts +type: StreamChunkType; +``` + +Defined in: [types.ts:273](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L273) diff --git a/docs/reference/interfaces/ChatCompletionChunk.md b/docs/reference/interfaces/ChatCompletionChunk.md new file mode 100644 index 000000000..ffe878907 --- /dev/null +++ b/docs/reference/interfaces/ChatCompletionChunk.md @@ -0,0 +1,86 @@ +--- +id: ChatCompletionChunk +title: ChatCompletionChunk +--- + +# Interface: 
ChatCompletionChunk + +Defined in: [types.ts:362](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L362) + +## Properties + +### content + +```ts +content: string; +``` + +Defined in: [types.ts:365](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L365) + +*** + +### finishReason? + +```ts +optional finishReason: "stop" | "length" | "content_filter" | null; +``` + +Defined in: [types.ts:367](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L367) + +*** + +### id + +```ts +id: string; +``` + +Defined in: [types.ts:363](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L363) + +*** + +### model + +```ts +model: string; +``` + +Defined in: [types.ts:364](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L364) + +*** + +### role? + +```ts +optional role: "assistant"; +``` + +Defined in: [types.ts:366](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L366) + +*** + +### usage? + +```ts +optional usage: object; +``` + +Defined in: [types.ts:368](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L368) + +#### completionTokens + +```ts +completionTokens: number; +``` + +#### promptTokens + +```ts +promptTokens: number; +``` + +#### totalTokens + +```ts +totalTokens: number; +``` diff --git a/docs/reference/interfaces/ChatOptions.md b/docs/reference/interfaces/ChatOptions.md new file mode 100644 index 000000000..5866ecd14 --- /dev/null +++ b/docs/reference/interfaces/ChatOptions.md @@ -0,0 +1,145 @@ +--- +id: ChatOptions +title: ChatOptions +--- + +# Interface: ChatOptions\<TModel, TProviderOptionsSuperset, TOutput, TProviderOptionsForModel\> + +Defined in: [types.ts:231](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L231) + +Options passed into the SDK and further piped to the AI provider.
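As an illustrative sketch of how these options combine (the model name and message text are hypothetical, and `SketchMessage`/`SketchChatOptions` are local stand-ins rather than SDK exports), a minimal options object with request cancellation might look like:

```typescript
// Minimal sketch of a ChatOptions-shaped object, mirroring the
// properties documented on this page. Only a subset of fields is shown.
interface SketchMessage {
  role: "system" | "user" | "assistant" | "tool";
  content: string | null;
}

interface SketchChatOptions {
  model: string;
  messages: SketchMessage[];
  systemPrompts?: string[];
  abortController?: AbortController;
}

// Cancel the request if it takes longer than 5 seconds.
const abortController = new AbortController();
const timer = setTimeout(() => abortController.abort(), 5000);

const options: SketchChatOptions = {
  model: "example-model", // illustrative model name
  messages: [{ role: "user", content: "Hello!" }],
  systemPrompts: ["You are a helpful assistant."],
  abortController,
};

// In this sketch the request "finishes" immediately, so clear the timeout.
clearTimeout(timer);
console.log(options.messages.length);
```

The AbortController pattern mirrors the example under `abortController?` below: the same controller is both passed to the SDK and armed with a timeout.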
+ +## Type Parameters + +### TModel + +`TModel` *extends* `string` = `string` + +### TProviderOptionsSuperset + +`TProviderOptionsSuperset` *extends* `Record`\<`string`, `any`\> = `Record`\<`string`, `any`\> + +### TOutput + +`TOutput` *extends* [`ResponseFormat`](../ResponseFormat.md)\<`any`\> \| `undefined` = `undefined` + +### TProviderOptionsForModel + +`TProviderOptionsForModel` = `TProviderOptionsSuperset` + +## Properties + +### abortController? + +```ts +optional abortController: AbortController; +``` + +Defined in: [types.ts:259](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L259) + +AbortController for request cancellation. + +Allows you to cancel an in-progress request using an AbortController. +Useful for implementing timeouts or user-initiated cancellations. + +#### Example + +```ts +const abortController = new AbortController(); +setTimeout(() => abortController.abort(), 5000); // Cancel after 5 seconds +await chat({ ..., abortController }); +``` + +#### See + +https://developer.mozilla.org/en-US/docs/Web/API/AbortController + +*** + +### agentLoopStrategy? + +```ts +optional agentLoopStrategy: AgentLoopStrategy; +``` + +Defined in: [types.ts:241](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L241) + +*** + +### messages + +```ts +messages: ModelMessage[]; +``` + +Defined in: [types.ts:238](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L238) + +*** + +### model + +```ts +model: TModel; +``` + +Defined in: [types.ts:237](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L237) + +*** + +### options? + +```ts +optional options: CommonOptions; +``` + +Defined in: [types.ts:242](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L242) + +*** + +### output? 
+ +```ts +optional output: TOutput; +``` + +Defined in: [types.ts:245](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L245) + +*** + +### providerOptions? + +```ts +optional providerOptions: TProviderOptionsForModel; +``` + +Defined in: [types.ts:243](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L243) + +*** + +### request? + +```ts +optional request: Request | RequestInit; +``` + +Defined in: [types.ts:244](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L244) + +*** + +### systemPrompts? + +```ts +optional systemPrompts: string[]; +``` + +Defined in: [types.ts:240](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L240) + +*** + +### tools? + +```ts +optional tools: Tool[]; +``` + +Defined in: [types.ts:239](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L239) diff --git a/docs/reference/interfaces/ContentStreamChunk.md b/docs/reference/interfaces/ContentStreamChunk.md new file mode 100644 index 000000000..9f953f9a6 --- /dev/null +++ b/docs/reference/interfaces/ContentStreamChunk.md @@ -0,0 +1,98 @@ +--- +id: ContentStreamChunk +title: ContentStreamChunk +--- + +# Interface: ContentStreamChunk + +Defined in: [types.ts:279](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L279) + +## Extends + +- [`BaseStreamChunk`](../BaseStreamChunk.md) + +## Properties + +### content + +```ts +content: string; +``` + +Defined in: [types.ts:282](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L282) + +*** + +### delta + +```ts +delta: string; +``` + +Defined in: [types.ts:281](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L281) + +*** + +### id + +```ts +id: string; +``` + +Defined in: [types.ts:274](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L274) + +#### Inherited from + 
+[`BaseStreamChunk`](../BaseStreamChunk.md).[`id`](../BaseStreamChunk.md#id) + +*** + +### model + +```ts +model: string; +``` + +Defined in: [types.ts:275](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L275) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`model`](../BaseStreamChunk.md#model) + +*** + +### role? + +```ts +optional role: "assistant"; +``` + +Defined in: [types.ts:283](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L283) + +*** + +### timestamp + +```ts +timestamp: number; +``` + +Defined in: [types.ts:276](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L276) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`timestamp`](../BaseStreamChunk.md#timestamp) + +*** + +### type + +```ts +type: "content"; +``` + +Defined in: [types.ts:280](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L280) + +#### Overrides + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`type`](../BaseStreamChunk.md#type) diff --git a/docs/reference/interfaces/DoneStreamChunk.md b/docs/reference/interfaces/DoneStreamChunk.md new file mode 100644 index 000000000..51689b8a5 --- /dev/null +++ b/docs/reference/interfaces/DoneStreamChunk.md @@ -0,0 +1,106 @@ +--- +id: DoneStreamChunk +title: DoneStreamChunk +--- + +# Interface: DoneStreamChunk + +Defined in: [types.ts:305](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L305) + +## Extends + +- [`BaseStreamChunk`](../BaseStreamChunk.md) + +## Properties + +### finishReason + +```ts +finishReason: "stop" | "length" | "content_filter" | "tool_calls" | null; +``` + +Defined in: [types.ts:307](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L307) + +*** + +### id + +```ts +id: string; +``` + +Defined in: [types.ts:274](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L274) + +#### Inherited from + 
+[`BaseStreamChunk`](../BaseStreamChunk.md).[`id`](../BaseStreamChunk.md#id) + +*** + +### model + +```ts +model: string; +``` + +Defined in: [types.ts:275](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L275) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`model`](../BaseStreamChunk.md#model) + +*** + +### timestamp + +```ts +timestamp: number; +``` + +Defined in: [types.ts:276](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L276) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`timestamp`](../BaseStreamChunk.md#timestamp) + +*** + +### type + +```ts +type: "done"; +``` + +Defined in: [types.ts:306](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L306) + +#### Overrides + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`type`](../BaseStreamChunk.md#type) + +*** + +### usage? + +```ts +optional usage: object; +``` + +Defined in: [types.ts:308](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L308) + +#### completionTokens + +```ts +completionTokens: number; +``` + +#### promptTokens + +```ts +promptTokens: number; +``` + +#### totalTokens + +```ts +totalTokens: number; +``` diff --git a/docs/reference/interfaces/EmbeddingOptions.md b/docs/reference/interfaces/EmbeddingOptions.md new file mode 100644 index 000000000..0c73ef4da --- /dev/null +++ b/docs/reference/interfaces/EmbeddingOptions.md @@ -0,0 +1,38 @@ +--- +id: EmbeddingOptions +title: EmbeddingOptions +--- + +# Interface: EmbeddingOptions + +Defined in: [types.ts:394](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L394) + +## Properties + +### dimensions? 
+ +```ts +optional dimensions: number; +``` + +Defined in: [types.ts:397](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L397) + +*** + +### input + +```ts +input: string | string[]; +``` + +Defined in: [types.ts:396](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L396) + +*** + +### model + +```ts +model: string; +``` + +Defined in: [types.ts:395](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L395) diff --git a/docs/reference/interfaces/EmbeddingResult.md b/docs/reference/interfaces/EmbeddingResult.md new file mode 100644 index 000000000..f1ec170c0 --- /dev/null +++ b/docs/reference/interfaces/EmbeddingResult.md @@ -0,0 +1,60 @@ +--- +id: EmbeddingResult +title: EmbeddingResult +--- + +# Interface: EmbeddingResult + +Defined in: [types.ts:400](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L400) + +## Properties + +### embeddings + +```ts +embeddings: number[][]; +``` + +Defined in: [types.ts:403](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L403) + +*** + +### id + +```ts +id: string; +``` + +Defined in: [types.ts:401](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L401) + +*** + +### model + +```ts +model: string; +``` + +Defined in: [types.ts:402](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L402) + +*** + +### usage + +```ts +usage: object; +``` + +Defined in: [types.ts:404](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L404) + +#### promptTokens + +```ts +promptTokens: number; +``` + +#### totalTokens + +```ts +totalTokens: number; +``` diff --git a/docs/reference/interfaces/ErrorStreamChunk.md b/docs/reference/interfaces/ErrorStreamChunk.md new file mode 100644 index 000000000..a75aba122 --- /dev/null +++ b/docs/reference/interfaces/ErrorStreamChunk.md @@ -0,0 +1,90 @@ +--- +id: ErrorStreamChunk +title: 
ErrorStreamChunk +--- + +# Interface: ErrorStreamChunk + +Defined in: [types.ts:315](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L315) + +## Extends + +- [`BaseStreamChunk`](../BaseStreamChunk.md) + +## Properties + +### error + +```ts +error: object; +``` + +Defined in: [types.ts:317](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L317) + +#### code? + +```ts +optional code: string; +``` + +#### message + +```ts +message: string; +``` + +*** + +### id + +```ts +id: string; +``` + +Defined in: [types.ts:274](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L274) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`id`](../BaseStreamChunk.md#id) + +*** + +### model + +```ts +model: string; +``` + +Defined in: [types.ts:275](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L275) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`model`](../BaseStreamChunk.md#model) + +*** + +### timestamp + +```ts +timestamp: number; +``` + +Defined in: [types.ts:276](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L276) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`timestamp`](../BaseStreamChunk.md#timestamp) + +*** + +### type + +```ts +type: "error"; +``` + +Defined in: [types.ts:316](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L316) + +#### Overrides + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`type`](../BaseStreamChunk.md#type) diff --git a/docs/reference/interfaces/ModelMessage.md b/docs/reference/interfaces/ModelMessage.md new file mode 100644 index 000000000..403690993 --- /dev/null +++ b/docs/reference/interfaces/ModelMessage.md @@ -0,0 +1,58 @@ +--- +id: ModelMessage +title: ModelMessage +--- + +# Interface: ModelMessage + +Defined in: [types.ts:12](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L12) + 
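As a sketch of how the fields documented below combine in a conversation (the tool-call id and JSON payload are illustrative, `SketchModelMessage` is a local stand-in for this interface, and `toolCalls` is omitted since `ToolCall` is documented separately):

```typescript
// A short conversation using the ModelMessage shape. An assistant turn
// that only issued tool calls may carry null content, and a tool result
// message links back to the originating call via toolCallId.
interface SketchModelMessage {
  role: "system" | "user" | "assistant" | "tool";
  content: string | null;
  name?: string;
  toolCallId?: string;
}

const conversation: SketchModelMessage[] = [
  { role: "system", content: "You are a weather assistant." },
  { role: "user", content: "What's the weather in Paris?" },
  { role: "assistant", content: null }, // assistant turn that called a tool
  { role: "tool", content: '{"tempC":18}', toolCallId: "call_123" },
];

console.log(conversation.length);
```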
+ +## Properties + +### content + +```ts +content: string | null; +``` + +Defined in: [types.ts:14](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L14) + +*** + +### name? + +```ts +optional name: string; +``` + +Defined in: [types.ts:15](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L15) + +*** + +### role + +```ts +role: "system" | "user" | "assistant" | "tool"; +``` + +Defined in: [types.ts:13](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L13) + +*** + +### toolCallId? + +```ts +optional toolCallId: string; +``` + +Defined in: [types.ts:17](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L17) + +*** + +### toolCalls? + +```ts +optional toolCalls: ToolCall[]; +``` + +Defined in: [types.ts:16](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L16) diff --git a/docs/reference/interfaces/ResponseFormat.md b/docs/reference/interfaces/ResponseFormat.md new file mode 100644 index 000000000..62f53a2f4 --- /dev/null +++ b/docs/reference/interfaces/ResponseFormat.md @@ -0,0 +1,151 @@ +--- +id: ResponseFormat +title: ResponseFormat +--- + +# Interface: ResponseFormat\<TData\> + +Defined in: [types.ts:121](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L121) + +Structured output format specification. + +Constrains the model's output to match a specific JSON structure. +Useful for extracting structured data, form filling, or ensuring consistent response formats. + +## See + + - https://platform.openai.com/docs/guides/structured-outputs + - https://sdk.vercel.ai/docs/ai-sdk-core/structured-outputs + +## Type Parameters + +### TData + +`TData` = `any` + +TypeScript type of the expected data structure (for type safety) + +## Properties + +### \_\_data?
+ +```ts +optional __data: TData; +``` + +Defined in: [types.ts:199](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L199) + +**`Internal`** + +Type-only property to carry the inferred data type. + +This is never set at runtime - it only exists for TypeScript type inference. +Allows the SDK to know what type to expect when parsing the response. + +*** + +### json\_schema? + +```ts +optional json_schema: object; +``` + +Defined in: [types.ts:138](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L138) + +JSON schema specification (required when type is "json_schema"). + +Defines the exact structure the model's output must conform to. +OpenAI's structured outputs will guarantee the output matches this schema. + +#### description? + +```ts +optional description: string; +``` + +Optional description of what the schema represents. + +Helps document the purpose of this structured output. + +##### Example + +```ts +"User profile information including name, email, and preferences" +``` + +#### name + +```ts +name: string; +``` + +Unique name for the schema. + +Used to identify the schema in logs and debugging. +Should be descriptive (e.g., "user_profile", "search_results"). + +#### schema + +```ts +schema: Record; +``` + +JSON Schema definition for the expected output structure. + +Must be a valid JSON Schema (draft 2020-12 or compatible). +The model's output will be validated against this schema. + +##### See + +https://json-schema.org/ + +##### Example + +```ts +{ +  type: "object", +  properties: { +    name: { type: "string" }, +    age: { type: "number" }, +    email: { type: "string", format: "email" } +  }, +  required: ["name", "email"], +  additionalProperties: false +} +``` + +#### strict? + +```ts +optional strict: boolean; +``` + +Whether to enforce strict schema validation. + +When true (recommended), the model guarantees output will match the schema exactly.
+When false, the model will "best effort" match the schema. + +Default: true (for providers that support it) + +##### See + +https://platform.openai.com/docs/guides/structured-outputs#strict-mode + +*** + +### type + +```ts +type: "json_object" | "json_schema"; +``` + +Defined in: [types.ts:130](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L130) + +Type of structured output. + +- "json_object": Forces the model to output valid JSON (any structure) +- "json_schema": Validates output against a provided JSON Schema (strict structure) + +#### See + +https://platform.openai.com/docs/api-reference/chat/create#chat-create-response_format diff --git a/docs/reference/interfaces/SummarizationOptions.md b/docs/reference/interfaces/SummarizationOptions.md new file mode 100644 index 000000000..dcd67c898 --- /dev/null +++ b/docs/reference/interfaces/SummarizationOptions.md @@ -0,0 +1,58 @@ +--- +id: SummarizationOptions +title: SummarizationOptions +--- + +# Interface: SummarizationOptions + +Defined in: [types.ts:375](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L375) + +## Properties + +### focus? + +```ts +optional focus: string[]; +``` + +Defined in: [types.ts:380](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L380) + +*** + +### maxLength? + +```ts +optional maxLength: number; +``` + +Defined in: [types.ts:378](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L378) + +*** + +### model + +```ts +model: string; +``` + +Defined in: [types.ts:376](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L376) + +*** + +### style? 
+ +```ts +optional style: "bullet-points" | "paragraph" | "concise"; +``` + +Defined in: [types.ts:379](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L379) + +*** + +### text + +```ts +text: string; +``` + +Defined in: [types.ts:377](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L377) diff --git a/docs/reference/interfaces/SummarizationResult.md b/docs/reference/interfaces/SummarizationResult.md new file mode 100644 index 000000000..a926eb936 --- /dev/null +++ b/docs/reference/interfaces/SummarizationResult.md @@ -0,0 +1,66 @@ +--- +id: SummarizationResult +title: SummarizationResult +--- + +# Interface: SummarizationResult + +Defined in: [types.ts:383](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L383) + +## Properties + +### id + +```ts +id: string; +``` + +Defined in: [types.ts:384](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L384) + +*** + +### model + +```ts +model: string; +``` + +Defined in: [types.ts:385](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L385) + +*** + +### summary + +```ts +summary: string; +``` + +Defined in: [types.ts:386](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L386) + +*** + +### usage + +```ts +usage: object; +``` + +Defined in: [types.ts:387](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L387) + +#### completionTokens + +```ts +completionTokens: number; +``` + +#### promptTokens + +```ts +promptTokens: number; +``` + +#### totalTokens + +```ts +totalTokens: number; +``` diff --git a/docs/reference/interfaces/ThinkingStreamChunk.md b/docs/reference/interfaces/ThinkingStreamChunk.md new file mode 100644 index 000000000..91a5fd21c --- /dev/null +++ b/docs/reference/interfaces/ThinkingStreamChunk.md @@ -0,0 +1,88 @@ +--- +id: ThinkingStreamChunk +title: ThinkingStreamChunk +--- + +# Interface: ThinkingStreamChunk + 
+Defined in: [types.ts:341](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L341) + +## Extends + +- [`BaseStreamChunk`](../BaseStreamChunk.md) + +## Properties + +### content + +```ts +content: string; +``` + +Defined in: [types.ts:344](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L344) + +*** + +### delta? + +```ts +optional delta: string; +``` + +Defined in: [types.ts:343](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L343) + +*** + +### id + +```ts +id: string; +``` + +Defined in: [types.ts:274](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L274) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`id`](../BaseStreamChunk.md#id) + +*** + +### model + +```ts +model: string; +``` + +Defined in: [types.ts:275](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L275) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`model`](../BaseStreamChunk.md#model) + +*** + +### timestamp + +```ts +timestamp: number; +``` + +Defined in: [types.ts:276](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L276) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`timestamp`](../BaseStreamChunk.md#timestamp) + +*** + +### type + +```ts +type: "thinking"; +``` + +Defined in: [types.ts:342](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L342) + +#### Overrides + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`type`](../BaseStreamChunk.md#type) diff --git a/docs/reference/interfaces/Tool.md b/docs/reference/interfaces/Tool.md new file mode 100644 index 000000000..a0668f0fc --- /dev/null +++ b/docs/reference/interfaces/Tool.md @@ -0,0 +1,168 @@ +--- +id: Tool +title: Tool +--- + +# Interface: Tool + +Defined in: [types.ts:29](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L29) + +Tool/Function 
definition for function calling.
+
+Tools allow the model to interact with external systems, APIs, or perform computations.
+The model will decide when to call tools based on the user's request and the tool descriptions.
+
+## See
+
+ - https://platform.openai.com/docs/guides/function-calling
+ - https://docs.anthropic.com/claude/docs/tool-use
+
+## Properties
+
+### execute()?
+
+```ts
+optional execute: (args) => string | Promise<string>;
+```
+
+Defined in: [types.ts:99](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L99)
+
+Optional function to execute when the model calls this tool.
+
+If provided, the SDK will automatically execute the function with the model's arguments
+and feed the result back to the model. This enables autonomous tool use loops.
+
+Returns the result as a string (or Promise<string>) to send back to the model.
+
+#### Parameters
+
+##### args
+
+`any`
+
+The arguments parsed from the model's tool call (matches the parameters schema)
+
+#### Returns
+
+`string` \| `Promise`\<`string`\>
+
+Result string to send back to the model
+
+#### Example
+
+```ts
+execute: async (args) => {
+  const weather = await fetchWeather(args.location);
+  return JSON.stringify(weather);
+}
+```
+
+***
+
+### function
+
+```ts
+function: object;
+```
+
+Defined in: [types.ts:40](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L40)
+
+Function definition and metadata.
+
+#### description
+
+```ts
+description: string;
+```
+
+Clear description of what the function does.
+
+This is crucial - the model uses this to decide when to call the function.
+Be specific about what the function does, what parameters it needs, and what it returns.
+
+##### Example
+
+```ts
+"Get the current weather in a given location. Returns temperature, conditions, and forecast."
+```
+
+#### name
+
+```ts
+name: string;
+```
+
+Unique name of the function (used by the model to call it).
+
+Should be descriptive and follow naming conventions (e.g., snake_case or camelCase).
+Must be unique within the tools array.
+
+##### Example
+
+```ts
+"get_weather", "search_database", "sendEmail"
+```
+
+#### parameters
+
+```ts
+parameters: Record<string, any>;
+```
+
+JSON Schema describing the function's parameters.
+
+Defines the structure and types of arguments the function accepts.
+The model will generate arguments matching this schema.
+
+##### See
+
+https://json-schema.org/
+
+##### Example
+
+```ts
+{
+  type: "object",
+  properties: {
+    location: { type: "string", description: "City name or coordinates" },
+    unit: { type: "string", enum: ["celsius", "fahrenheit"] }
+  },
+  required: ["location"]
+}
+```
+
+***
+
+### metadata?
+
+```ts
+optional metadata: Record<string, any>;
+```
+
+Defined in: [types.ts:103](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L103)
+
+***
+
+### needsApproval?
+
+```ts
+optional needsApproval: boolean;
+```
+
+Defined in: [types.ts:101](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L101)
+
+If true, tool execution requires user approval before running. Works with both server and client tools.
+
+***
+
+### type
+
+```ts
+type: "function";
+```
+
+Defined in: [types.ts:35](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L35)
+
+Type of tool - currently only "function" is supported.
+
+Future versions may support additional tool types.
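Taken together, the fields documented above compose into a single tool object. Below is a minimal sketch of such an object; the `get_weather` name and its weather lookup are illustrative stand-ins, not part of the package:

```typescript
// Hypothetical tool matching the documented Tool interface shape:
// type / function (name, description, parameters) / execute / needsApproval.
const getWeatherTool = {
  type: "function" as const,
  function: {
    name: "get_weather",
    description:
      "Get the current weather in a given location. Returns temperature and conditions.",
    parameters: {
      type: "object",
      properties: {
        location: { type: "string", description: "City name or coordinates" },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] },
      },
      required: ["location"],
    },
  },
  // Called automatically with the model's parsed arguments; the returned
  // string is fed back to the model. The lookup here is a stand-in.
  execute: async (args: { location: string; unit?: string }) => {
    return JSON.stringify({ location: args.location, temp: 72, condition: "sunny" });
  },
  needsApproval: false,
};

getWeatherTool
  .execute({ location: "San Francisco" })
  .then(console.log); // → {"location":"San Francisco","temp":72,"condition":"sunny"}
```

Because `execute` is optional, omitting it turns this into a client-side tool: the SDK will emit a `tool-input-available` chunk instead of running the function on the server.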
diff --git a/docs/reference/interfaces/ToolCall.md b/docs/reference/interfaces/ToolCall.md new file mode 100644 index 000000000..4843850df --- /dev/null +++ b/docs/reference/interfaces/ToolCall.md @@ -0,0 +1,50 @@ +--- +id: ToolCall +title: ToolCall +--- + +# Interface: ToolCall + +Defined in: [types.ts:3](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L3) + +## Properties + +### function + +```ts +function: object; +``` + +Defined in: [types.ts:6](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L6) + +#### arguments + +```ts +arguments: string; +``` + +#### name + +```ts +name: string; +``` + +*** + +### id + +```ts +id: string; +``` + +Defined in: [types.ts:4](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L4) + +*** + +### type + +```ts +type: "function"; +``` + +Defined in: [types.ts:5](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L5) diff --git a/docs/reference/interfaces/ToolCallStreamChunk.md b/docs/reference/interfaces/ToolCallStreamChunk.md new file mode 100644 index 000000000..92507c372 --- /dev/null +++ b/docs/reference/interfaces/ToolCallStreamChunk.md @@ -0,0 +1,118 @@ +--- +id: ToolCallStreamChunk +title: ToolCallStreamChunk +--- + +# Interface: ToolCallStreamChunk + +Defined in: [types.ts:286](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L286) + +## Extends + +- [`BaseStreamChunk`](../BaseStreamChunk.md) + +## Properties + +### id + +```ts +id: string; +``` + +Defined in: [types.ts:274](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L274) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`id`](../BaseStreamChunk.md#id) + +*** + +### index + +```ts +index: number; +``` + +Defined in: [types.ts:296](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L296) + +*** + +### model + +```ts +model: string; +``` + +Defined in: 
[types.ts:275](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L275) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`model`](../BaseStreamChunk.md#model) + +*** + +### timestamp + +```ts +timestamp: number; +``` + +Defined in: [types.ts:276](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L276) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`timestamp`](../BaseStreamChunk.md#timestamp) + +*** + +### toolCall + +```ts +toolCall: object; +``` + +Defined in: [types.ts:288](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L288) + +#### function + +```ts +function: object; +``` + +##### function.arguments + +```ts +arguments: string; +``` + +##### function.name + +```ts +name: string; +``` + +#### id + +```ts +id: string; +``` + +#### type + +```ts +type: "function"; +``` + +*** + +### type + +```ts +type: "tool_call"; +``` + +Defined in: [types.ts:287](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L287) + +#### Overrides + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`type`](../BaseStreamChunk.md#type) diff --git a/docs/reference/interfaces/ToolConfig.md b/docs/reference/interfaces/ToolConfig.md new file mode 100644 index 000000000..0cecdd3d6 --- /dev/null +++ b/docs/reference/interfaces/ToolConfig.md @@ -0,0 +1,14 @@ +--- +id: ToolConfig +title: ToolConfig +--- + +# Interface: ToolConfig + +Defined in: [types.ts:106](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L106) + +## Indexable + +```ts +[key: string]: Tool +``` diff --git a/docs/reference/interfaces/ToolInputAvailableStreamChunk.md b/docs/reference/interfaces/ToolInputAvailableStreamChunk.md new file mode 100644 index 000000000..c3f294788 --- /dev/null +++ b/docs/reference/interfaces/ToolInputAvailableStreamChunk.md @@ -0,0 +1,98 @@ +--- +id: ToolInputAvailableStreamChunk +title: ToolInputAvailableStreamChunk +--- + +# 
Interface: ToolInputAvailableStreamChunk + +Defined in: [types.ts:334](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L334) + +## Extends + +- [`BaseStreamChunk`](../BaseStreamChunk.md) + +## Properties + +### id + +```ts +id: string; +``` + +Defined in: [types.ts:274](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L274) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`id`](../BaseStreamChunk.md#id) + +*** + +### input + +```ts +input: any; +``` + +Defined in: [types.ts:338](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L338) + +*** + +### model + +```ts +model: string; +``` + +Defined in: [types.ts:275](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L275) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`model`](../BaseStreamChunk.md#model) + +*** + +### timestamp + +```ts +timestamp: number; +``` + +Defined in: [types.ts:276](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L276) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`timestamp`](../BaseStreamChunk.md#timestamp) + +*** + +### toolCallId + +```ts +toolCallId: string; +``` + +Defined in: [types.ts:336](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L336) + +*** + +### toolName + +```ts +toolName: string; +``` + +Defined in: [types.ts:337](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L337) + +*** + +### type + +```ts +type: "tool-input-available"; +``` + +Defined in: [types.ts:335](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L335) + +#### Overrides + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`type`](../BaseStreamChunk.md#type) diff --git a/docs/reference/interfaces/ToolResultStreamChunk.md b/docs/reference/interfaces/ToolResultStreamChunk.md new file mode 100644 index 000000000..58085881e --- 
/dev/null +++ b/docs/reference/interfaces/ToolResultStreamChunk.md @@ -0,0 +1,88 @@ +--- +id: ToolResultStreamChunk +title: ToolResultStreamChunk +--- + +# Interface: ToolResultStreamChunk + +Defined in: [types.ts:299](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L299) + +## Extends + +- [`BaseStreamChunk`](../BaseStreamChunk.md) + +## Properties + +### content + +```ts +content: string; +``` + +Defined in: [types.ts:302](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L302) + +*** + +### id + +```ts +id: string; +``` + +Defined in: [types.ts:274](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L274) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`id`](../BaseStreamChunk.md#id) + +*** + +### model + +```ts +model: string; +``` + +Defined in: [types.ts:275](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L275) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`model`](../BaseStreamChunk.md#model) + +*** + +### timestamp + +```ts +timestamp: number; +``` + +Defined in: [types.ts:276](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L276) + +#### Inherited from + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`timestamp`](../BaseStreamChunk.md#timestamp) + +*** + +### toolCallId + +```ts +toolCallId: string; +``` + +Defined in: [types.ts:301](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L301) + +*** + +### type + +```ts +type: "tool_result"; +``` + +Defined in: [types.ts:300](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L300) + +#### Overrides + +[`BaseStreamChunk`](../BaseStreamChunk.md).[`type`](../BaseStreamChunk.md#type) diff --git a/docs/reference/protocol.md b/docs/reference/protocol.md deleted file mode 100644 index 351d3194f..000000000 --- a/docs/reference/protocol.md +++ /dev/null @@ -1,415 +0,0 @@ -# Stream 
Protocol - -This document describes the structure of chunks sent from `@tanstack/ai` to `@tanstack/ai-client`, regardless of the transport mechanism (SSE, HTTP stream, direct stream, etc.). - -## Overview - -The protocol is based on a stream of JSON objects, where each object represents a chunk of data. All chunks share a common base structure and are distinguished by their `type` field. - -## Base Structure - -All chunks extend a base structure with the following required fields: - -```typescript -interface BaseStreamChunk { - type: StreamChunkType; - id: string; // Unique identifier for this chunk - model: string; // Model name that generated this chunk - timestamp: number; // Unix timestamp in milliseconds -} -``` - -## Chunk Types - -### 1. Content Chunk - -Represents incremental text content from the AI model. - -```typescript -interface ContentStreamChunk extends BaseStreamChunk { - type: "content"; - delta?: string; // The incremental content token (preferred) - content: string; // Full accumulated content so far - role?: "assistant"; -} -``` - -**Example:** - -```json -{ - "type": "content", - "id": "chunk_abc123", - "model": "gpt-4", - "timestamp": 1699123456789, - "delta": "Hello", - "content": "Hello", - "role": "assistant" -} -``` - -**Notes:** - -- `delta` is preferred over `content` for incremental updates -- `content` represents the full accumulated text up to this point -- The client should prefer `delta` when both are present - -### 2. Tool Call Chunk - -Represents incremental tool call arguments being streamed. 
- -```typescript -interface ToolCallStreamChunk extends BaseStreamChunk { - type: "tool_call"; - toolCall: { - id: string; // Unique identifier for this tool call - type: "function"; - function: { - name: string; // Name of the function/tool - arguments: string; // Incremental JSON arguments (may be incomplete) - }; - }; - index: number; // Zero-based index of this tool call in the current response -} -``` - -**Example:** - -```json -{ - "type": "tool_call", - "id": "chunk_def456", - "model": "gpt-4", - "timestamp": 1699123456790, - "toolCall": { - "id": "call_xyz789", - "type": "function", - "function": { - "name": "get_weather", - "arguments": "{\"location\": \"San" - } - }, - "index": 0 -} -``` - -**Notes:** - -- `arguments` is a JSON string that may be incomplete (partial JSON) -- Multiple chunks may be sent for the same tool call as arguments are streamed -- The client should accumulate and parse the arguments incrementally - -### 3. Tool Result Chunk - -Represents the result of a tool execution. - -```typescript -interface ToolResultStreamChunk extends BaseStreamChunk { - type: "tool_result"; - toolCallId: string; // ID of the tool call this result belongs to - content: string; // Result content (typically JSON stringified) -} -``` - -**Example:** - -```json -{ - "type": "tool_result", - "id": "chunk_ghi012", - "model": "gpt-4", - "timestamp": 1699123456791, - "toolCallId": "call_xyz789", - "content": "{\"temperature\": 72, \"condition\": \"sunny\"}" -} -``` - -### 4. Done Chunk - -Indicates the stream has completed. 
- -```typescript -interface DoneStreamChunk extends BaseStreamChunk { - type: "done"; - finishReason: "stop" | "length" | "content_filter" | "tool_calls" | null; - usage?: { - promptTokens: number; - completionTokens: number; - totalTokens: number; - }; -} -``` - -**Example:** - -```json -{ - "type": "done", - "id": "chunk_jkl345", - "model": "gpt-4", - "timestamp": 1699123456792, - "finishReason": "stop", - "usage": { - "promptTokens": 150, - "completionTokens": 75, - "totalTokens": 225 - } -} -``` - -**Notes:** - -- `finishReason: "tool_calls"` indicates the model wants to make tool calls -- `finishReason: "stop"` indicates normal completion -- `finishReason: "length"` indicates the response was truncated due to token limits -- `usage` is optional and may not be present in all cases - -### 5. Error Chunk - -Indicates an error occurred during streaming. - -```typescript -interface ErrorStreamChunk extends BaseStreamChunk { - type: "error"; - error: { - message: string; - code?: string; - }; -} -``` - -**Example:** - -```json -{ - "type": "error", - "id": "chunk_mno678", - "model": "gpt-4", - "timestamp": 1699123456793, - "error": { - "message": "Rate limit exceeded", - "code": "rate_limit_exceeded" - } -} -``` - -**Notes:** - -- When an error chunk is received, the stream should be considered terminated -- The client should handle the error and stop processing further chunks - -### 6. Approval Requested Chunk - -Indicates a tool call requires user approval before execution. 
- -```typescript -interface ApprovalRequestedStreamChunk extends BaseStreamChunk { - type: "approval-requested"; - toolCallId: string; // ID of the tool call requiring approval - toolName: string; // Name of the tool - input: any; // Parsed input arguments for the tool - approval: { - id: string; // Unique approval request ID - needsApproval: true; - }; -} -``` - -**Example:** - -```json -{ - "type": "approval-requested", - "id": "chunk_pqr901", - "model": "gpt-4", - "timestamp": 1699123456794, - "toolCallId": "call_xyz789", - "toolName": "send_email", - "input": { - "to": "user@example.com", - "subject": "Important Update", - "body": "Your request has been processed." - }, - "approval": { - "id": "approval_abc123", - "needsApproval": true - } -} -``` - -**Notes:** - -- This chunk is emitted when a tool has `needsApproval: true` in its definition -- The client should pause execution and wait for user approval -- The approval ID is used to respond to the approval request - -### 7. Tool Input Available Chunk - -Indicates a tool call's input is available for client-side execution. - -```typescript -interface ToolInputAvailableStreamChunk extends BaseStreamChunk { - type: "tool-input-available"; - toolCallId: string; // ID of the tool call - toolName: string; // Name of the tool - input: any; // Parsed input arguments for the tool -} -``` - -**Example:** - -```json -{ - "type": "tool-input-available", - "id": "chunk_stu234", - "model": "gpt-4", - "timestamp": 1699123456795, - "toolCallId": "call_xyz789", - "toolName": "update_ui", - "input": { - "component": "status", - "value": "completed" - } -} -``` - -**Notes:** - -- This chunk is emitted for client-side tools (tools without server-side execution) -- The client should execute the tool locally and return the result -- This is separate from approval-requested - a tool can be client-side without requiring approval - -### 8. 
Thinking Chunk - -Represents "thinking" or reasoning content from models that support it (e.g., Claude's thinking mode). - -```typescript -interface ThinkingStreamChunk extends BaseStreamChunk { - type: "thinking"; - delta?: string; // The incremental thinking token (preferred) - content: string; // Full accumulated thinking content so far -} -``` - -**Example:** - -```json -{ - "type": "thinking", - "id": "chunk_vwx567", - "model": "claude-3-opus", - "timestamp": 1699123456796, - "delta": "Let me", - "content": "Let me" -} -``` - -**Notes:** - -- Similar to content chunks, `delta` is preferred over `content` -- This represents internal reasoning that may not be shown to the user -- Not all models support thinking chunks - -## Complete Type Definition - -```typescript -type StreamChunkType = - | "content" - | "tool_call" - | "tool_result" - | "done" - | "error" - | "approval-requested" - | "tool-input-available" - | "thinking"; - -type StreamChunk = - | ContentStreamChunk - | ToolCallStreamChunk - | ToolResultStreamChunk - | DoneStreamChunk - | ErrorStreamChunk - | ApprovalRequestedStreamChunk - | ToolInputAvailableStreamChunk - | ThinkingStreamChunk; -``` - -## Transport Mechanisms - -The protocol is transport-agnostic. Chunks can be sent via: - -1. **Server-Sent Events (SSE)**: Each chunk is sent as `data: \n\n` -2. **HTTP Stream**: Newline-delimited JSON (NDJSON) -3. 
**Direct Stream**: AsyncIterable of chunk objects - -### SSE Format - -``` -data: {"type":"content","id":"chunk_1","model":"gpt-4","timestamp":1699123456789,"delta":"Hello","content":"Hello"} - -data: {"type":"content","id":"chunk_2","model":"gpt-4","timestamp":1699123456790,"delta":" world","content":"Hello world"} - -data: [DONE] -``` - -### NDJSON Format - -``` -{"type":"content","id":"chunk_1","model":"gpt-4","timestamp":1699123456789,"delta":"Hello","content":"Hello"} -{"type":"content","id":"chunk_2","model":"gpt-4","timestamp":1699123456790,"delta":" world","content":"Hello world"} -``` - -## Chunk Flow - -### Typical Text Response - -1. Multiple `content` chunks (with `delta` and `content`) -2. One `done` chunk (with `finishReason: "stop"`) - -### Tool Call Flow - -1. Multiple `tool_call` chunks (incremental arguments) -2. One `done` chunk (with `finishReason: "tool_calls"`) -3. Either: - - `approval-requested` chunk (if tool needs approval) - - `tool-input-available` chunk (if client-side tool) - - `tool_result` chunk (if server executed) -4. Continue with more content or another tool call cycle - -### Error Flow - -1. Any number of chunks -2. One `error` chunk -3. Stream terminates - -## Client Processing - -The `@tanstack/ai-client` package processes these chunks through: - -1. **Connection Adapter**: Receives chunks from transport -2. **Stream Parser**: Converts adapter format to processor format (if needed) -3. **Stream Processor**: Accumulates state, tracks tool calls, emits events -4. **Chat Client**: Manages message state and UI updates - -The processor handles: - -- Accumulating text content from `delta` or `content` fields -- Tracking tool call state (awaiting-input, input-streaming, input-complete) -- Parsing partial JSON arguments incrementally -- Emitting lifecycle events for tool calls -- Managing parallel tool calls - -## Best Practices - -1. **Always prefer `delta` over `content`** when both are present -2. 
**Handle partial JSON** in tool call arguments gracefully -3. **Track tool call state** using the `id` field, not the `index` -4. **Handle errors gracefully** - an error chunk terminates the stream -5. **Respect approval flow** - wait for user approval when `approval-requested` is received -6. **Use timestamps** for debugging and ordering chunks if needed - -## See Also - -- [Chat Client API](/docs/api/ai-client.md) - How to use the client -- [Streaming Guide](/docs/guides/streaming.md) - Streaming patterns and examples -- [Tool Registry](/docs/guides/tool-registry.md) - Tool execution and approval diff --git a/docs/reference/type-aliases/AgentLoopStrategy.md b/docs/reference/type-aliases/AgentLoopStrategy.md new file mode 100644 index 000000000..371cb3c32 --- /dev/null +++ b/docs/reference/type-aliases/AgentLoopStrategy.md @@ -0,0 +1,35 @@ +--- +id: AgentLoopStrategy +title: AgentLoopStrategy +--- + +# Type Alias: AgentLoopStrategy() + +```ts +type AgentLoopStrategy = (state) => boolean; +``` + +Defined in: [types.ts:226](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L226) + +Strategy function that determines whether the agent loop should continue + +## Parameters + +### state + +[`AgentLoopState`](../../interfaces/AgentLoopState.md) + +Current state of the agent loop + +## Returns + +`boolean` + +true to continue looping, false to stop + +## Example + +```typescript +// Continue for up to 5 iterations +const strategy: AgentLoopStrategy = ({ iterationCount }) => iterationCount < 5; +``` diff --git a/docs/reference/type-aliases/ChatStreamOptionsUnion.md b/docs/reference/type-aliases/ChatStreamOptionsUnion.md new file mode 100644 index 000000000..3599785a3 --- /dev/null +++ b/docs/reference/type-aliases/ChatStreamOptionsUnion.md @@ -0,0 +1,18 @@ +--- +id: ChatStreamOptionsUnion +title: ChatStreamOptionsUnion +--- + +# Type Alias: ChatStreamOptionsUnion\ + +```ts +type ChatStreamOptionsUnion = TAdapter extends AIAdapter ? 
Models[number] extends infer TModel ? TModel extends string ? Omit & object : never : never : never; +``` + +Defined in: [types.ts:470](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L470) + +## Type Parameters + +### TAdapter + +`TAdapter` *extends* [`AIAdapter`](../../interfaces/AIAdapter.md)\<`any`, `any`, `any`, `any`, `any`\> diff --git a/docs/reference/type-aliases/ExtractModelsFromAdapter.md b/docs/reference/type-aliases/ExtractModelsFromAdapter.md new file mode 100644 index 000000000..1e5ab2524 --- /dev/null +++ b/docs/reference/type-aliases/ExtractModelsFromAdapter.md @@ -0,0 +1,18 @@ +--- +id: ExtractModelsFromAdapter +title: ExtractModelsFromAdapter +--- + +# Type Alias: ExtractModelsFromAdapter\ + +```ts +type ExtractModelsFromAdapter = T extends AIAdapter ? M[number] : never; +``` + +Defined in: [types.ts:494](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L494) + +## Type Parameters + +### T + +`T` diff --git a/docs/reference/type-aliases/StreamChunk.md b/docs/reference/type-aliases/StreamChunk.md new file mode 100644 index 000000000..758a39737 --- /dev/null +++ b/docs/reference/type-aliases/StreamChunk.md @@ -0,0 +1,22 @@ +--- +id: StreamChunk +title: StreamChunk +--- + +# Type Alias: StreamChunk + +```ts +type StreamChunk = + | ContentStreamChunk + | ToolCallStreamChunk + | ToolResultStreamChunk + | DoneStreamChunk + | ErrorStreamChunk + | ApprovalRequestedStreamChunk + | ToolInputAvailableStreamChunk + | ThinkingStreamChunk; +``` + +Defined in: [types.ts:350](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L350) + +Chunk returned by the sdk during streaming chat completions. 
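Since `StreamChunk` is a union discriminated by its `type` field, consumers can narrow chunks with a plain `switch`. A minimal sketch, using simplified stand-in shapes rather than the full documented interfaces:

```typescript
// Simplified stand-ins for ContentStreamChunk, ToolCallStreamChunk, and
// DoneStreamChunk — only the fields needed to show the narrowing pattern.
type Chunk =
  | { type: "content"; id: string; delta?: string; content: string }
  | {
      type: "tool_call";
      id: string;
      toolCall: { id: string; function: { name: string; arguments: string } };
    }
  | { type: "done"; id: string; finishReason: string | null };

function render(chunk: Chunk): string {
  switch (chunk.type) {
    case "content":
      // Prefer the incremental delta when present; fall back to the
      // accumulated content.
      return chunk.delta ?? chunk.content;
    case "tool_call":
      return `calling ${chunk.toolCall.function.name}`;
    case "done":
      return `finished: ${chunk.finishReason}`;
  }
}
```

TypeScript narrows `chunk` inside each `case`, so fields like `toolCall` are only accessible on the branch where they exist.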
diff --git a/docs/reference/type-aliases/StreamChunkType.md b/docs/reference/type-aliases/StreamChunkType.md new file mode 100644 index 000000000..1fface013 --- /dev/null +++ b/docs/reference/type-aliases/StreamChunkType.md @@ -0,0 +1,20 @@ +--- +id: StreamChunkType +title: StreamChunkType +--- + +# Type Alias: StreamChunkType + +```ts +type StreamChunkType = + | "content" + | "tool_call" + | "tool_result" + | "done" + | "error" + | "approval-requested" + | "tool-input-available" + | "thinking"; +``` + +Defined in: [types.ts:262](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/types.ts#L262) diff --git a/docs/reference/variables/aiEventClient.md b/docs/reference/variables/aiEventClient.md new file mode 100644 index 000000000..80b4d1405 --- /dev/null +++ b/docs/reference/variables/aiEventClient.md @@ -0,0 +1,12 @@ +--- +id: aiEventClient +title: aiEventClient +--- + +# Variable: aiEventClient + +```ts +const aiEventClient: AiEventClient; +``` + +Defined in: [event-client.ts:357](https://github.com/TanStack/ai/blob/main/packages/typescript/ai/src/event-client.ts#L357) diff --git a/eslint.config.js b/eslint.config.js new file mode 100644 index 000000000..95261a887 --- /dev/null +++ b/eslint.config.js @@ -0,0 +1,25 @@ +// @ts-check + +// @ts-ignore Needed due to moduleResolution Node vs Bundler +import { tanstackConfig } from '@tanstack/config/eslint' +import unusedImports from 'eslint-plugin-unused-imports' + +/** @type {import('eslint').Linter.FlatConfig[]} */ +const config = [ + ...tanstackConfig, + { + name: 'tanstack/temp', + plugins: { + 'unused-imports': unusedImports, + }, + rules: { + 'no-case-declarations': 'off', + 'no-shadow': 'off', + 'unused-imports/no-unused-imports': 'warn', + 'pnpm/enforce-catalog': 'off', + 'pnpm/json-enforce-catalog': 'off', + }, + }, +] + +export default config diff --git a/examples/README.md b/examples/README.md index 749f38756..e04570d09 100644 --- a/examples/README.md +++ b/examples/README.md @@ -1,437 +1,465 
@@ -# TanStack AI Examples - -This directory contains comprehensive examples demonstrating TanStack AI across multiple languages and frameworks. - -## Quick Start - -Choose an example based on your use case: - -- **Want a full-stack TypeScript app?** → [TanStack Chat (ts-chat)](#tanstack-chat-ts-chat) -- **Want a simple CLI tool?** → [CLI Example](#cli-example) -- **Need a vanilla JS frontend?** → [Vanilla Chat](#vanilla-chat) -- **Building a Python backend?** → [Python FastAPI Server](#python-fastapi-server) -- **Building a PHP backend?** → [PHP Slim Framework Server](#php-slim-framework-server) - -## TypeScript Examples - -### TanStack Chat (ts-chat) - -A full-featured chat application built with the TanStack ecosystem. - -**Tech Stack:** -- TanStack Start (full-stack React framework) -- TanStack Router (type-safe routing) -- TanStack Store (state management) -- `@tanstack/ai` (AI backend) -- `@tanstack/ai-react` (React hooks) -- `@tanstack/ai-client` (headless client) - -**Features:** -- āœ… Real-time streaming with OpenAI GPT-4o -- āœ… Automatic tool execution loop -- āœ… Rich markdown rendering -- āœ… Conversation management -- āœ… Modern UI with Tailwind CSS - -**Getting Started:** -```bash -cd examples/ts-chat -pnpm install -cp env.example .env -# Edit .env and add your OPENAI_API_KEY -pnpm start -``` - -šŸ“– [Full Documentation](ts-chat/README.md) - ---- - -### CLI Example - -An interactive command-line interface for AI interactions. 
- -**Features:** -- āœ… Multi-provider support (OpenAI, Anthropic, Ollama, Gemini) -- āœ… Interactive chat with streaming -- āœ… Automatic tool/function calling -- āœ… Smart API key management -- āœ… Debug mode for development - -**Getting Started:** -```bash -cd examples/cli -pnpm install -pnpm dev chat --provider openai -``` - -**Available Commands:** -- `chat` - Interactive chat with streaming -- `generate` - One-shot text generation -- `summarize` - Text summarization -- `embed` - Generate embeddings - -šŸ“– [Full Documentation](cli/README.md) - ---- - -### Vanilla Chat - -A framework-free chat application using pure JavaScript and `@tanstack/ai-client`. - -**Tech Stack:** -- Vanilla JavaScript (no frameworks!) -- `@tanstack/ai-client` (headless client) -- Vite (dev server) -- Connects to Python FastAPI backend - -**Features:** -- āœ… Pure vanilla JavaScript -- āœ… Real-time streaming messages -- āœ… Beautiful, responsive UI -- āœ… No framework dependencies - -**Getting Started:** -```bash -# Start the Python backend first -cd examples/python-fastapi -python anthropic-server.py - -# Then start the frontend -cd examples/vanilla-chat -pnpm install -pnpm dev -``` - -Open `http://localhost:3000` - -šŸ“– [Full Documentation](vanilla-chat/README.md) - ---- - -## Python Examples - -### Python FastAPI Server - -A FastAPI server that streams AI responses in Server-Sent Events (SSE) format, compatible with TanStack AI clients. 
- -**Features:** -- āœ… FastAPI with SSE streaming -- āœ… Converts Anthropic/OpenAI events to `StreamChunk` format -- āœ… Compatible with `@tanstack/ai-client` -- āœ… Tool call support -- āœ… Type-safe with Pydantic - -**Getting Started:** -```bash -cd examples/python-fastapi - -# Create virtual environment -python3 -m venv venv -source venv/bin/activate # On Windows: venv\Scripts\activate - -# Install dependencies -pip install -r requirements.txt - -# Set up environment -cp env.example .env -# Edit .env and add your ANTHROPIC_API_KEY or OPENAI_API_KEY - -# Run the server -python anthropic-server.py # or openai-server.py -``` - -**API Endpoints:** -- `POST /chat` - Stream chat responses in SSE format -- `GET /health` - Health check - -**Usage with TypeScript Client:** -```typescript -import { ChatClient, fetchServerSentEvents } from "@tanstack/ai-client"; - -const client = new ChatClient({ - connection: fetchServerSentEvents("http://localhost:8000/chat"), -}); - -await client.sendMessage("Hello!"); -``` - -šŸ“– [Full Documentation](python-fastapi/README.md) - ---- - -## PHP Examples - -### PHP Slim Framework Server - -A PHP Slim Framework server that streams AI responses in SSE format, with support for both Anthropic and OpenAI. 
- -**Features:** -- āœ… Slim Framework with SSE streaming -- āœ… Converts Anthropic/OpenAI events to `StreamChunk` format -- āœ… Compatible with `@tanstack/ai-client` -- āœ… Tool call support -- āœ… PHP 8.1+ with type safety - -**Getting Started:** -```bash -cd examples/php-slim - -# Install dependencies -composer install - -# Set up environment -cp env.example .env -# Edit .env and add your ANTHROPIC_API_KEY and/or OPENAI_API_KEY - -# Run the server -composer start-anthropic # Runs on port 8000 -# or -composer start-openai # Runs on port 8001 -``` - -**API Endpoints:** -- `POST /chat` - Stream chat responses in SSE format -- `GET /health` - Health check - -**Usage with TypeScript Client:** -```typescript -import { ChatClient, fetchServerSentEvents } from "@tanstack/ai-client"; - -const client = new ChatClient({ - connection: fetchServerSentEvents("http://localhost:8000/chat"), -}); - -await client.sendMessage("Hello!"); -``` - -šŸ“– [Full Documentation](php-slim/README.md) - ---- - -## Architecture Patterns - -### Full-Stack TypeScript - -Use TanStack AI end-to-end in TypeScript: - -``` -Frontend (React) - ↓ (useChat hook) -@tanstack/ai-react - ↓ (ChatClient) -@tanstack/ai-client - ↓ (SSE/HTTP) -Backend (TanStack Start API Route) - ↓ (chat() function) -@tanstack/ai - ↓ (adapter) -AI Provider (OpenAI/Anthropic/etc.) -``` - -**Example:** [TanStack Chat (ts-chat)](ts-chat/README.md) - -### Multi-Language Backend - -Use Python or PHP for the backend, TypeScript for the frontend: - -``` -Frontend (Vanilla JS/React/Vue/etc.) - ↓ (ChatClient) -@tanstack/ai-client - ↓ (SSE/HTTP) -Backend (Python FastAPI or PHP Slim) - ↓ (tanstack-ai or tanstack/ai) -Stream Conversion & Message Formatting - ↓ (provider SDK) -AI Provider (OpenAI/Anthropic/etc.) 
-``` - -**Examples:** -- [Python FastAPI](python-fastapi/README.md) + [Vanilla Chat](vanilla-chat/README.md) -- [PHP Slim](php-slim/README.md) + any frontend with `@tanstack/ai-client` - -### CLI Tool - -Use TanStack AI in command-line applications: - -``` -CLI - ↓ (chat() function) -@tanstack/ai - ↓ (adapter) -AI Provider (OpenAI/Anthropic/Ollama/Gemini) -``` - -**Example:** [CLI Example](cli/README.md) - ---- - -## Common Patterns - -### Server-Sent Events (SSE) Streaming - -All examples use SSE for real-time streaming: - -**Backend (TypeScript):** -```typescript -import { chat, toStreamResponse } from "@tanstack/ai"; -import { openai } from "@tanstack/ai-openai"; - -const stream = chat({ - adapter: openai(), - model: "gpt-4o", - messages, -}); - -return toStreamResponse(stream); -``` - -**Backend (Python):** -```python -from tanstack_ai import StreamChunkConverter, format_sse_chunk - -async for event in anthropic_stream: - chunks = await converter.convert_event(event) - for chunk in chunks: - yield format_sse_chunk(chunk) -``` - -**Backend (PHP):** -```php -use TanStack\AI\StreamChunkConverter; -use TanStack\AI\SSEFormatter; - -foreach ($anthropicStream as $event) { - $chunks = $converter->convertEvent($event); - foreach ($chunks as $chunk) { - echo SSEFormatter::formatChunk($chunk); - } -} -``` - -**Frontend:** -```typescript -import { ChatClient, fetchServerSentEvents } from "@tanstack/ai-client"; - -const client = new ChatClient({ - connection: fetchServerSentEvents("/api/chat"), -}); -``` - -### Automatic Tool Execution - -The TypeScript backend (`@tanstack/ai`) automatically handles tool execution: - -```typescript -import { chat, tool } from "@tanstack/ai"; - -const weatherTool = tool({ - function: { - name: "getWeather", - description: "Get weather for a location", - parameters: { /* ... 
*/ }, - }, - execute: async (args) => { - // This is called automatically by the SDK - return JSON.stringify({ temp: 72, condition: "sunny" }); - }, -}); - -const stream = chat({ - adapter: openai(), - model: "gpt-4o", - messages, - tools: [weatherTool], // SDK executes these automatically -}); -``` - -Clients receive: -- `content` chunks - text from the model -- `tool_call` chunks - when the model calls a tool -- `tool_result` chunks - results from tool execution -- `done` chunk - conversation complete - ---- - -## Development Tips - -### Running Multiple Examples - -You can run backend and frontend examples together: - -```bash -# Terminal 1: Start Python backend -cd examples/python-fastapi -python anthropic-server.py - -# Terminal 2: Start vanilla frontend -cd examples/vanilla-chat -pnpm dev - -# Terminal 3: Start ts-chat (full-stack) -cd examples/ts-chat -pnpm start -``` - -### Environment Variables - -Each example has an `env.example` file. Copy it to `.env` and add your API keys: - -```bash -# TypeScript examples -OPENAI_API_KEY=sk-... -ANTHROPIC_API_KEY=sk-ant-... - -# Python examples -ANTHROPIC_API_KEY=sk-ant-... -OPENAI_API_KEY=sk-... - -# PHP examples -ANTHROPIC_API_KEY=sk-ant-... -OPENAI_API_KEY=sk-... -``` - -### Building for Production - -**TypeScript:** -```bash -pnpm build -``` - -**Python:** -```bash -# Use a production ASGI server -uvicorn anthropic-server:app --host 0.0.0.0 --port 8000 -``` - -**PHP:** -```bash -# Use a production web server (Apache, Nginx, etc.) -# See php-slim/README.md for deployment details -``` - ---- - -## Contributing - -When adding new examples: - -1. **Create a README.md** with setup instructions -2. **Add an env.example** file with required environment variables -3. **Document the tech stack** and key features -4. **Include usage examples** with code snippets -5. 
**Update this README** to list your example - ---- - -## Learn More - -- šŸ“– [Main README](../README.md) - Project overview -- šŸ“– [Documentation](../docs/) - Comprehensive guides -- šŸ“– [TypeScript Packages](../packages/typescript/) - Core libraries -- šŸ“– [Python Package](../packages/python/tanstack-ai/) - Python utilities -- šŸ“– [PHP Package](../packages/php/tanstack-ai/) - PHP utilities - ---- - -Built with ā¤ļø by the TanStack community +# TanStack AI Examples + +This directory contains comprehensive examples demonstrating TanStack AI across multiple languages and frameworks. + +## Quick Start + +Choose an example based on your use case: + +- **Want a full-stack TypeScript app?** → [TanStack Chat (ts-chat)](#tanstack-chat-ts-chat) +- **Want a simple CLI tool?** → [CLI Example](#cli-example) +- **Need a vanilla JS frontend?** → [Vanilla Chat](#vanilla-chat) +- **Building a Python backend?** → [Python FastAPI Server](#python-fastapi-server) +- **Building a PHP backend?** → [PHP Slim Framework Server](#php-slim-framework-server) + +## TypeScript Examples + +### TanStack Chat (ts-chat) + +A full-featured chat application built with the TanStack ecosystem. + +**Tech Stack:** + +- TanStack Start (full-stack React framework) +- TanStack Router (type-safe routing) +- TanStack Store (state management) +- `@tanstack/ai` (AI backend) +- `@tanstack/ai-react` (React hooks) +- `@tanstack/ai-client` (headless client) + +**Features:** + +- āœ… Real-time streaming with OpenAI GPT-4o +- āœ… Automatic tool execution loop +- āœ… Rich markdown rendering +- āœ… Conversation management +- āœ… Modern UI with Tailwind CSS + +**Getting Started:** + +```bash +cd examples/ts-chat +pnpm install +cp env.example .env +# Edit .env and add your OPENAI_API_KEY +pnpm start +``` + +šŸ“– [Full Documentation](ts-chat/README.md) + +--- + +### CLI Example + +An interactive command-line interface for AI interactions. 
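The CLI's streaming mode boils down to printing `content` deltas to stdout as they arrive while accumulating the full reply. The sketch below is illustrative only — it is not the example's actual source, and the `{ type, delta }` chunk shape is an assumption for this sketch, not the exact library type:

```typescript
// Hypothetical sketch of token-by-token CLI output. Assumes each streamed
// chunk has a `type` and, for text, a `delta` string — illustrative only.
async function printStream(
  chunks: AsyncIterable<{ type: string; delta?: string }>,
): Promise<string> {
  let reply = ''
  for await (const chunk of chunks) {
    if (chunk.type === 'content' && chunk.delta) {
      process.stdout.write(chunk.delta) // print tokens as they arrive
      reply += chunk.delta
    }
  }
  process.stdout.write('\n')
  return reply
}
```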
+ +**Features:** + +- āœ… Multi-provider support (OpenAI, Anthropic, Ollama, Gemini) +- āœ… Interactive chat with streaming +- āœ… Automatic tool/function calling +- āœ… Smart API key management +- āœ… Debug mode for development + +**Getting Started:** + +```bash +cd examples/cli +pnpm install +pnpm dev chat --provider openai +``` + +**Available Commands:** + +- `chat` - Interactive chat with streaming +- `generate` - One-shot text generation +- `summarize` - Text summarization +- `embed` - Generate embeddings + +šŸ“– [Full Documentation](cli/README.md) + +--- + +### Vanilla Chat + +A framework-free chat application using pure JavaScript and `@tanstack/ai-client`. + +**Tech Stack:** + +- Vanilla JavaScript (no frameworks!) +- `@tanstack/ai-client` (headless client) +- Vite (dev server) +- Connects to Python FastAPI backend + +**Features:** + +- āœ… Pure vanilla JavaScript +- āœ… Real-time streaming messages +- āœ… Beautiful, responsive UI +- āœ… No framework dependencies + +**Getting Started:** + +```bash +# Start the Python backend first +cd examples/python-fastapi +python anthropic-server.py + +# Then start the frontend +cd examples/vanilla-chat +pnpm install +pnpm dev +``` + +Open `http://localhost:3000` + +šŸ“– [Full Documentation](vanilla-chat/README.md) + +--- + +## Python Examples + +### Python FastAPI Server + +A FastAPI server that streams AI responses in Server-Sent Events (SSE) format, compatible with TanStack AI clients. 
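On the wire, the server emits standard Server-Sent Events frames: one `data: <json>` line per chunk, with a blank line terminating each event. A minimal client-side decoding sketch (assuming one JSON chunk per frame; only the `type` field is assumed here — the full `StreamChunk` shape lives in the TypeScript package):

```typescript
// Minimal sketch of decoding an SSE body into chunk objects. Assumes one
// "data: <json>" line per event; only `type` is assumed on each chunk.
type StreamChunk = { type: string; [key: string]: unknown }

function parseSseChunks(body: string): StreamChunk[] {
  return body
    .split('\n\n') // a blank line ends each SSE event
    .map((frame) => frame.trim())
    .filter((frame) => frame.startsWith('data:'))
    .map((frame) => JSON.parse(frame.slice('data:'.length).trim()) as StreamChunk)
}
```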
+ +**Features:** + +- āœ… FastAPI with SSE streaming +- āœ… Converts Anthropic/OpenAI events to `StreamChunk` format +- āœ… Compatible with `@tanstack/ai-client` +- āœ… Tool call support +- āœ… Type-safe with Pydantic + +**Getting Started:** + +```bash +cd examples/python-fastapi + +# Create virtual environment +python3 -m venv venv +source venv/bin/activate # On Windows: venv\Scripts\activate + +# Install dependencies +pip install -r requirements.txt + +# Set up environment +cp env.example .env +# Edit .env and add your ANTHROPIC_API_KEY or OPENAI_API_KEY + +# Run the server +python anthropic-server.py # or openai-server.py +``` + +**API Endpoints:** + +- `POST /chat` - Stream chat responses in SSE format +- `GET /health` - Health check + +**Usage with TypeScript Client:** + +```typescript +import { ChatClient, fetchServerSentEvents } from '@tanstack/ai-client' + +const client = new ChatClient({ + connection: fetchServerSentEvents('http://localhost:8000/chat'), +}) + +await client.sendMessage('Hello!') +``` + +šŸ“– [Full Documentation](python-fastapi/README.md) + +--- + +## PHP Examples + +### PHP Slim Framework Server + +A PHP Slim Framework server that streams AI responses in SSE format, with support for both Anthropic and OpenAI. 
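Whichever backend you run, the client sees the same chunk vocabulary (`content`, `tool_call`, `tool_result`, `done`). A hypothetical dispatcher sketch — the `delta` and `name` fields are illustrative for this sketch, not the exact wire format:

```typescript
// Hypothetical dispatch over the chunk types a client receives.
// `delta` and `name` are illustrative field names, not the real schema.
function describeChunk(chunk: {
  type: string
  delta?: string
  name?: string
}): string {
  switch (chunk.type) {
    case 'content':
      return `text: ${chunk.delta ?? ''}` // incremental model output
    case 'tool_call':
      return `tool call: ${chunk.name ?? 'unknown'}` // model invoked a tool
    case 'tool_result':
      return 'tool result' // output from tool execution
    case 'done':
      return 'done' // conversation complete
    default:
      return `unhandled chunk type: ${chunk.type}`
  }
}
```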
+ +**Features:** + +- āœ… Slim Framework with SSE streaming +- āœ… Converts Anthropic/OpenAI events to `StreamChunk` format +- āœ… Compatible with `@tanstack/ai-client` +- āœ… Tool call support +- āœ… PHP 8.1+ with type safety + +**Getting Started:** + +```bash +cd examples/php-slim + +# Install dependencies +composer install + +# Set up environment +cp env.example .env +# Edit .env and add your ANTHROPIC_API_KEY and/or OPENAI_API_KEY + +# Run the server +composer start-anthropic # Runs on port 8000 +# or +composer start-openai # Runs on port 8001 +``` + +**API Endpoints:** + +- `POST /chat` - Stream chat responses in SSE format +- `GET /health` - Health check + +**Usage with TypeScript Client:** + +```typescript +import { ChatClient, fetchServerSentEvents } from '@tanstack/ai-client' + +const client = new ChatClient({ + connection: fetchServerSentEvents('http://localhost:8000/chat'), +}) + +await client.sendMessage('Hello!') +``` + +šŸ“– [Full Documentation](php-slim/README.md) + +--- + +## Architecture Patterns + +### Full-Stack TypeScript + +Use TanStack AI end-to-end in TypeScript: + +``` +Frontend (React) + ↓ (useChat hook) +@tanstack/ai-react + ↓ (ChatClient) +@tanstack/ai-client + ↓ (SSE/HTTP) +Backend (TanStack Start API Route) + ↓ (chat() function) +@tanstack/ai + ↓ (adapter) +AI Provider (OpenAI/Anthropic/etc.) +``` + +**Example:** [TanStack Chat (ts-chat)](ts-chat/README.md) + +### Multi-Language Backend + +Use Python or PHP for the backend, TypeScript for the frontend: + +``` +Frontend (Vanilla JS/React/Vue/etc.) + ↓ (ChatClient) +@tanstack/ai-client + ↓ (SSE/HTTP) +Backend (Python FastAPI or PHP Slim) + ↓ (tanstack-ai or tanstack/ai) +Stream Conversion & Message Formatting + ↓ (provider SDK) +AI Provider (OpenAI/Anthropic/etc.) 
+``` + +**Examples:** + +- [Python FastAPI](python-fastapi/README.md) + [Vanilla Chat](vanilla-chat/README.md) +- [PHP Slim](php-slim/README.md) + any frontend with `@tanstack/ai-client` + +### CLI Tool + +Use TanStack AI in command-line applications: + +``` +CLI + ↓ (chat() function) +@tanstack/ai + ↓ (adapter) +AI Provider (OpenAI/Anthropic/Ollama/Gemini) +``` + +**Example:** [CLI Example](cli/README.md) + +--- + +## Common Patterns + +### Server-Sent Events (SSE) Streaming + +All examples use SSE for real-time streaming: + +**Backend (TypeScript):** + +```typescript +import { chat, toStreamResponse } from '@tanstack/ai' +import { openai } from '@tanstack/ai-openai' + +const stream = chat({ + adapter: openai(), + model: 'gpt-4o', + messages, +}) + +return toStreamResponse(stream) +``` + +**Backend (Python):** + +```python +from tanstack_ai import StreamChunkConverter, format_sse_chunk + +async for event in anthropic_stream: + chunks = await converter.convert_event(event) + for chunk in chunks: + yield format_sse_chunk(chunk) +``` + +**Backend (PHP):** + +```php +use TanStack\AI\StreamChunkConverter; +use TanStack\AI\SSEFormatter; + +foreach ($anthropicStream as $event) { + $chunks = $converter->convertEvent($event); + foreach ($chunks as $chunk) { + echo SSEFormatter::formatChunk($chunk); + } +} +``` + +**Frontend:** + +```typescript +import { ChatClient, fetchServerSentEvents } from '@tanstack/ai-client' + +const client = new ChatClient({ + connection: fetchServerSentEvents('/api/chat'), +}) +``` + +### Automatic Tool Execution + +The TypeScript backend (`@tanstack/ai`) automatically handles tool execution: + +```typescript +import { chat, tool } from '@tanstack/ai' + +const weatherTool = tool({ + function: { + name: 'getWeather', + description: 'Get weather for a location', + parameters: { + /* ... 
*/ + }, + }, + execute: async (args) => { + // This is called automatically by the SDK + return JSON.stringify({ temp: 72, condition: 'sunny' }) + }, +}) + +const stream = chat({ + adapter: openai(), + model: 'gpt-4o', + messages, + tools: [weatherTool], // SDK executes these automatically +}) +``` + +Clients receive: + +- `content` chunks - text from the model +- `tool_call` chunks - when the model calls a tool +- `tool_result` chunks - results from tool execution +- `done` chunk - conversation complete + +--- + +## Development Tips + +### Running Multiple Examples + +You can run backend and frontend examples together: + +```bash +# Terminal 1: Start Python backend +cd examples/python-fastapi +python anthropic-server.py + +# Terminal 2: Start vanilla frontend +cd examples/vanilla-chat +pnpm dev + +# Terminal 3: Start ts-chat (full-stack) +cd examples/ts-chat +pnpm start +``` + +### Environment Variables + +Each example has an `env.example` file. Copy it to `.env` and add your API keys: + +```bash +# TypeScript examples +OPENAI_API_KEY=sk-... +ANTHROPIC_API_KEY=sk-ant-... + +# Python examples +ANTHROPIC_API_KEY=sk-ant-... +OPENAI_API_KEY=sk-... + +# PHP examples +ANTHROPIC_API_KEY=sk-ant-... +OPENAI_API_KEY=sk-... +``` + +### Building for Production + +**TypeScript:** + +```bash +pnpm build +``` + +**Python:** + +```bash +# Use a production ASGI server +uvicorn anthropic-server:app --host 0.0.0.0 --port 8000 +``` + +**PHP:** + +```bash +# Use a production web server (Apache, Nginx, etc.) +# See php-slim/README.md for deployment details +``` + +--- + +## Contributing + +When adding new examples: + +1. **Create a README.md** with setup instructions +2. **Add an env.example** file with required environment variables +3. **Document the tech stack** and key features +4. **Include usage examples** with code snippets +5. 
**Update this README** to list your example + +--- + +## Learn More + +- šŸ“– [Main README](../README.md) - Project overview +- šŸ“– [Documentation](../docs/) - Comprehensive guides +- šŸ“– [TypeScript Packages](../packages/typescript/) - Core libraries +- šŸ“– [Python Package](../packages/python/tanstack-ai/) - Python utilities +- šŸ“– [PHP Package](../packages/php/tanstack-ai/) - PHP utilities + +--- + +Built with ā¤ļø by the TanStack community diff --git a/examples/php-slim/README.md b/examples/php-slim/README.md index b2ea4ea8f..2d74876d6 100644 --- a/examples/php-slim/README.md +++ b/examples/php-slim/README.md @@ -45,26 +45,31 @@ cp env.example .env 4. **Run the server:** **For Anthropic:** + ```bash php -S 0.0.0.0:8000 -t public public/anthropic-server.php ``` Or using Composer: + ```bash composer start-anthropic ``` **For OpenAI:** + ```bash php -S 0.0.0.0:8001 -t public public/openai-server.php ``` Or using Composer: + ```bash composer start-openai ``` The servers will start on: + - Anthropic: `http://localhost:8000` - OpenAI: `http://localhost:8001` @@ -108,13 +113,13 @@ Health check endpoint. 
This server is compatible with the TypeScript TanStack AI client: ```typescript -import { ChatClient, fetchServerSentEvents } from "@tanstack/ai-client"; +import { ChatClient, fetchServerSentEvents } from '@tanstack/ai-client' const client = new ChatClient({ - connection: fetchServerSentEvents("http://localhost:8000/chat"), -}); + connection: fetchServerSentEvents('http://localhost:8000/chat'), +}) -await client.sendMessage("Hello!"); +await client.sendMessage('Hello!') ``` ## StreamChunk Format @@ -131,6 +136,7 @@ See `packages/typescript/ai/src/types.ts` for the full TypeScript type definitio ## Supported Providers The converter currently supports: + - āœ… **Anthropic** (Claude models) - fully implemented - āœ… **OpenAI** (GPT models) - fully implemented @@ -169,6 +175,7 @@ The converter package is installed as a local dependency, making it easy to deve To use the local `tanstack/ai` package during development: 1. Add to `composer.json`: + ```json { "repositories": [ @@ -184,7 +191,7 @@ To use the local `tanstack/ai` package during development: ``` 2. 
Run: + ```bash composer update tanstack/ai ``` - diff --git a/examples/php-slim/composer.json b/examples/php-slim/composer.json index c7d997e9b..5422a972c 100644 --- a/examples/php-slim/composer.json +++ b/examples/php-slim/composer.json @@ -1,40 +1,40 @@ { - "name": "tanstack/ai-php-example", - "description": "PHP Slim Framework example for TanStack AI", - "type": "project", - "repositories": [ - { - "type": "path", - "url": "../../packages/php/tanstack-ai" - } - ], - "require": { - "php": ">=8.1", - "slim/slim": "^4.12", - "slim/psr7": "^1.6", - "vlucas/phpdotenv": "^5.5", - "monolog/monolog": "^3.0", - "anthropic-ai/sdk": "^0.3.0", - "openai-php/client": "^0.10.0", - "tanstack/ai": "@dev", - "symfony/http-client": "^7.3" - }, - "require-dev": { - "slim/psr7": "^1.6" - }, - "autoload": { - "psr-4": { - "TanStack\\AI\\Example\\": "src/" - } - }, - "scripts": { - "start": "php -S 0.0.0.0:8000 -t public public/index.php", - "start-anthropic": "php -S 0.0.0.0:8000 -t public public/anthropic-server.php", - "start-openai": "php -S 0.0.0.0:8001 -t public public/openai-server.php" - }, - "config": { - "allow-plugins": { - "php-http/discovery": true - } + "name": "tanstack/ai-php-example", + "description": "PHP Slim Framework example for TanStack AI", + "type": "project", + "repositories": [ + { + "type": "path", + "url": "../../packages/php/tanstack-ai" } + ], + "require": { + "php": ">=8.1", + "slim/slim": "^4.12", + "slim/psr7": "^1.6", + "vlucas/phpdotenv": "^5.5", + "monolog/monolog": "^3.0", + "anthropic-ai/sdk": "^0.3.0", + "openai-php/client": "^0.10.0", + "tanstack/ai": "@dev", + "symfony/http-client": "^7.3" + }, + "require-dev": { + "slim/psr7": "^1.6" + }, + "autoload": { + "psr-4": { + "TanStack\\AI\\Example\\": "src/" + } + }, + "scripts": { + "start": "php -S 0.0.0.0:8000 -t public public/index.php", + "start-anthropic": "php -S 0.0.0.0:8000 -t public public/anthropic-server.php", + "start-openai": "php -S 0.0.0.0:8001 -t public public/openai-server.php" + 
}, + "config": { + "allow-plugins": { + "php-http/discovery": true + } + } } diff --git a/examples/php-slim/package.json b/examples/php-slim/package.json new file mode 100644 index 000000000..d8747401d --- /dev/null +++ b/examples/php-slim/package.json @@ -0,0 +1,5 @@ +{ + "name": "php-slim", + "version": "0.0.0", + "private": true +} diff --git a/examples/python-fastapi/README.md b/examples/python-fastapi/README.md index 55104bb8f..a7cd78cf0 100644 --- a/examples/python-fastapi/README.md +++ b/examples/python-fastapi/README.md @@ -34,7 +34,6 @@ python3 -m venv venv ``` 3. **Activate the virtual environment:** - - **On macOS/Linux:** ```bash @@ -129,13 +128,13 @@ Health check endpoint. This server is compatible with the TypeScript TanStack AI client: ```typescript -import { ChatClient, fetchServerSentEvents } from "@tanstack/ai-client"; +import { ChatClient, fetchServerSentEvents } from '@tanstack/ai-client' const client = new ChatClient({ - connection: fetchServerSentEvents("http://localhost:8000/chat"), -}); + connection: fetchServerSentEvents('http://localhost:8000/chat'), +}) -await client.sendMessage("Hello!"); +await client.sendMessage('Hello!') ``` ## StreamChunk Format @@ -152,6 +151,7 @@ See `packages/typescript/ai/src/types.ts` for the full TypeScript type definitio ## Supported Providers The converter currently supports: + - āœ… **Anthropic** (Claude models) - fully implemented - āœ… **OpenAI** (GPT models) - converter implemented, ready to use diff --git a/examples/python-fastapi/package.json b/examples/python-fastapi/package.json new file mode 100644 index 000000000..ec87def56 --- /dev/null +++ b/examples/python-fastapi/package.json @@ -0,0 +1,5 @@ +{ + "name": "python-fastapi", + "version": "0.0.0", + "private": true +} diff --git a/examples/ts-chat/.cta.json b/examples/ts-chat/.cta.json index 2371ddc70..8142313ea 100644 --- a/examples/ts-chat/.cta.json +++ b/examples/ts-chat/.cta.json @@ -8,10 +8,5 @@ "git": true, "version": 1, "framework": 
"react-cra", - "chosenAddOns": [ - "nitro", - "start", - "tanchat", - "store" - ] -} \ No newline at end of file + "chosenAddOns": ["nitro", "start", "tanchat", "store"] +} diff --git a/examples/ts-chat/README.md b/examples/ts-chat/README.md index 0db61b98a..c2671c43d 100644 --- a/examples/ts-chat/README.md +++ b/examples/ts-chat/README.md @@ -94,7 +94,7 @@ Now that you have two routes you can use a `Link` component to navigate between To use SPA (Single Page Application) navigation you will need to import the `Link` component from `@tanstack/react-router`. ```tsx -import { Link } from "@tanstack/react-router"; +import { Link } from '@tanstack/react-router' ``` Then anywhere in your JSX you can use it like so: @@ -114,10 +114,10 @@ In the File Based Routing setup the layout is located in `src/routes/__root.tsx` Here is an example layout that includes a header: ```tsx -import { Outlet, createRootRoute } from "@tanstack/react-router"; -import { TanStackRouterDevtools } from "@tanstack/react-router-devtools"; +import { Outlet, createRootRoute } from '@tanstack/react-router' +import { TanStackRouterDevtools } from '@tanstack/react-router-devtools' -import { Link } from "@tanstack/react-router"; +import { Link } from '@tanstack/react-router' export const Route = createRootRoute({ component: () => ( @@ -132,7 +132,7 @@ export const Route = createRootRoute({ ), -}); +}) ``` The `` component is not required so you can remove it if you don't want it in your layout. @@ -148,26 +148,26 @@ For example: ```tsx const peopleRoute = createRoute({ getParentRoute: () => rootRoute, - path: "/people", + path: '/people', loader: async () => { - const response = await fetch("https://swapi.dev/api/people"); + const response = await fetch('https://swapi.dev/api/people') return response.json() as Promise<{ results: { - name: string; - }[]; - }>; + name: string + }[] + }> }, component: () => { - const data = peopleRoute.useLoaderData(); + const data = peopleRoute.useLoaderData() return (
    {data.results.map((person) => (
  • {person.name}
  • ))}
- ); + ) }, -}); +}) ``` Loaders simplify your data fetching logic dramatically. Check out more information in the [Loader documentation](https://tanstack.com/router/latest/docs/framework/react/guide/data-loading#loader-parameters). @@ -185,29 +185,29 @@ pnpm add @tanstack/react-query @tanstack/react-query-devtools Next we'll need to create a query client and provider. We recommend putting those in `main.tsx`. ```tsx -import { QueryClient, QueryClientProvider } from "@tanstack/react-query"; +import { QueryClient, QueryClientProvider } from '@tanstack/react-query' // ... -const queryClient = new QueryClient(); +const queryClient = new QueryClient() // ... if (!rootElement.innerHTML) { - const root = ReactDOM.createRoot(rootElement); + const root = ReactDOM.createRoot(rootElement) root.render( - - ); + , + ) } ``` You can also add TanStack Query Devtools to the root route (optional). ```tsx -import { ReactQueryDevtools } from "@tanstack/react-query-devtools"; +import { ReactQueryDevtools } from '@tanstack/react-query-devtools' const rootRoute = createRootRoute({ component: () => ( @@ -217,25 +217,25 @@ const rootRoute = createRootRoute({ ), -}); +}) ``` Now you can use `useQuery` to fetch your data. ```tsx -import { useQuery } from "@tanstack/react-query"; +import { useQuery } from '@tanstack/react-query' -import "./App.css"; +import './App.css' function App() { const { data } = useQuery({ - queryKey: ["people"], + queryKey: ['people'], queryFn: () => - fetch("https://swapi.dev/api/people") + fetch('https://swapi.dev/api/people') .then((res) => res.json()) .then((data) => data.results as { name: string }[]), initialData: [], - }); + }) return (
@@ -245,10 +245,10 @@ function App() { ))}
- ); + ) } -export default App; +export default App ``` You can find out everything you need to know on how to use React-Query in the [React-Query documentation](https://tanstack.com/query/latest/docs/framework/react/overview). @@ -266,24 +266,24 @@ pnpm add @tanstack/store Now let's create a simple counter in the `src/App.tsx` file as a demonstration. ```tsx -import { useStore } from "@tanstack/react-store"; -import { Store } from "@tanstack/store"; -import "./App.css"; +import { useStore } from '@tanstack/react-store' +import { Store } from '@tanstack/store' +import './App.css' -const countStore = new Store(0); +const countStore = new Store(0) function App() { - const count = useStore(countStore); + const count = useStore(countStore) return (
- ); + ) } -export default App; +export default App ``` One of the many nice features of TanStack Store is the ability to derive state from other state. That derived state will update when the base state updates. @@ -291,21 +291,21 @@ One of the many nice features of TanStack Store is the ability to derive state f Let's check this out by doubling the count using derived state. ```tsx -import { useStore } from "@tanstack/react-store"; -import { Store, Derived } from "@tanstack/store"; -import "./App.css"; +import { useStore } from '@tanstack/react-store' +import { Store, Derived } from '@tanstack/store' +import './App.css' -const countStore = new Store(0); +const countStore = new Store(0) const doubledStore = new Derived({ fn: () => countStore.state * 2, deps: [countStore], -}); -doubledStore.mount(); +}) +doubledStore.mount() function App() { - const count = useStore(countStore); - const doubledCount = useStore(doubledStore); + const count = useStore(countStore) + const doubledCount = useStore(doubledStore) return (
@@ -314,10 +314,10 @@ function App() {
Doubled - {doubledCount}
- ); + ) } -export default App; +export default App ``` We use the `Derived` class to create a new store that is derived from another store. The `Derived` class has a `mount` method that will start the derived store updating. diff --git a/examples/ts-chat/api-verification.ts b/examples/ts-chat/api-verification.ts index 8aa07479d..336ce12bb 100644 --- a/examples/ts-chat/api-verification.ts +++ b/examples/ts-chat/api-verification.ts @@ -1 +1 @@ -export { }; +export {} diff --git a/examples/ts-chat/package.json b/examples/ts-chat/package.json index 45a5dfcdd..b76671bf3 100644 --- a/examples/ts-chat/package.json +++ b/examples/ts-chat/package.json @@ -9,10 +9,10 @@ "test": "exit 0" }, "dependencies": { - "@ai-sdk/openai": "^2.0.52", + "@ai-sdk/openai": "^2.0.73", "@ai-sdk/provider": "^2.0.0", - "@ai-sdk/provider-utils": "^3.0.12", - "@tailwindcss/vite": "^4.0.6", + "@ai-sdk/provider-utils": "^3.0.17", + "@tailwindcss/vite": "^4.1.17", "@tanstack/ai": "workspace:*", "@tanstack/ai-anthropic": "workspace:*", "@tanstack/ai-client": "workspace:*", @@ -21,39 +21,41 @@ "@tanstack/ai-openai": "workspace:*", "@tanstack/ai-react": "workspace:*", "@tanstack/ai-react-ui": "workspace:*", - "@tanstack/nitro-v2-vite-plugin": "^1.132.31", - "@tanstack/react-devtools": "^0.8.0", - "@tanstack/react-router": "^1.132.0", - "@tanstack/react-router-devtools": "^1.132.0", - "@tanstack/react-router-ssr-query": "^1.131.7", - "@tanstack/react-start": "^1.132.0", - "@tanstack/react-store": "^0.7.0", - "@tanstack/router-plugin": "^1.132.0", - "@tanstack/store": "^0.7.0", + "@tanstack/nitro-v2-vite-plugin": "^1.139.0", + "@tanstack/react-devtools": "^0.8.2", + "@tanstack/react-router": "^1.139.7", + "@tanstack/react-router-devtools": "^1.139.7", + "@tanstack/react-router-ssr-query": "^1.139.7", + "@tanstack/react-start": "^1.139.8", + "@tanstack/react-store": "^0.8.0", + "@tanstack/router-plugin": "^1.139.7", + "@tanstack/store": "^0.8.0", "highlight.js": "^11.11.1", - "lucide-react": "^0.544.0", 
- "react": "^19.0.0", - "react-dom": "^19.0.0", + "lucide-react": "^0.555.0", + "react": "^19.2.0", + "react-dom": "^19.2.0", "react-markdown": "^10.1.0", - "rehype-highlight": "^7.0.0", + "rehype-highlight": "^7.0.2", "rehype-raw": "^7.0.0", "rehype-sanitize": "^6.0.0", "remark-gfm": "^4.0.1", - "tailwindcss": "^4.0.6", + "tailwindcss": "^4.1.17", "vite-tsconfig-paths": "^5.1.4", - "zod": "^4.1.11" + "zod": "^4.1.13" }, "devDependencies": { - "@testing-library/dom": "^10.4.0", - "@testing-library/react": "^16.2.0", - "@types/node": "^22.10.2", - "@types/react": "^19.0.8", - "@types/react-dom": "^19.0.3", - "@vitejs/plugin-react": "^5.0.4", - "jsdom": "^27.0.0", - "typescript": "^5.7.2", - "vite": "^7.1.7", - "vitest": "^4.0.13", + "@tanstack/devtools-vite": "^0.3.11", + "@tanstack/react-ai-devtools": "workspace:*", + "@testing-library/dom": "^10.4.1", + "@testing-library/react": "^16.3.0", + "@types/node": "^24.10.1", + "@types/react": "^19.2.7", + "@types/react-dom": "^19.2.3", + "@vitejs/plugin-react": "^5.1.1", + "jsdom": "^27.2.0", + "typescript": "5.9.3", + "vite": "^7.2.4", + "vitest": "^4.0.14", "web-vitals": "^5.1.0" } } diff --git a/examples/ts-chat/src/components/Approval.tsx b/examples/ts-chat/src/components/Approval.tsx index 2390d54bf..5b7e71bda 100644 --- a/examples/ts-chat/src/components/Approval.tsx +++ b/examples/ts-chat/src/components/Approval.tsx @@ -1,8 +1,8 @@ export interface ApprovalProps { - toolName: string; - input: any; - onApprove: () => void; - onDeny: () => void; + toolName: string + input: any + onApprove: () => void + onDeny: () => void } export default function Approval({ @@ -34,6 +34,5 @@ export default function Approval({
- ); + ) } - diff --git a/examples/ts-chat/src/components/Header.tsx b/examples/ts-chat/src/components/Header.tsx index 44b204671..57745b7b0 100644 --- a/examples/ts-chat/src/components/Header.tsx +++ b/examples/ts-chat/src/components/Header.tsx @@ -1,10 +1,10 @@ -import { Link } from "@tanstack/react-router"; +import { Link } from '@tanstack/react-router' -import { useState } from "react"; -import { Guitar, Home, Menu, X } from "lucide-react"; +import { useState } from 'react' +import { Guitar, Home, Menu, X } from 'lucide-react' export default function Header() { - const [isOpen, setIsOpen] = useState(false); + const [isOpen, setIsOpen] = useState(false) return ( <> @@ -29,7 +29,7 @@ export default function Header() { - ); + ) } diff --git a/examples/ts-chat/src/components/example-GuitarRecommendation.tsx b/examples/ts-chat/src/components/example-GuitarRecommendation.tsx index ed2e7608a..8f04cd99e 100644 --- a/examples/ts-chat/src/components/example-GuitarRecommendation.tsx +++ b/examples/ts-chat/src/components/example-GuitarRecommendation.tsx @@ -1,12 +1,12 @@ -import { useNavigate } from "@tanstack/react-router"; +import { useNavigate } from '@tanstack/react-router' -import guitars from "../data/example-guitars"; +import guitars from '../data/example-guitars' export default function GuitarRecommendation({ id }: { id: string }) { - const navigate = useNavigate(); - const guitar = guitars.find((guitar) => guitar.id === +id); + const navigate = useNavigate() + const guitar = guitars.find((guitar) => guitar.id === +id) if (!guitar) { - return null; + return null } return (
@@ -29,9 +29,9 @@ export default function GuitarRecommendation({ id }: { id: string }) {
- ); + ) } diff --git a/examples/ts-chat/src/lib/guitar-tools.ts b/examples/ts-chat/src/lib/guitar-tools.ts index 3a36c745a..c50303341 100644 --- a/examples/ts-chat/src/lib/guitar-tools.ts +++ b/examples/ts-chat/src/lib/guitar-tools.ts @@ -1,98 +1,98 @@ -import { tool } from "@tanstack/ai"; -import guitars from "@/data/example-guitars"; +import { tool } from '@tanstack/ai' +import guitars from '@/data/example-guitars' export const getGuitarsTool = tool({ - type: "function", + type: 'function', function: { - name: "getGuitars", - description: "Get all products from the database", + name: 'getGuitars', + description: 'Get all products from the database', parameters: { - type: "object", + type: 'object', properties: {}, required: [], }, }, execute: async () => { - return JSON.stringify(guitars); + return JSON.stringify(guitars) }, -}); +}) export const recommendGuitarTool = tool({ - type: "function", + type: 'function', function: { - name: "recommendGuitar", + name: 'recommendGuitar', description: - "REQUIRED tool to display a guitar recommendation to the user. This tool MUST be used whenever recommending a guitar - do NOT write recommendations yourself. This displays the guitar in a special appealing format with a buy button.", + 'REQUIRED tool to display a guitar recommendation to the user. This tool MUST be used whenever recommending a guitar - do NOT write recommendations yourself. 
This displays the guitar in a special appealing format with a buy button.', parameters: { - type: "object", + type: 'object', properties: { id: { - type: "string", + type: 'string', description: - "The ID of the guitar to recommend (from the getGuitars results)", + 'The ID of the guitar to recommend (from the getGuitars results)', }, }, - required: ["id"], + required: ['id'], }, }, -}); +}) export const getPersonalGuitarPreferenceTool = tool({ - type: "function", + type: 'function', function: { - name: "getPersonalGuitarPreference", + name: 'getPersonalGuitarPreference', description: "Get the user's guitar preference from their local browser storage", parameters: { - type: "object", + type: 'object', properties: {}, }, }, // No execute = client-side tool -}); +}) export const addToWishListTool = tool({ - type: "function", + type: 'function', function: { - name: "addToWishList", + name: 'addToWishList', description: "Add a guitar to the user's wish list (requires approval)", parameters: { - type: "object", + type: 'object', properties: { - guitarId: { type: "string" }, + guitarId: { type: 'string' }, }, - required: ["guitarId"], + required: ['guitarId'], }, }, needsApproval: true, // No execute = client-side but needs approval -}); +}) export const addToCartTool = tool({ - type: "function", + type: 'function', function: { - name: "addToCart", - description: "Add a guitar to the shopping cart (requires approval)", + name: 'addToCart', + description: 'Add a guitar to the shopping cart (requires approval)', parameters: { - type: "object", + type: 'object', properties: { - guitarId: { type: "string" }, - quantity: { type: "number" }, + guitarId: { type: 'string' }, + quantity: { type: 'number' }, }, - required: ["guitarId", "quantity"], + required: ['guitarId', 'quantity'], }, }, needsApproval: true, execute: async (args) => { return JSON.stringify({ success: true, - cartId: "CART_" + Date.now(), + cartId: 'CART_' + Date.now(), guitarId: args.guitarId, quantity: 
args.quantity, totalItems: args.quantity, - }); + }) }, -}); +}) export const allTools = [ getGuitarsTool, @@ -100,4 +100,4 @@ export const allTools = [ getPersonalGuitarPreferenceTool, addToWishListTool, addToCartTool, -]; +] diff --git a/examples/ts-chat/src/lib/stub-adapter.ts b/examples/ts-chat/src/lib/stub-adapter.ts index b50432999..0d519594c 100644 --- a/examples/ts-chat/src/lib/stub-adapter.ts +++ b/examples/ts-chat/src/lib/stub-adapter.ts @@ -1,5 +1,9 @@ -import type { AIAdapter, ChatCompletionOptions, StreamChunk } from "@tanstack/ai"; -import { stubLLM } from "./stub-llm"; +import type { + AIAdapter, + ChatCompletionOptions, + StreamChunk, +} from '@tanstack/ai' +import { stubLLM } from './stub-llm' /** * Stub adapter for testing without using real LLM tokens @@ -18,11 +22,12 @@ export function stubAdapter(): AIAdapter< any > { return { - name: "stub", - async *chatStream(options: ChatCompletionOptions): AsyncIterable { + name: 'stub', + async *chatStream( + options: ChatCompletionOptions, + ): AsyncIterable { // Use stub LLM instead of real API - yield* stubLLM(options.messages); + yield* stubLLM(options.messages) }, - } as any; + } as any } - diff --git a/examples/ts-chat/src/lib/stub-llm.ts b/examples/ts-chat/src/lib/stub-llm.ts index 203ae04b4..064851151 100644 --- a/examples/ts-chat/src/lib/stub-llm.ts +++ b/examples/ts-chat/src/lib/stub-llm.ts @@ -1,67 +1,67 @@ -import type { ModelMessage, StreamChunk } from "@tanstack/ai"; +import type { ModelMessage, StreamChunk } from '@tanstack/ai' /** * Stub LLM for testing tool calls without burning tokens * Detects which tool to call from user message keywords */ export async function* stubLLM( - messages: ModelMessage[] + messages: ModelMessage[], ): AsyncIterable { - const lastMessage = messages[messages.length - 1]; - const userMessage = lastMessage?.content?.toLowerCase() || ""; + const lastMessage = messages[messages.length - 1] + const userMessage = lastMessage?.content?.toLowerCase() || '' - const 
baseId = `stub_${Date.now()}`; - const timestamp = Date.now(); + const baseId = `stub_${Date.now()}` + const timestamp = Date.now() // Check if we have any assistant messages with tool calls already // If so, this is a continuation after approval/execution, not a new request // Handle both ModelMessage (toolCalls) and UIMessage (parts) formats const hasExistingToolCalls = messages.some((m) => { - if (m.role !== "assistant") return false; + if (m.role !== 'assistant') return false // Check ModelMessage format - if (m.toolCalls && m.toolCalls.length > 0) return true; + if (m.toolCalls && m.toolCalls.length > 0) return true // Check UIMessage format if ((m as any).parts) { - const parts = (m as any).parts; - return parts.some((p: any) => p.type === "tool-call"); + const parts = (m as any).parts + return parts.some((p: any) => p.type === 'tool-call') } - return false; - }); - - if (hasExistingToolCalls && lastMessage?.role === "assistant") { + return false + }) + + if (hasExistingToolCalls && lastMessage?.role === 'assistant') { // This means we're being called after an approval/tool execution // Check if the user approved or denied by looking at tool results - let wasApproved = false; - let wasDenied = false; - + let wasApproved = false + let wasDenied = false + // Check for tool results - const toolResults = messages.filter((m) => m.role === "tool"); + const toolResults = messages.filter((m) => m.role === 'tool') if (toolResults.length > 0) { // Check if any were successful or had errors for (const result of toolResults) { try { - const parsed = JSON.parse(result.content || "{}"); - if (parsed.error && parsed.error.includes("declined")) { - wasDenied = true; + const parsed = JSON.parse(result.content || '{}') + if (parsed.error && parsed.error.includes('declined')) { + wasDenied = true } else if (parsed.success || !parsed.error) { - wasApproved = true; + wasApproved = true } } catch { // If we can't parse, assume success - wasApproved = true; + wasApproved = true } } 
} else { // No tool results yet, must have just gotten approval response // Check the assistant message for approval status in UIMessage format if ((lastMessage as any).parts) { - const parts = (lastMessage as any).parts; + const parts = (lastMessage as any).parts for (const part of parts) { - if (part.type === "tool-call" && part.approval) { + if (part.type === 'tool-call' && part.approval) { if (part.approval.approved === true) { - wasApproved = true; + wasApproved = true } else if (part.approval.approved === false) { - wasDenied = true; + wasDenied = true } } } @@ -69,325 +69,331 @@ export async function* stubLLM( // This is a ModelMessage, check if it has approval info in content // (won't have it here, but we can infer from lack of tool results) // Default to approved for now - wasApproved = true; + wasApproved = true } } - - let response = ""; + + let response = '' if (wasDenied) { - response = "No worries! Maybe another time. Let me know if you need anything else."; + response = + 'No worries! Maybe another time. Let me know if you need anything else.' } else if (wasApproved) { - response = "All set! Let me know if you need anything else."; + response = 'All set! Let me know if you need anything else.' } else { - response = "Let me know if you need anything else!"; + response = 'Let me know if you need anything else!' 
} - + for (let i = 0; i < response.length; i++) { yield { - type: "content", + type: 'content', id: baseId, - model: "stub-llm", + model: 'stub-llm', timestamp, delta: response[i], content: response.substring(0, i + 1), - role: "assistant", - }; + role: 'assistant', + } } yield { - type: "done", + type: 'done', id: baseId, - model: "stub-llm", + model: 'stub-llm', timestamp, - finishReason: "stop", - }; - return; + finishReason: 'stop', + } + return } // Check if this is a follow-up after tool execution - const hasToolResults = messages.some((m) => m.role === "tool"); - + const hasToolResults = messages.some((m) => m.role === 'tool') + if (hasToolResults) { // Check if user approved or denied const lastAssistant = [...messages] .reverse() - .find((m) => m.role === "assistant" && m.toolCalls); - + .find((m) => m.role === 'assistant' && m.toolCalls) + if (lastAssistant) { // Look for approval in tool results const approvedTools = messages - .filter((m) => m.role === "tool") + .filter((m) => m.role === 'tool') .filter((m) => { try { - const result = JSON.parse(m.content || "{}"); - return !result.error; + const result = JSON.parse(m.content || '{}') + return !result.error } catch { - return true; + return true } - }); - + }) + const deniedTools = messages - .filter((m) => m.role === "tool") + .filter((m) => m.role === 'tool') .filter((m) => { try { - const result = JSON.parse(m.content || "{}"); - return result.error?.includes("declined"); + const result = JSON.parse(m.content || '{}') + return result.error?.includes('declined') } catch { - return false; + return false } - }); + }) - let responseText = ""; + let responseText = '' if (approvedTools.length > 0) { - responseText = "Good for you! I've processed that request."; + responseText = "Good for you! I've processed that request." } else if (deniedTools.length > 0) { - responseText = "Bummer! Maybe another time."; + responseText = 'Bummer! Maybe another time.' } else { - responseText = "Complete! 
If you need anything else, feel free to ask."; + responseText = 'Complete! If you need anything else, feel free to ask.' } // Send final response for (const char of responseText) { - const accumulated = responseText.substring(0, responseText.indexOf(char) + 1); + const accumulated = responseText.substring( + 0, + responseText.indexOf(char) + 1, + ) yield { - type: "content", + type: 'content', id: baseId, - model: "stub-llm", + model: 'stub-llm', timestamp, delta: char, content: accumulated, - role: "assistant", - }; + role: 'assistant', + } } yield { - type: "done", + type: 'done', id: baseId, - model: "stub-llm", + model: 'stub-llm', timestamp, - finishReason: "stop", - }; - return; + finishReason: 'stop', + } + return } } // Detect which tool to call based on user message - if (userMessage.includes("preference")) { + if (userMessage.includes('preference')) { // Send initial text - const initText = "Let me check your preferences..."; + const initText = 'Let me check your preferences...' 
for (let i = 0; i < initText.length; i++) { yield { - type: "content", + type: 'content', id: baseId, - model: "stub-llm", + model: 'stub-llm', timestamp, delta: initText[i], content: initText.substring(0, i + 1), - role: "assistant", - }; + role: 'assistant', + } } // Call getPersonalGuitarPreference yield { - type: "tool_call", + type: 'tool_call', id: baseId, - model: "stub-llm", + model: 'stub-llm', timestamp, toolCall: { id: `call_${Date.now()}`, - type: "function", + type: 'function', function: { - name: "getPersonalGuitarPreference", - arguments: "{}", + name: 'getPersonalGuitarPreference', + arguments: '{}', }, }, index: 0, - }; + } yield { - type: "done", + type: 'done', id: baseId, - model: "stub-llm", + model: 'stub-llm', timestamp, - finishReason: "tool_calls", - }; - return; + finishReason: 'tool_calls', + } + return } - if (userMessage.includes("recommend") || userMessage.includes("acoustic")) { + if (userMessage.includes('recommend') || userMessage.includes('acoustic')) { // Send initial text - const initText = "Let me find the perfect guitar for you!"; + const initText = 'Let me find the perfect guitar for you!' 
for (let i = 0; i < initText.length; i++) { yield { - type: "content", + type: 'content', id: baseId, - model: "stub-llm", + model: 'stub-llm', timestamp, delta: initText[i], content: initText.substring(0, i + 1), - role: "assistant", - }; + role: 'assistant', + } } // Call getGuitars then recommendGuitar - const getGuitarsId = `call_${Date.now()}_1`; + const getGuitarsId = `call_${Date.now()}_1` yield { - type: "tool_call", + type: 'tool_call', id: baseId, - model: "stub-llm", + model: 'stub-llm', timestamp, toolCall: { id: getGuitarsId, - type: "function", + type: 'function', function: { - name: "getGuitars", - arguments: "{}", + name: 'getGuitars', + arguments: '{}', }, }, index: 0, - }; + } yield { - type: "done", + type: 'done', id: baseId, - model: "stub-llm", + model: 'stub-llm', timestamp, - finishReason: "tool_calls", - }; + finishReason: 'tool_calls', + } // After getGuitars result, call recommendGuitar - const recommendId = `call_${Date.now()}_2`; + const recommendId = `call_${Date.now()}_2` yield { - type: "tool_call", - id: baseId + "_2", - model: "stub-llm", + type: 'tool_call', + id: baseId + '_2', + model: 'stub-llm', timestamp: timestamp + 100, toolCall: { id: recommendId, - type: "function", + type: 'function', function: { - name: "recommendGuitar", - arguments: JSON.stringify({ id: "6" }), + name: 'recommendGuitar', + arguments: JSON.stringify({ id: '6' }), }, }, index: 0, - }; + } yield { - type: "done", - id: baseId + "_2", - model: "stub-llm", + type: 'done', + id: baseId + '_2', + model: 'stub-llm', timestamp: timestamp + 100, - finishReason: "tool_calls", - }; - return; + finishReason: 'tool_calls', + } + return } - if (userMessage.includes("wish list")) { + if (userMessage.includes('wish list')) { // Send initial text - const initText = "I'll add that to your wish list. Just need your approval first!"; + const initText = + "I'll add that to your wish list. Just need your approval first!" 
for (let i = 0; i < initText.length; i++) { yield { - type: "content", + type: 'content', id: baseId, - model: "stub-llm", + model: 'stub-llm', timestamp, delta: initText[i], content: initText.substring(0, i + 1), - role: "assistant", - }; + role: 'assistant', + } } // Call addToWishList (needs approval) yield { - type: "tool_call", + type: 'tool_call', id: baseId, - model: "stub-llm", + model: 'stub-llm', timestamp, toolCall: { id: `call_${Date.now()}`, - type: "function", + type: 'function', function: { - name: "addToWishList", - arguments: JSON.stringify({ guitarId: "6" }), + name: 'addToWishList', + arguments: JSON.stringify({ guitarId: '6' }), }, }, index: 0, - }; + } yield { - type: "done", + type: 'done', id: baseId, - model: "stub-llm", + model: 'stub-llm', timestamp, - finishReason: "tool_calls", - }; - return; + finishReason: 'tool_calls', + } + return } - if (userMessage.includes("cart")) { + if (userMessage.includes('cart')) { // Send initial text - const initText = "Ready to add to your cart! I'll need your approval to proceed."; + const initText = + "Ready to add to your cart! I'll need your approval to proceed." 
for (let i = 0; i < initText.length; i++) { yield { - type: "content", + type: 'content', id: baseId, - model: "stub-llm", + model: 'stub-llm', timestamp, delta: initText[i], content: initText.substring(0, i + 1), - role: "assistant", - }; + role: 'assistant', + } } // Call addToCart (needs approval) yield { - type: "tool_call", + type: 'tool_call', id: baseId, - model: "stub-llm", + model: 'stub-llm', timestamp, toolCall: { id: `call_${Date.now()}`, - type: "function", + type: 'function', function: { - name: "addToCart", - arguments: JSON.stringify({ guitarId: "6", quantity: 1 }), + name: 'addToCart', + arguments: JSON.stringify({ guitarId: '6', quantity: 1 }), }, }, index: 0, - }; + } yield { - type: "done", + type: 'done', id: baseId, - model: "stub-llm", + model: 'stub-llm', timestamp, - finishReason: "tool_calls", - }; - return; + finishReason: 'tool_calls', + } + return } // Default response - const response = "I can help with guitar preferences, recommendations, wish lists, and cart!"; + const response = + 'I can help with guitar preferences, recommendations, wish lists, and cart!' 
for (const char of response) { - const accumulated = response.substring(0, response.indexOf(char) + 1); + const accumulated = response.substring(0, response.indexOf(char) + 1) yield { - type: "content", + type: 'content', id: baseId, - model: "stub-llm", + model: 'stub-llm', timestamp, delta: char, content: accumulated, - role: "assistant", - }; + role: 'assistant', + } } yield { - type: "done", + type: 'done', id: baseId, - model: "stub-llm", + model: 'stub-llm', timestamp, - finishReason: "stop", - }; + finishReason: 'stop', + } } - diff --git a/examples/ts-chat/src/routes/__root.tsx b/examples/ts-chat/src/routes/__root.tsx index c1146dad6..f7ceb4813 100644 --- a/examples/ts-chat/src/routes/__root.tsx +++ b/examples/ts-chat/src/routes/__root.tsx @@ -1,33 +1,34 @@ -import { HeadContent, Scripts, createRootRoute } from "@tanstack/react-router"; -import { TanStackRouterDevtoolsPanel } from "@tanstack/react-router-devtools"; -import { TanStackDevtools } from "@tanstack/react-devtools"; -import Header from "../components/Header"; -import appCss from "../styles.css?url"; +import { HeadContent, Scripts, createRootRoute } from '@tanstack/react-router' +import { TanStackRouterDevtoolsPanel } from '@tanstack/react-router-devtools' +import { TanStackDevtools } from '@tanstack/react-devtools' +import { aiDevtoolsPlugin } from '@tanstack/react-ai-devtools' +import Header from '../components/Header' +import appCss from '../styles.css?url' export const Route = createRootRoute({ head: () => ({ meta: [ { - charSet: "utf-8", + charSet: 'utf-8', }, { - name: "viewport", - content: "width=device-width, initial-scale=1", + name: 'viewport', + content: 'width=device-width, initial-scale=1', }, { - title: "TanStack Start Starter", + title: 'TanStack Start Starter', }, ], links: [ { - rel: "stylesheet", + rel: 'stylesheet', href: appCss, }, ], }), shellComponent: RootDocument, -}); +}) function RootDocument({ children }: { children: React.ReactNode }) { return ( @@ -40,13 +41,14 @@ 
function RootDocument({ children }: { children: React.ReactNode }) { {children} , }, + aiDevtoolsPlugin(), ]} eventBusConfig={{ connectToServerBus: true, @@ -55,5 +57,5 @@ function RootDocument({ children }: { children: React.ReactNode }) { - ); + ) } diff --git a/examples/ts-chat/src/routes/api.tanchat.ts b/examples/ts-chat/src/routes/api.tanchat.ts index 1ddc1d097..fa6adaf91 100644 --- a/examples/ts-chat/src/routes/api.tanchat.ts +++ b/examples/ts-chat/src/routes/api.tanchat.ts @@ -1,10 +1,10 @@ -import { createFileRoute } from "@tanstack/react-router"; -import { chat, toStreamResponse, maxIterations } from "@tanstack/ai"; -import { openai } from "@tanstack/ai-openai"; +import { createFileRoute } from '@tanstack/react-router' +import { chat, maxIterations, toStreamResponse } from '@tanstack/ai' +import { openai } from '@tanstack/ai-openai' // import { ollama } from "@tanstack/ai-ollama"; // import { anthropic } from "@tanstack/ai-anthropic"; // import { gemini } from "@tanstack/ai-gemini"; -import { allTools } from "@/lib/guitar-tools"; +import { allTools } from '@/lib/guitar-tools' const SYSTEM_PROMPT = `You are a helpful assistant for a guitar store. 
@@ -27,23 +27,23 @@ User: "I want an acoustic guitar" Step 1: Call getGuitars() Step 2: Call recommendGuitar(id: "6") Step 3: Done - do NOT add any text after calling recommendGuitar -`; +` -export const Route = createFileRoute("/api/tanchat")({ +export const Route = createFileRoute('/api/tanchat')({ server: { handlers: { POST: async ({ request }) => { // Capture request signal before reading body (it may be aborted after body is consumed) - const requestSignal = request.signal; + const requestSignal = request.signal // If request is already aborted, return early - if (requestSignal?.aborted) { - return new Response(null, { status: 499 }); // 499 = Client Closed Request + if (requestSignal.aborted) { + return new Response(null, { status: 499 }) // 499 = Client Closed Request } - const abortController = new AbortController(); + const abortController = new AbortController() - const { messages } = await request.json(); + const { messages } = await request.json() try { // Use the stream abort signal for proper cancellation handling const stream = chat({ @@ -52,7 +52,7 @@ export const Route = createFileRoute("/api/tanchat")({ // - OpenAI: "gpt-5", "o3", "o3-pro", "o3-mini" (with reasoning option) // - Anthropic: "claude-sonnet-4-5-20250929", "claude-opus-4-5-20251101" (with thinking option) // - Gemini: "gemini-3-pro-preview", "gemini-2.5-pro" (with thinkingConfig option) - model: "gpt-5", + model: 'gpt-5', // model: "claude-sonnet-4-5-20250929", // model: "smollm", // model: "gemini-2.5-flash", @@ -72,11 +72,11 @@ export const Route = createFileRoute("/api/tanchat")({ }, */ }, abortController, - }); + }) - return toStreamResponse(stream, { abortController }); + return toStreamResponse(stream, { abortController }) } catch (error: any) { - console.error("[API Route] Error in chat request:", { + console.error('[API Route] Error in chat request:', { message: error?.message, name: error?.name, status: error?.status, @@ -85,22 +85,22 @@ export const Route = 
createFileRoute("/api/tanchat")({ type: error?.type, stack: error?.stack, error: error, - }); + }) // If request was aborted, return early (don't send error response) - if (error.name === "AbortError" || abortController.signal.aborted) { - return new Response(null, { status: 499 }); // 499 = Client Closed Request + if (error.name === 'AbortError' || abortController.signal.aborted) { + return new Response(null, { status: 499 }) // 499 = Client Closed Request } return new Response( JSON.stringify({ - error: error.message || "An error occurred", + error: error.message || 'An error occurred', }), { status: 500, - headers: { "Content-Type": "application/json" }, - } - ); + headers: { 'Content-Type': 'application/json' }, + }, + ) } }, }, }, -}); +}) diff --git a/examples/ts-chat/src/routes/api.test-chat.ts b/examples/ts-chat/src/routes/api.test-chat.ts index 78ba47385..5ee771966 100644 --- a/examples/ts-chat/src/routes/api.test-chat.ts +++ b/examples/ts-chat/src/routes/api.test-chat.ts @@ -1,19 +1,19 @@ -import { createFileRoute } from "@tanstack/react-router"; -import { chat, toStreamResponse, maxIterations } from "@tanstack/ai"; -import { stubAdapter } from "@/lib/stub-adapter"; -import { allTools } from "@/lib/guitar-tools"; +import { createFileRoute } from '@tanstack/react-router' +import { chat, toStreamResponse, maxIterations } from '@tanstack/ai' +import { stubAdapter } from '@/lib/stub-adapter' +import { allTools } from '@/lib/guitar-tools' -export const Route = createFileRoute("/api/test-chat")({ +export const Route = createFileRoute('/api/test-chat')({ server: { handlers: { POST: async ({ request }) => { - const { messages } = await request.json(); + const { messages } = await request.json() try { const stream = chat({ adapter: stubAdapter(), messages, - model: "gpt-4.1-nano", // Doesn't matter for stub + model: 'gpt-4.1-nano', // Doesn't matter for stub tools: allTools, systemPrompts: [], options: { @@ -27,21 +27,21 @@ export const Route = 
createFileRoute("/api/test-chat")({ }, agentLoopStrategy: maxIterations(20), providerOptions: {}, - }); + }) - return toStreamResponse(stream); + return toStreamResponse(stream) } catch (error: any) { return new Response( JSON.stringify({ - error: error.message || "An error occurred", + error: error.message || 'An error occurred', }), { status: 500, - headers: { "Content-Type": "application/json" }, - } - ); + headers: { 'Content-Type': 'application/json' }, + }, + ) } }, }, }, -}); +}) diff --git a/examples/ts-chat/src/routes/demo.tsx b/examples/ts-chat/src/routes/demo.tsx index 630c86f34..2bc79366e 100644 --- a/examples/ts-chat/src/routes/demo.tsx +++ b/examples/ts-chat/src/routes/demo.tsx @@ -1,48 +1,48 @@ -import { createFileRoute } from "@tanstack/react-router"; -import { useCallback } from "react"; +import { createFileRoute } from '@tanstack/react-router' +import { useCallback } from 'react' import { Chat, ChatMessages, ChatMessage, ChatInput, TextPart, -} from "@tanstack/ai-react-ui"; -import { fetchServerSentEvents } from "@tanstack/ai-client"; +} from '@tanstack/ai-react-ui' +import { fetchServerSentEvents } from '@tanstack/ai-client' -import GuitarRecommendation from "@/components/example-GuitarRecommendation"; +import GuitarRecommendation from '@/components/example-GuitarRecommendation' -export const Route = createFileRoute("/demo")({ +export const Route = createFileRoute('/demo')({ component: DemoPage, -}); +}) function DemoPage() { const handleToolCall = useCallback( async ({ toolName, input }: { toolName: string; input: any }) => { switch (toolName) { - case "getPersonalGuitarPreference": - return { preference: "acoustic" }; - case "recommendGuitar": - return { id: input.id }; - case "addToWishList": - const wishList = JSON.parse(localStorage.getItem("wishList") || "[]"); - wishList.push(input.guitarId); - localStorage.setItem("wishList", JSON.stringify(wishList)); + case 'getPersonalGuitarPreference': + return { preference: 'acoustic' } + case 
'recommendGuitar': + return { id: input.id } + case 'addToWishList': + const wishList = JSON.parse(localStorage.getItem('wishList') || '[]') + wishList.push(input.guitarId) + localStorage.setItem('wishList', JSON.stringify(wishList)) return { success: true, guitarId: input.guitarId, totalItems: wishList.length, - }; + } default: - throw new Error(`Unknown tool: ${toolName}`); + throw new Error(`Unknown tool: ${toolName}`) } }, - [] - ); + [], + ) return (
@@ -65,10 +65,10 @@ function DemoPage() { toolsRenderer={{ recommendGuitar: ({ arguments: args }) => { try { - const parsed = JSON.parse(args); - return ; + const parsed = JSON.parse(args) + return } catch { - return null; + return null } }, }} @@ -83,5 +83,5 @@ function DemoPage() { />
- ); + ) } diff --git a/examples/ts-chat/src/routes/example.guitars/index.tsx b/examples/ts-chat/src/routes/example.guitars/index.tsx index b68ed3f51..8b6c87c7a 100644 --- a/examples/ts-chat/src/routes/example.guitars/index.tsx +++ b/examples/ts-chat/src/routes/example.guitars/index.tsx @@ -1,9 +1,9 @@ -import { Link, createFileRoute } from "@tanstack/react-router"; -import guitars from "../../data/example-guitars"; +import { Link, createFileRoute } from '@tanstack/react-router' +import guitars from '../../data/example-guitars' -export const Route = createFileRoute("/example/guitars/")({ +export const Route = createFileRoute('/example/guitars/')({ component: GuitarsIndex, -}); +}) function GuitarsIndex() { return ( @@ -50,5 +50,5 @@ function GuitarsIndex() { ))} - ); + ) } diff --git a/examples/ts-chat/src/routes/index.tsx b/examples/ts-chat/src/routes/index.tsx index 5bd4dfe84..d8c13e74b 100644 --- a/examples/ts-chat/src/routes/index.tsx +++ b/examples/ts-chat/src/routes/index.tsx @@ -1,49 +1,49 @@ -import { useEffect, useRef, useState } from "react"; -import { createFileRoute } from "@tanstack/react-router"; -import { Send, Square } from "lucide-react"; -import ReactMarkdown from "react-markdown"; -import rehypeRaw from "rehype-raw"; -import rehypeSanitize from "rehype-sanitize"; -import rehypeHighlight from "rehype-highlight"; -import remarkGfm from "remark-gfm"; +import { useEffect, useRef, useState } from 'react' +import { createFileRoute } from '@tanstack/react-router' +import { Send, Square } from 'lucide-react' +import ReactMarkdown from 'react-markdown' +import rehypeRaw from 'rehype-raw' +import rehypeSanitize from 'rehype-sanitize' +import rehypeHighlight from 'rehype-highlight' +import remarkGfm from 'remark-gfm' import { useChat, fetchServerSentEvents, type UIMessage, -} from "@tanstack/ai-react"; -import { ThinkingPart } from "@tanstack/ai-react-ui"; +} from '@tanstack/ai-react' +import { ThinkingPart } from '@tanstack/ai-react-ui' -import 
GuitarRecommendation from "@/components/example-GuitarRecommendation"; +import GuitarRecommendation from '@/components/example-GuitarRecommendation' function ChatInputArea({ children }: { children: React.ReactNode }) { return (
{children}
- ); + ) } function Messages({ messages, addToolApprovalResponse, }: { - messages: Array; + messages: Array addToolApprovalResponse: (response: { - id: string; - approved: boolean; - }) => Promise; + id: string + approved: boolean + }) => Promise }) { - const messagesContainerRef = useRef(null); + const messagesContainerRef = useRef(null) useEffect(() => { if (messagesContainerRef.current) { messagesContainerRef.current.scrollTop = - messagesContainerRef.current.scrollHeight; + messagesContainerRef.current.scrollHeight } - }, [messages]); + }, [messages]) if (!messages.length) { - return null; + return null } return ( @@ -56,13 +56,13 @@ function Messages({
- {role === "assistant" ? ( + {role === 'assistant' ? (
AI
@@ -75,11 +75,11 @@ function Messages({ {/* Render parts in order */} {parts.map((part, index) => { // Thinking part - if (part.type === "thinking") { + if (part.type === 'thinking') { // Check if thinking is complete (if there's a text part after) const isComplete = parts .slice(index + 1) - .some((p) => p.type === "text"); + .some((p) => p.type === 'text') return (
- ); + ) } - if (part.type === "text" && part.content) { + if (part.type === 'text' && part.content) { return (
- ); + ) } // Approval UI if ( - part.type === "tool-call" && - part.state === "approval-requested" && + part.type === 'tool-call' && + part.state === 'approval-requested' && part.approval ) { return ( @@ -130,7 +130,7 @@ function Messages({ {JSON.stringify( JSON.parse(part.arguments), null, - 2 + 2, )}
@@ -159,13 +159,13 @@ function Messages({
- ); + ) } // Guitar recommendation card if ( - part.type === "tool-call" && - part.name === "recommendGuitar" && + part.type === 'tool-call' && + part.name === 'recommendGuitar' && part.output ) { try { @@ -173,21 +173,21 @@ function Messages({
- ); + ) } catch { - return null; + return null } } - return null; + return null })} - ); + ) })} - ); + ) } function DebugPanel({ @@ -195,17 +195,17 @@ function DebugPanel({ chunks, onClearChunks, }: { - messages: Array; - chunks: any[]; - onClearChunks: () => void; + messages: Array + chunks: any[] + onClearChunks: () => void }) { - const [activeTab, setActiveTab] = useState<"messages" | "chunks">("messages"); + const [activeTab, setActiveTab] = useState<'messages' | 'chunks'>('messages') const exportToTypeScript = () => { - const tsCode = `const rawChunks = ${JSON.stringify(chunks, null, 2)};`; - navigator.clipboard.writeText(tsCode); - alert("TypeScript code copied to clipboard!"); - }; + const tsCode = `const rawChunks = ${JSON.stringify(chunks, null, 2)};` + navigator.clipboard.writeText(tsCode) + alert('TypeScript code copied to clipboard!') + } return (
@@ -218,21 +218,21 @@ function DebugPanel({ {/* Tabs */}
- {activeTab === "messages" && ( + {activeTab === 'messages' && (
               {JSON.stringify(messages, null, 2)}
@@ -249,7 +249,7 @@ function DebugPanel({
           
)} - {activeTab === "chunks" && ( + {activeTab === 'chunks' && (
- ); + ) } function ChatPage() { - const [chunks, setChunks] = useState([]); + const [chunks, setChunks] = useState([]) const { messages, sendMessage, isLoading, addToolApprovalResponse, stop } = useChat({ - connection: fetchServerSentEvents("/api/tanchat"), + connection: fetchServerSentEvents('/api/tanchat'), onChunk: (chunk: any) => { - setChunks((prev) => [...prev, chunk]); + setChunks((prev) => [...prev, chunk]) }, onToolCall: async ({ toolName, input }) => { // Handle client-side tool execution switch (toolName) { - case "getPersonalGuitarPreference": + case 'getPersonalGuitarPreference': // Pure client tool - executes immediately - return { preference: "acoustic" }; + return { preference: 'acoustic' } - case "recommendGuitar": + case 'recommendGuitar': // Client tool for UI display - return { id: input.id }; + return { id: input.id } - case "addToWishList": + case 'addToWishList': // Hybrid: client execution AFTER approval // Only runs after user approves const wishList = JSON.parse( - localStorage.getItem("wishList") || "[]" - ); - wishList.push(input.guitarId); - localStorage.setItem("wishList", JSON.stringify(wishList)); + localStorage.getItem('wishList') || '[]', + ) + wishList.push(input.guitarId) + localStorage.setItem('wishList', JSON.stringify(wishList)) return { success: true, guitarId: input.guitarId, totalItems: wishList.length, - }; + } default: - throw new Error(`Unknown client tool: ${toolName}`); + throw new Error(`Unknown client tool: ${toolName}`) } }, - }); - const [input, setInput] = useState(""); + }) + const [input, setInput] = useState('') - const clearChunks = () => setChunks([]); + const clearChunks = () => setChunks([]) return (
@@ -411,27 +411,27 @@ function ChatPage() { placeholder="Type something clever (or don't, we won't judge)..." className="w-full rounded-lg border border-orange-500/20 bg-gray-800/50 pl-4 pr-12 py-3 text-sm text-white placeholder-gray-400 focus:outline-none focus:ring-2 focus:ring-orange-500/50 focus:border-transparent resize-none overflow-hidden shadow-lg" rows={1} - style={{ minHeight: "44px", maxHeight: "200px" }} + style={{ minHeight: '44px', maxHeight: '200px' }} disabled={isLoading} onInput={(e) => { - const target = e.target as HTMLTextAreaElement; - target.style.height = "auto"; + const target = e.target as HTMLTextAreaElement + target.style.height = 'auto' target.style.height = - Math.min(target.scrollHeight, 200) + "px"; + Math.min(target.scrollHeight, 200) + 'px' }} onKeyDown={(e) => { - if (e.key === "Enter" && !e.shiftKey && input.trim()) { - e.preventDefault(); - sendMessage(input); - setInput(""); + if (e.key === 'Enter' && !e.shiftKey && input.trim()) { + e.preventDefault() + sendMessage(input) + setInput('') } }} />
- ); + ) } -export const Route = createFileRoute("/")({ +export const Route = createFileRoute('/')({ component: ChatPage, -}); +}) diff --git a/examples/ts-chat/src/routes/tanchat.css b/examples/ts-chat/src/routes/tanchat.css index 49f1c69ae..a74144fea 100644 --- a/examples/ts-chat/src/routes/tanchat.css +++ b/examples/ts-chat/src/routes/tanchat.css @@ -1,5 +1,5 @@ -@import "tailwindcss"; -@import "highlight.js/styles/github-dark.css"; +@import 'tailwindcss'; +@import 'highlight.js/styles/github-dark.css'; /* Custom scrollbar styles */ ::-webkit-scrollbar { @@ -58,13 +58,17 @@ html { color: inherit; } -.prose h1, .prose h2, .prose h3, .prose h4 { +.prose h1, +.prose h2, +.prose h3, +.prose h4 { color: #f9fafb; /* text-gray-50 */ /* margin-top: 2em; */ /* margin-bottom: 1em; */ } -.prose ul, .prose ol { +.prose ul, +.prose ol { margin-top: 1.25em; margin-bottom: 1.25em; padding-left: 1.625em; @@ -105,7 +109,8 @@ html { margin: 1.25em 0; } -.prose th, .prose td { +.prose th, +.prose td { padding: 0.75em; border: 1px solid rgba(249, 115, 22, 0.2); } @@ -124,7 +129,9 @@ html { .message-enter-active { opacity: 1; transform: translateY(0); - transition: opacity 300ms, transform 300ms; + transition: + opacity 300ms, + transform 300ms; } .message-exit { @@ -217,4 +224,4 @@ html { background-color: transparent; padding: 0; border-radius: 0; -} \ No newline at end of file +} diff --git a/examples/ts-chat/src/styles.css b/examples/ts-chat/src/styles.css index 89be6093a..06f1bca4b 100644 --- a/examples/ts-chat/src/styles.css +++ b/examples/ts-chat/src/styles.css @@ -1,15 +1,15 @@ -@import "tailwindcss"; +@import 'tailwindcss'; body { @apply m-0; - font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", "Roboto", "Oxygen", - "Ubuntu", "Cantarell", "Fira Sans", "Droid Sans", "Helvetica Neue", - sans-serif; + font-family: + -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen', 'Ubuntu', + 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue', sans-serif; 
-webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; } code { - font-family: source-code-pro, Menlo, Monaco, Consolas, "Courier New", - monospace; + font-family: + source-code-pro, Menlo, Monaco, Consolas, 'Courier New', monospace; } diff --git a/examples/ts-chat/vite.config.ts b/examples/ts-chat/vite.config.ts index 8a53245c0..809a10692 100644 --- a/examples/ts-chat/vite.config.ts +++ b/examples/ts-chat/vite.config.ts @@ -1,20 +1,23 @@ -import { defineConfig } from "vite"; -import { tanstackStart } from "@tanstack/react-start/plugin/vite"; -import viteReact from "@vitejs/plugin-react"; -import viteTsConfigPaths from "vite-tsconfig-paths"; -import tailwindcss from "@tailwindcss/vite"; -import { nitroV2Plugin } from "@tanstack/nitro-v2-vite-plugin"; +import { defineConfig } from 'vite' +import { tanstackStart } from '@tanstack/react-start/plugin/vite' +import viteReact from '@vitejs/plugin-react' +import viteTsConfigPaths from 'vite-tsconfig-paths' +import tailwindcss from '@tailwindcss/vite' +import { nitroV2Plugin } from '@tanstack/nitro-v2-vite-plugin' +import { devtools } from '@tanstack/devtools-vite' + const config = defineConfig({ plugins: [ + devtools(), nitroV2Plugin(), // this is the plugin that enables path aliases viteTsConfigPaths({ - projects: ["./tsconfig.json"], + projects: ['./tsconfig.json'], }), tailwindcss(), tanstackStart(), viteReact(), ], -}); +}) -export default config; +export default config diff --git a/examples/vanilla-chat/README.md b/examples/vanilla-chat/README.md index 5fd765c3f..11b865a92 100644 --- a/examples/vanilla-chat/README.md +++ b/examples/vanilla-chat/README.md @@ -63,7 +63,7 @@ vanilla-chat/ The app uses `ChatClient` from `@tanstack/ai-client` with the `fetchServerSentEvents` connection adapter to connect to the FastAPI server: ```javascript -import { ChatClient, fetchServerSentEvents } from '@tanstack/ai-client'; +import { ChatClient, fetchServerSentEvents } from '@tanstack/ai-client' const client = new 
ChatClient({ connection: fetchServerSentEvents('http://localhost:8080/chat'), @@ -73,9 +73,7 @@ const client = new ChatClient({ onLoadingChange: (isLoading) => { // Update loading state }, -}); +}) ``` The FastAPI server streams responses in Server-Sent Events (SSE) format, which the client automatically parses and displays. - - diff --git a/examples/vanilla-chat/index.html b/examples/vanilla-chat/index.html index 488e45519..2f65e7f8d 100644 --- a/examples/vanilla-chat/index.html +++ b/examples/vanilla-chat/index.html @@ -1,40 +1,38 @@ - + - - - - TanStack AI - Vanilla Chat - - - -
-
-

šŸ’¬ TanStack AI Chat

-

Vanilla JavaScript + FastAPI Example

-
+ + + + TanStack AI - Vanilla Chat + + + +
+
+

šŸ’¬ TanStack AI Chat

+

Vanilla JavaScript + FastAPI Example

+
-
-
- -
-
- - -
-
+
+
+ +
+
+ + +
+
- + +
-
- - + + - - diff --git a/examples/vanilla-chat/package.json b/examples/vanilla-chat/package.json index 650840479..11e24475b 100644 --- a/examples/vanilla-chat/package.json +++ b/examples/vanilla-chat/package.json @@ -12,6 +12,6 @@ "@tanstack/ai-client": "workspace:*" }, "devDependencies": { - "vite": "^7.1.7" + "vite": "^7.2.4" } } diff --git a/examples/vanilla-chat/src/main.js b/examples/vanilla-chat/src/main.js index df630c753..bdf9a4008 100644 --- a/examples/vanilla-chat/src/main.js +++ b/examples/vanilla-chat/src/main.js @@ -1,79 +1,79 @@ -import { ChatClient, fetchServerSentEvents } from "@tanstack/ai-client"; +import { ChatClient, fetchServerSentEvents } from '@tanstack/ai-client' // Initialize ChatClient const client = new ChatClient({ - connection: fetchServerSentEvents("http://localhost:8000/chat"), + connection: fetchServerSentEvents('http://localhost:8000/chat'), onMessagesChange: (messages) => { - renderMessages(messages); + renderMessages(messages) }, onLoadingChange: (isLoading) => { - updateLoadingState(isLoading); + updateLoadingState(isLoading) }, onErrorChange: (error) => { - showError(error); + showError(error) }, -}); +}) // DOM elements -const messagesContainer = document.getElementById("messages"); -const chatForm = document.getElementById("chat-form"); -const messageInput = document.getElementById("message-input"); -const sendButton = document.getElementById("send-button"); -const errorDiv = document.getElementById("error"); +const messagesContainer = document.getElementById('messages') +const chatForm = document.getElementById('chat-form') +const messageInput = document.getElementById('message-input') +const sendButton = document.getElementById('send-button') +const errorDiv = document.getElementById('error') // Auto-resize textarea -messageInput.addEventListener("input", () => { - messageInput.style.height = "auto"; - messageInput.style.height = messageInput.scrollHeight + "px"; -}); +messageInput.addEventListener('input', () => { + 
messageInput.style.height = 'auto' + messageInput.style.height = messageInput.scrollHeight + 'px' +}) // Handle form submission -chatForm.addEventListener("submit", async (e) => { - e.preventDefault(); +chatForm.addEventListener('submit', async (e) => { + e.preventDefault() - const message = messageInput.value.trim(); - if (!message || client.getIsLoading()) return; + const message = messageInput.value.trim() + if (!message || client.getIsLoading()) return // Clear input - messageInput.value = ""; - messageInput.style.height = "auto"; + messageInput.value = '' + messageInput.style.height = 'auto' // Focus back on input - messageInput.focus(); + messageInput.focus() try { - await client.sendMessage(message); + await client.sendMessage(message) } catch (error) { - console.error("Error sending message:", error); - showError(error); + console.error('Error sending message:', error) + showError(error) } -}); +}) // Allow Enter to send (Shift+Enter for new line) -messageInput.addEventListener("keydown", (e) => { - if (e.key === "Enter" && !e.shiftKey) { - e.preventDefault(); - chatForm.dispatchEvent(new Event("submit")); +messageInput.addEventListener('keydown', (e) => { + if (e.key === 'Enter' && !e.shiftKey) { + e.preventDefault() + chatForm.dispatchEvent(new Event('submit')) } -}); +}) // Render messages function renderMessages(messages) { - if (!messagesContainer) return; + if (!messagesContainer) return - messagesContainer.innerHTML = ""; + messagesContainer.innerHTML = '' messages.forEach((message) => { - const messageDiv = document.createElement("div"); - messageDiv.className = `message ${message.role}`; + const messageDiv = document.createElement('div') + messageDiv.className = `message ${message.role}` - if (message.role === "user") { + if (message.role === 'user') { messageDiv.innerHTML = ` -
${escapeHtml(message.content || "")}
- `; - } else if (message.role === "assistant") { - const content = message.content || ""; - const toolCalls = message.toolCalls || []; +
${escapeHtml(message.content || '')}
+ ` + } else if (message.role === 'assistant') { + const content = message.content || '' + const toolCalls = message.toolCalls || [] messageDiv.innerHTML = `
${escapeHtml(content)}
@@ -87,74 +87,74 @@ function renderMessages(messages) {
${escapeHtml(tc.function.name)}
${escapeHtml(
-                  tc.function.arguments
+                  tc.function.arguments,
                 )}
- ` + `, ) - .join("")} + .join('')}
` - : "" + : '' } - `; + ` } - messagesContainer.appendChild(messageDiv); - }); + messagesContainer.appendChild(messageDiv) + }) // Scroll to bottom - messagesContainer.scrollTop = messagesContainer.scrollHeight; + messagesContainer.scrollTop = messagesContainer.scrollHeight } // Update loading state function updateLoadingState(isLoading) { if (sendButton) { - sendButton.disabled = isLoading; - sendButton.textContent = isLoading ? "Sending..." : "Send"; + sendButton.disabled = isLoading + sendButton.textContent = isLoading ? 'Sending...' : 'Send' } if (messageInput) { - messageInput.disabled = isLoading; + messageInput.disabled = isLoading } // Show typing indicator if (isLoading && messagesContainer) { - const typingIndicator = document.createElement("div"); - typingIndicator.className = "message assistant typing"; - typingIndicator.id = "typing-indicator"; - typingIndicator.innerHTML = '
...
'; - messagesContainer.appendChild(typingIndicator); - messagesContainer.scrollTop = messagesContainer.scrollHeight; + const typingIndicator = document.createElement('div') + typingIndicator.className = 'message assistant typing' + typingIndicator.id = 'typing-indicator' + typingIndicator.innerHTML = '
...
' + messagesContainer.appendChild(typingIndicator) + messagesContainer.scrollTop = messagesContainer.scrollHeight } else { - const indicator = document.getElementById("typing-indicator"); + const indicator = document.getElementById('typing-indicator') if (indicator) { - indicator.remove(); + indicator.remove() } } } // Show error function showError(error) { - if (!errorDiv) return; + if (!errorDiv) return if (error) { - errorDiv.textContent = error.message || "An error occurred"; - errorDiv.style.display = "block"; + errorDiv.textContent = error.message || 'An error occurred' + errorDiv.style.display = 'block' } else { - errorDiv.style.display = "none"; + errorDiv.style.display = 'none' } } // Escape HTML to prevent XSS function escapeHtml(text) { - const div = document.createElement("div"); - div.textContent = text; - return div.innerHTML; + const div = document.createElement('div') + div.textContent = text + return div.innerHTML } // Initialize - render any existing messages -renderMessages(client.getMessages()); +renderMessages(client.getMessages()) // Focus input on load -messageInput?.focus(); +messageInput?.focus() diff --git a/examples/vanilla-chat/src/style.css b/examples/vanilla-chat/src/style.css index 553092752..32d356c9a 100644 --- a/examples/vanilla-chat/src/style.css +++ b/examples/vanilla-chat/src/style.css @@ -5,7 +5,9 @@ } body { - font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, sans-serif; + font-family: + -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, + Cantarell, sans-serif; background: linear-gradient(135deg, #667eea 0%, #764ba2 100%); min-height: 100vh; display: flex; @@ -109,7 +111,8 @@ header p { } @keyframes pulse { - 0%, 100% { + 0%, + 100% { opacity: 1; } 50% { @@ -190,7 +193,9 @@ header p { font-size: 15px; font-weight: 600; cursor: pointer; - transition: background 0.2s, transform 0.1s; + transition: + background 0.2s, + transform 0.1s; min-width: 80px; } @@ -259,5 
+264,3 @@ header p { padding: 16px; } } - - diff --git a/examples/vanilla-chat/vite.config.ts b/examples/vanilla-chat/vite.config.ts index 462005163..12a9d0843 100644 --- a/examples/vanilla-chat/vite.config.ts +++ b/examples/vanilla-chat/vite.config.ts @@ -9,8 +9,6 @@ export default defineConfig({ // target: 'http://localhost:8080', // changeOrigin: true, // } - } - } + }, + }, }) - - diff --git a/knip.json b/knip.json new file mode 100644 index 000000000..2a761388c --- /dev/null +++ b/knip.json @@ -0,0 +1,10 @@ +{ + "$schema": "https://unpkg.com/knip@5/schema.json", + "ignoreDependencies": ["@faker-js/faker"], + "ignoreWorkspaces": ["examples/**"], + "workspaces": { + "packages/react-ai": { + "ignore": [] + } + } +} diff --git a/nx.json b/nx.json new file mode 100644 index 000000000..adfddd411 --- /dev/null +++ b/nx.json @@ -0,0 +1,63 @@ +{ + "$schema": "./node_modules/nx/schemas/nx-schema.json", + "tui": { + "enabled": false + }, + "defaultBase": "main", + "useInferencePlugins": false, + "parallel": 5, + "namedInputs": { + "sharedGlobals": [ + "{workspaceRoot}/.nvmrc", + "{workspaceRoot}/package.json", + "{workspaceRoot}/tsconfig.json" + ], + "default": [ + "sharedGlobals", + "{projectRoot}/**/*", + "!{projectRoot}/**/*.md" + ], + "production": [ + "default", + "!{projectRoot}/tests/**/*", + "!{projectRoot}/eslint.config.js" + ] + }, + "targetDefaults": { + "test:lib": { + "cache": true, + "dependsOn": ["^build"], + "inputs": ["default", "^production"], + "outputs": ["{projectRoot}/coverage"] + }, + "test:eslint": { + "cache": true, + "dependsOn": ["^build"], + "inputs": ["default", "^production", "{workspaceRoot}/eslint.config.js"] + }, + "test:types": { + "cache": true, + "dependsOn": ["^build"], + "inputs": ["default", "^production"] + }, + "test:build": { + "cache": true, + "dependsOn": ["build"], + "inputs": ["production"] + }, + "build": { + "cache": true, + "dependsOn": ["^build"], + "inputs": ["production", "^production"], + "outputs": 
["{projectRoot}/build", "{projectRoot}/dist"] + }, + "test:knip": { + "cache": true, + "inputs": ["{workspaceRoot}/**/*"] + }, + "test:sherif": { + "cache": true, + "inputs": ["{workspaceRoot}/**/package.json"] + } + } +} diff --git a/package.json b/package.json index 823679346..862b2c3ab 100644 --- a/package.json +++ b/package.json @@ -8,21 +8,90 @@ "packageManager": "pnpm@10.17.0", "type": "module", "scripts": { - "build": "pnpm run -r build", - "dev": "pnpm run --filter '@tanstack/ai*' build && pnpm run -r --parallel dev", "dev:packages": "pnpm run --filter '@tanstack/ai*' dev", - "test": "pnpm run -r test", "typecheck": "pnpm run -r typecheck", "lint": "pnpm run -r lint", - "clean": "./scripts/clean.sh", - "cli": "cd examples/cli && pnpm dev", - "cli:build": "cd examples/cli && pnpm build && pnpm start", - "demo:tools": "cd examples/cli && pnpm tools" + "build": "nx affected --skip-nx-cache --targets=build --exclude=examples/** && size-limit", + "build:all": "nx run-many --targets=build --exclude=examples/** && size-limit", + "build:core": "nx run-many --targets=build --projects=packages/ai && size-limit", + "changeset": "changeset", + "changeset:publish": "changeset publish", + "changeset:version": "changeset version && pnpm install --no-frozen-lockfile && pnpm prettier:write", + "clean": "find . -name 'dist' -type d -prune -exec rm -rf {} +", + "clean:node_modules": "find . 
-name 'node_modules' -type d -prune -exec rm -rf {} +", + "clean:all": "pnpm run clean && pnpm run clean:node_modules", + "copy:readme": "cp README.md packages/typescript/ai/README.md && cp README.md packages/typescript/ai-devtools/README.md && cp README.md packages/typescript/ai-client/README.md && cp README.md packages/typescript/ai-gemini/README.md && cp README.md packages/typescript/ai-ollama/README.md && cp README.md packages/typescript/ai-openai/README.md && cp README.md packages/typescript/ai-react/README.md && cp README.md packages/typescript/ai-react-ui/README.md && cp README.md packages/typescript/react-ai-devtools/README.md && cp README.md packages/typescript/solid-ai-devtools/README.md && cp README.md packages/typescript/tests-adapters/README.md", + "dev": "pnpm run watch", + "docs:generate": "node scripts/generateDocs.js && pnpm run copy:readme", + "format": "pnpm run prettier:write", + "lint:fix": "nx affected --target=lint:fix --exclude=examples/**", + "lint:fix:all": "pnpm run format && nx run-many --targets=lint --fix", + "preinstall": "node -e \"if(process.env.CI == 'true') {console.log('Skipping preinstall...'); process.exit(1)}\" || npx -y only-allow pnpm", + "prettier": "prettier --ignore-unknown '**/*'", + "prettier:write": "pnpm run prettier --write", + "size": "size-limit", + "test": "pnpm run test:ci", + "test:build": "nx affected --target=test:build --exclude=examples/**", + "test:ci": "nx run-many --targets=test:format,test:eslint,test:sherif,test:knip,test:lib,test:types,test:build,test:verify-links", + "test:eslint": "nx affected --target=test:eslint --exclude=examples/**", + "test:format": "pnpm run prettier --check", + "test:knip": "knip --max-issues=33", + "test:lib": "nx affected --targets=test:lib --exclude=examples/**", + "test:lib:dev": "pnpm test:lib && nx watch --all -- pnpm test:lib", + "test:pr": "nx affected --targets=test:format,test:eslint,test:sherif,test:knip,test:lib,test:types,test:build,build", + "test:sherif": 
"sherif", + "test:types": "nx affected --targets=test:types --exclude=examples/**", + "test:verify-links": "node scripts/verify-links.ts", + "watch": "pnpm run build:all && nx watch --all -- pnpm run build:all" }, + "nx": { + "includedScripts": [ + "test:knip", + "test:sherif" + ] + }, + "size-limit": [ + { + "path": "packages/typescript/ai/dist/esm/index.js", + "limit": "10 KB" + } + ], "devDependencies": { - "@types/node": "^22.10.2", - "tsdown": "^0.15.9", - "typescript": "^5.7.2", - "vitest": "^4.0.13" + "@changesets/cli": "^2.29.7", + "@faker-js/faker": "^10.1.0", + "@size-limit/preset-small-lib": "^11.2.0", + "@svitejs/changesets-changelog-github-compact": "^1.2.0", + "@tanstack/config": "0.22.1", + "@types/node": "^24.10.1", + "eslint": "^9.39.1", + "eslint-plugin-unused-imports": "^4.3.0", + "fast-glob": "^3.3.3", + "happy-dom": "^20.0.10", + "knip": "^5.70.2", + "markdown-link-extractor": "^4.0.3", + "nx": "^22.1.2", + "premove": "^4.0.0", + "prettier": "^3.6.2", + "prettier-plugin-svelte": "^3.4.0", + "publint": "^0.3.15", + "sherif": "^1.9.0", + "size-limit": "^11.2.0", + "typescript": "5.9.3", + "vite": "^7.2.4", + "vitest": "^4.0.14" + }, + "overrides": { + "@tanstack/ai": "workspace:*", + "@tanstack/ai-anthropic": "workspace:*", + "@tanstack/ai-client": "workspace:*", + "@tanstack/ai-devtools": "workspace:*", + "@tanstack/ai-gemini": "workspace:*", + "@tanstack/ai-ollama": "workspace:*", + "@tanstack/ai-openai": "workspace:*", + "@tanstack/ai-react": "workspace:*", + "@tanstack/ai-react-ui": "workspace:*", + "@tanstack/react-ai-devtools": "workspace:*", + "@tanstack/solid-ai-devtools": "workspace:*", + "@tanstack/tests-adapters": "workspace:*" } } diff --git a/packages/php/tanstack-ai/README.md b/packages/php/tanstack-ai/README.md index 7dce5d72b..6081db88e 100644 --- a/packages/php/tanstack-ai/README.md +++ b/packages/php/tanstack-ai/README.md @@ -93,4 +93,3 @@ function generateStream($stream, $converter) { ## License MIT - diff --git 
a/packages/python/tanstack-ai/README.md b/packages/python/tanstack-ai/README.md index 35bd3d069..6f753597d 100644 --- a/packages/python/tanstack-ai/README.md +++ b/packages/python/tanstack-ai/README.md @@ -83,4 +83,3 @@ async def generate_stream(): ## License MIT - diff --git a/packages/typescript/ai-anthropic/package.json b/packages/typescript/ai-anthropic/package.json index d6fe9e897..d17289033 100644 --- a/packages/typescript/ai-anthropic/package.json +++ b/packages/typescript/ai-anthropic/package.json @@ -9,13 +9,20 @@ "url": "git+https://github.com/TanStack/ai.git", "directory": "packages/typescript/ai-anthropic" }, + "keywords": [ + "ai", + "anthropic", + "claude", + "tanstack", + "adapter" + ], "type": "module", - "module": "./dist/index.js", - "types": "./dist/index.d.ts", + "module": "./dist/esm/index.js", + "types": "./dist/esm/index.d.ts", "exports": { ".": { - "types": "./dist/index.d.ts", - "import": "./dist/index.js" + "types": "./dist/esm/index.d.ts", + "import": "./dist/esm/index.js" } }, "files": [ @@ -23,34 +30,23 @@ "src" ], "scripts": { - "build": "tsdown", - "dev": "tsdown --watch", - "test": "vitest run", - "test:watch": "vitest", - "test:coverage": "vitest run --coverage", - "clean": "rm -rf dist node_modules", - "typecheck": "tsc --noEmit", - "lint": "tsc --noEmit" + "build": "vite build", + "clean": "premove ./build ./dist", + "lint:fix": "eslint ./src --fix", + "test:build": "publint --strict", + "test:eslint": "eslint ./src", + "test:lib": "vitest", + "test:lib:dev": "pnpm test:lib --watch", + "test:types": "tsc" }, - "keywords": [ - "ai", - "anthropic", - "claude", - "tanstack", - "adapter" - ], "dependencies": { - "@anthropic-ai/sdk": "^0.70.0", + "@anthropic-ai/sdk": "^0.71.0", "@tanstack/ai": "workspace:*" }, "devDependencies": { - "@types/node": "^22.10.2", - "@vitest/coverage-v8": "4.0.13", - "tsdown": "^0.15.9", - "typescript": "^5.7.2", - "vitest": "^4.0.13" + "@vitest/coverage-v8": "4.0.14" }, "peerDependencies": { 
"@tanstack/ai": "workspace:*" } -} \ No newline at end of file +} diff --git a/packages/typescript/ai-anthropic/src/anthropic-adapter.ts b/packages/typescript/ai-anthropic/src/anthropic-adapter.ts index d6648194a..a84580f7e 100644 --- a/packages/typescript/ai-anthropic/src/anthropic-adapter.ts +++ b/packages/typescript/ai-anthropic/src/anthropic-adapter.ts @@ -1,97 +1,90 @@ -import Anthropic_SDK from "@anthropic-ai/sdk"; -import type { MessageParam } from "@anthropic-ai/sdk/resources/messages"; -import { - BaseAdapter, - type ChatStreamOptionsUnion, - type SummarizationOptions, - type SummarizationResult, - type EmbeddingOptions, - type EmbeddingResult, - type ModelMessage, - type StreamChunk, -} from "@tanstack/ai"; -import { - ANTHROPIC_EMBEDDING_MODELS, - ANTHROPIC_MODELS, - type AnthropicChatModelProviderOptionsByName, -} from "./model-meta"; -import { convertToolsToProviderFormat } from "./tools/tool-converter"; -import { +import Anthropic_SDK from '@anthropic-ai/sdk' +import { BaseAdapter } from '@tanstack/ai' +import { ANTHROPIC_MODELS } from './model-meta' +import { convertToolsToProviderFormat } from './tools/tool-converter' +import { validateTextProviderOptions } from './text/text-provider-options' +import type { + ChatStreamOptionsUnion, + EmbeddingOptions, + EmbeddingResult, + ModelMessage, + StreamChunk, + SummarizationOptions, + SummarizationResult, +} from '@tanstack/ai' +import type { AnthropicChatModelProviderOptionsByName } from './model-meta' +import type { ExternalTextProviderOptions, InternalTextProviderOptions, -} from "./text/text-provider-options"; +} from './text/text-provider-options' +import type { MessageParam } from '@anthropic-ai/sdk/resources/messages' export interface AnthropicConfig { - apiKey: string; + apiKey: string } /** * Anthropic-specific provider options * @see https://ai-sdk.dev/providers/ai-sdk-providers/anthropic */ -export type AnthropicProviderOptions = ExternalTextProviderOptions; - -type AnthropicContentBlocks = 
Extract< - MessageParam["content"], - Array -> extends Array - ? Block[] - : never; -type AnthropicContentBlock = AnthropicContentBlocks extends Array - ? Block - : never; - - -type AnthropicChatOptions = ChatStreamOptionsUnion, - AnthropicChatModelProviderOptionsByName ->> +export type AnthropicProviderOptions = ExternalTextProviderOptions + +type AnthropicContentBlocks = + Extract> extends Array + ? Array + : never +type AnthropicContentBlock = + AnthropicContentBlocks extends Array ? Block : never + +type AnthropicChatOptions = ChatStreamOptionsUnion< + BaseAdapter< + typeof ANTHROPIC_MODELS, + [], + AnthropicProviderOptions, + Record, + AnthropicChatModelProviderOptionsByName + > +> export class Anthropic extends BaseAdapter< typeof ANTHROPIC_MODELS, - typeof ANTHROPIC_EMBEDDING_MODELS, + [], AnthropicProviderOptions, Record, AnthropicChatModelProviderOptionsByName > { - name = "anthropic" as const; - models = ANTHROPIC_MODELS; - embeddingModels = ANTHROPIC_EMBEDDING_MODELS; - declare _modelProviderOptionsByName: AnthropicChatModelProviderOptionsByName; + name = 'anthropic' as const + models = ANTHROPIC_MODELS + + declare _modelProviderOptionsByName: AnthropicChatModelProviderOptionsByName - private client: Anthropic_SDK; + private client: Anthropic_SDK constructor(config: AnthropicConfig) { - super({}); + super({}) this.client = new Anthropic_SDK({ apiKey: config.apiKey, - }); + }) } - async *chatStream( - options: AnthropicChatOptions - ): AsyncIterable { + async *chatStream(options: AnthropicChatOptions): AsyncIterable { try { - // Map common options to Anthropic format using the centralized mapping function - const requestParams = this.mapCommonOptionsToAnthropic(options); + const requestParams = this.mapCommonOptionsToAnthropic(options) const stream = await this.client.beta.messages.create( { ...requestParams, stream: true }, { signal: options.request?.signal, headers: options.request?.headers, - } - ); + }, + ) yield* this.processAnthropicStream(stream, 
options.model, () => - this.generateId() - ); + this.generateId(), + ) } catch (error: any) { - console.error("[Anthropic Adapter] Error in chatStream:", { + console.error('[Anthropic Adapter] Error in chatStream:', { message: error?.message, status: error?.status, statusText: error?.statusText, @@ -99,37 +92,37 @@ export class Anthropic extends BaseAdapter< type: error?.type, error: error, stack: error?.stack, - }); + }) // Emit an error chunk yield { - type: "error", + type: 'error', id: this.generateId(), - model: options.model || "claude-3-sonnet-20240229", + model: options.model, timestamp: Date.now(), error: { - message: error?.message || "Unknown error occurred", + message: error?.message || 'Unknown error occurred', code: error?.code || error?.status, }, - }; + } } } async summarize(options: SummarizationOptions): Promise { - const systemPrompt = this.buildSummarizationPrompt(options); + const systemPrompt = this.buildSummarizationPrompt(options) const response = await this.client.messages.create({ model: options.model, - messages: [{ role: "user", content: options.text }], + messages: [{ role: 'user', content: options.text }], system: systemPrompt, max_tokens: options.maxLength || 500, temperature: 0.3, stream: false, - }); + }) const content = response.content - .map((c) => (c.type === "text" ? c.text : "")) - .join(""); + .map((c) => (c.type === 'text' ? c.text : '')) + .join('') return { id: response.id, @@ -140,43 +133,43 @@ export class Anthropic extends BaseAdapter< completionTokens: response.usage.output_tokens, totalTokens: response.usage.input_tokens + response.usage.output_tokens, }, - }; + } } async createEmbeddings(_options: EmbeddingOptions): Promise { // Note: Anthropic doesn't have a native embeddings API // You would need to use a different service or implement a workaround throw new Error( - "Embeddings are not natively supported by Anthropic. Consider using OpenAI or another provider for embeddings." 
- ); + 'Embeddings are not natively supported by Anthropic. Consider using OpenAI or another provider for embeddings.', + ) } private buildSummarizationPrompt(options: SummarizationOptions): string { - let prompt = "You are a professional summarizer. "; + let prompt = 'You are a professional summarizer. ' switch (options.style) { - case "bullet-points": - prompt += "Provide a summary in bullet point format. "; - break; - case "paragraph": - prompt += "Provide a summary in paragraph format. "; - break; - case "concise": - prompt += "Provide a very concise summary in 1-2 sentences. "; - break; + case 'bullet-points': + prompt += 'Provide a summary in bullet point format. ' + break + case 'paragraph': + prompt += 'Provide a summary in paragraph format. ' + break + case 'concise': + prompt += 'Provide a very concise summary in 1-2 sentences. ' + break default: - prompt += "Provide a clear and concise summary. "; + prompt += 'Provide a clear and concise summary. ' } if (options.focus && options.focus.length > 0) { - prompt += `Focus on the following aspects: ${options.focus.join(", ")}. `; + prompt += `Focus on the following aspects: ${options.focus.join(', ')}. ` } if (options.maxLength) { - prompt += `Keep the summary under ${options.maxLength} tokens. `; + prompt += `Keep the summary under ${options.maxLength} tokens. ` } - return prompt; + return prompt } /** @@ -186,47 +179,50 @@ export class Anthropic extends BaseAdapter< private mapCommonOptionsToAnthropic(options: AnthropicChatOptions) { const providerOptions = options.providerOptions as | InternalTextProviderOptions - | undefined; + | undefined - const formattedMessages = this.formatMessages(options.messages); + const formattedMessages = this.formatMessages(options.messages) const tools = options.tools ? 
convertToolsToProviderFormat(options.tools) - : undefined; + : undefined // Filter out invalid fields from providerOptions (like 'store' which is OpenAI-specific) - const validProviderOptions: Partial = {}; + const validProviderOptions: Partial = {} if (providerOptions) { - const validKeys: (keyof InternalTextProviderOptions)[] = [ - "container", - "context_management", - "mcp_servers", - "service_tier", - "stop_sequences", - "system", - "thinking", - "tool_choice", - "top_k", - ]; + const validKeys: Array = [ + 'container', + 'context_management', + 'mcp_servers', + 'service_tier', + 'stop_sequences', + 'system', + 'thinking', + 'tool_choice', + 'top_k', + ] for (const key of validKeys) { if (key in providerOptions) { - const value = (providerOptions)[key]; + const value = providerOptions[key] // Anthropic expects tool_choice to be an object, not a string - if (key === "tool_choice" && typeof value === "string") { - (validProviderOptions as any)[key] = { type: value }; + if (key === 'tool_choice' && typeof value === 'string') { + ;(validProviderOptions as any)[key] = { type: value } } else { - (validProviderOptions as any)[key] = value; + ;(validProviderOptions as any)[key] = value } } } } // Ensure max_tokens is greater than thinking.budget_tokens if thinking is enabled - const thinkingBudget = validProviderOptions.thinking?.type === "enabled" ? validProviderOptions.thinking?.budget_tokens : undefined; - const defaultMaxTokens = options.options?.maxTokens || 1024; + const thinkingBudget = + validProviderOptions.thinking?.type === 'enabled' + ? validProviderOptions.thinking.budget_tokens + : undefined + const defaultMaxTokens = options.options?.maxTokens || 1024 const maxTokens = thinkingBudget && thinkingBudget >= defaultMaxTokens ? 
thinkingBudget + 1 // Ensure max_tokens is greater than budget_tokens - : defaultMaxTokens; + : defaultMaxTokens const requestParams: InternalTextProviderOptions = { model: options.model, @@ -236,193 +232,194 @@ export class Anthropic extends BaseAdapter< messages: formattedMessages, tools: tools, ...validProviderOptions, - }; - return requestParams; + } + validateTextProviderOptions(requestParams) + return requestParams } private formatMessages( - messages: ModelMessage[] - ): InternalTextProviderOptions["messages"] { - const formattedMessages: InternalTextProviderOptions["messages"] = []; + messages: Array, + ): InternalTextProviderOptions['messages'] { + const formattedMessages: InternalTextProviderOptions['messages'] = [] for (const message of messages) { - const role = message.role ?? "user"; + const role = message.role - if (role === "system") { - continue; + if (role === 'system') { + continue } - if (role === "tool" && message.toolCallId) { + if (role === 'tool' && message.toolCallId) { formattedMessages.push({ - role: "user", + role: 'user', content: [ { - type: "tool_result", + type: 'tool_result', tool_use_id: message.toolCallId, - content: message.content ?? "", + content: message.content ?? '', }, ], - }); - continue; + }) + continue } - if (role === "assistant" && message.toolCalls?.length) { - const contentBlocks: AnthropicContentBlocks = []; + if (role === 'assistant' && message.toolCalls?.length) { + const contentBlocks: AnthropicContentBlocks = [] if (message.content) { const textBlock: AnthropicContentBlock = { - type: "text", + type: 'text', text: message.content, - }; - contentBlocks.push(textBlock); + } + contentBlocks.push(textBlock) } for (const toolCall of message.toolCalls) { - let parsedInput: unknown = {}; + let parsedInput: unknown = {} try { parsedInput = toolCall.function.arguments ? 
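The `max_tokens` adjustment above exists because Anthropic requires `max_tokens` to be strictly greater than `thinking.budget_tokens` when extended thinking is enabled. A minimal standalone version of that computation:

```typescript
// Sketch of the max_tokens resolution: when the thinking budget meets or
// exceeds the requested max_tokens, bump max_tokens to budget + 1 so it stays
// strictly above the budget, as the API requires.
type Thinking =
  | { type: 'enabled'; budget_tokens: number }
  | { type: 'disabled' }

function resolveMaxTokens(
  thinking: Thinking | undefined,
  requestedMaxTokens?: number,
): number {
  const thinkingBudget =
    thinking?.type === 'enabled' ? thinking.budget_tokens : undefined
  const defaultMaxTokens = requestedMaxTokens || 1024
  return thinkingBudget && thinkingBudget >= defaultMaxTokens
    ? thinkingBudget + 1 // keep max_tokens strictly above the thinking budget
    : defaultMaxTokens
}
```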
JSON.parse(toolCall.function.arguments) - : {}; + : {} } catch { - parsedInput = toolCall.function.arguments; + parsedInput = toolCall.function.arguments } const toolUseBlock: AnthropicContentBlock = { - type: "tool_use", + type: 'tool_use', id: toolCall.id, name: toolCall.function.name, input: parsedInput, - }; - contentBlocks.push(toolUseBlock); + } + contentBlocks.push(toolUseBlock) } formattedMessages.push({ - role: "assistant", + role: 'assistant', content: contentBlocks, - }); + }) - continue; + continue } formattedMessages.push({ - role: role === "assistant" ? "assistant" : "user", - content: message.content ?? "", - }); + role: role === 'assistant' ? 'assistant' : 'user', + content: message.content ?? '', + }) } - return formattedMessages; + return formattedMessages } private async *processAnthropicStream( stream: AsyncIterable, model: string, - generateId: () => string + generateId: () => string, ): AsyncIterable { - let accumulatedContent = ""; - let accumulatedThinking = ""; - const timestamp = Date.now(); + let accumulatedContent = '' + let accumulatedThinking = '' + const timestamp = Date.now() const toolCallsMap = new Map< number, { id: string; name: string; input: string } - >(); - let currentToolIndex = -1; + >() + let currentToolIndex = -1 try { for await (const event of stream) { - if (event.type === "content_block_start") { - if (event.content_block.type === "tool_use") { - currentToolIndex++; + if (event.type === 'content_block_start') { + if (event.content_block.type === 'tool_use') { + currentToolIndex++ toolCallsMap.set(currentToolIndex, { id: event.content_block.id, name: event.content_block.name, - input: "", - }); - } else if (event.content_block.type === "thinking") { + input: '', + }) + } else if (event.content_block.type === 'thinking') { // Reset thinking content when a new thinking block starts - accumulatedThinking = ""; + accumulatedThinking = '' } - } else if (event.type === "content_block_delta") { - if (event.delta.type === 
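The tool-call branch above parses `toolCall.function.arguments` defensively: arguments arrive as a JSON string, and a parse failure falls back to the raw string instead of throwing mid-request. Extracted as a standalone sketch:

```typescript
// Sketch of the defensive tool-argument parsing in formatMessages: empty
// arguments become {}, valid JSON is parsed, and malformed JSON is passed
// through as the raw string.
function parseToolInput(args: string | undefined): unknown {
  if (!args) return {}
  try {
    return JSON.parse(args)
  } catch {
    return args
  }
}
```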
"text_delta") { - const delta = event.delta.text; - accumulatedContent += delta; + } else if (event.type === 'content_block_delta') { + if (event.delta.type === 'text_delta') { + const delta = event.delta.text + accumulatedContent += delta yield { - type: "content", + type: 'content', id: generateId(), model: model, timestamp, delta, content: accumulatedContent, - role: "assistant", - }; - } else if (event.delta.type === "thinking_delta") { - // Handle thinking content - const delta = event.delta.thinking ?? ""; - accumulatedThinking += delta; + role: 'assistant', + } + } else if (event.delta.type === 'thinking_delta') { + // Handle thinking content + const delta = event.delta.thinking + accumulatedThinking += delta yield { - type: "thinking", + type: 'thinking', id: generateId(), model: model, timestamp, delta, content: accumulatedThinking, - }; - } else if (event.delta.type === "input_json_delta") { + } + } else if (event.delta.type === 'input_json_delta') { // Tool input is being streamed - const existing = toolCallsMap.get(currentToolIndex); + const existing = toolCallsMap.get(currentToolIndex) if (existing) { - existing.input += event.delta.partial_json; + existing.input += event.delta.partial_json yield { - type: "tool_call", + type: 'tool_call', id: generateId(), model: model, timestamp, toolCall: { id: existing.id, - type: "function", + type: 'function', function: { name: existing.name, arguments: event.delta.partial_json, }, }, index: currentToolIndex, - }; + } } } - } else if (event.type === "message_stop") { + } else if (event.type === 'message_stop') { yield { - type: "done", + type: 'done', id: generateId(), model: model, timestamp, - finishReason: "stop", - }; - } else if (event.type === "message_delta") { + finishReason: 'stop', + } + } else if (event.type === 'message_delta') { if (event.delta.stop_reason) { yield { - type: "done", + type: 'done', id: generateId(), model: model, timestamp, finishReason: - event.delta.stop_reason === "tool_use" - ? 
"tool_calls" - // TODO Fix the any and map the responses properly - : (event.delta.stop_reason as any), - - usage: event.usage - ? { - promptTokens: event.usage.input_tokens || 0, - completionTokens: event.usage.output_tokens || 0, - totalTokens: (event.usage.input_tokens || 0) + (event.usage.output_tokens || 0), - } - : undefined, - }; + event.delta.stop_reason === 'tool_use' + ? 'tool_calls' + : // TODO Fix the any and map the responses properly + (event.delta.stop_reason as any), + + usage: { + promptTokens: event.usage.input_tokens || 0, + completionTokens: event.usage.output_tokens || 0, + totalTokens: + (event.usage.input_tokens || 0) + + (event.usage.output_tokens || 0), + }, + } } } } } catch (error: any) { - console.error("[Anthropic Adapter] Error in processAnthropicStream:", { + console.error('[Anthropic Adapter] Error in processAnthropicStream:', { message: error?.message, status: error?.status, statusText: error?.statusText, @@ -430,18 +427,18 @@ export class Anthropic extends BaseAdapter< type: error?.type, error: error, stack: error?.stack, - }); + }) yield { - type: "error", + type: 'error', id: generateId(), model: model, timestamp, error: { - message: error?.message || "Unknown error occurred", + message: error?.message || 'Unknown error occurred', code: error?.code || error?.status, }, - }; + } } } } @@ -464,9 +461,9 @@ export class Anthropic extends BaseAdapter< */ export function createAnthropic( apiKey: string, - config?: Omit + config?: Omit, ): Anthropic { - return new Anthropic({ apiKey, ...config }); + return new Anthropic({ apiKey, ...config }) } /** @@ -486,20 +483,20 @@ export function createAnthropic( * const aiInstance = ai(anthropic()); * ``` */ -export function anthropic(config?: Omit): Anthropic { +export function anthropic(config?: Omit): Anthropic { const env = - typeof globalThis !== "undefined" && (globalThis as any).window?.env + typeof globalThis !== 'undefined' && (globalThis as any).window?.env ? 
(globalThis as any).window.env - : typeof process !== "undefined" + : typeof process !== 'undefined' ? process.env - : undefined; - const key = env?.ANTHROPIC_API_KEY; + : undefined + const key = env?.ANTHROPIC_API_KEY if (!key) { throw new Error( - "ANTHROPIC_API_KEY is required. Please set it in your environment variables or use createAnthropic(apiKey, config) instead." - ); + 'ANTHROPIC_API_KEY is required. Please set it in your environment variables or use createAnthropic(apiKey, config) instead.', + ) } - return createAnthropic(key, config); + return createAnthropic(key, config) } diff --git a/packages/typescript/ai-anthropic/src/index.ts b/packages/typescript/ai-anthropic/src/index.ts index 9338b9fa6..718165016 100644 --- a/packages/typescript/ai-anthropic/src/index.ts +++ b/packages/typescript/ai-anthropic/src/index.ts @@ -1,10 +1,13 @@ -export { Anthropic, createAnthropic, anthropic, type AnthropicConfig } from "./anthropic-adapter"; -export type { AnthropicChatModelProviderOptionsByName } from "./model-meta"; +export { + Anthropic, + createAnthropic, + anthropic, + type AnthropicConfig, +} from './anthropic-adapter' +export type { AnthropicChatModelProviderOptionsByName } from './model-meta' // Export tool conversion utilities -export { - convertToolsToProviderFormat, -} from "./tools/tool-converter"; +export { convertToolsToProviderFormat } from './tools/tool-converter' // Export tool types -export type { AnthropicTool, CustomTool } from "./tools"; \ No newline at end of file +export type { AnthropicTool, CustomTool } from './tools' diff --git a/packages/typescript/ai-anthropic/src/model-meta.ts b/packages/typescript/ai-anthropic/src/model-meta.ts index 56b817017..19f5079a2 100644 --- a/packages/typescript/ai-anthropic/src/model-meta.ts +++ b/packages/typescript/ai-anthropic/src/model-meta.ts @@ -1,377 +1,449 @@ -import type { - AnthropicContainerOptions, - AnthropicContextManagementOptions, - AnthropicMCPOptions, - AnthropicServiceTierOptions, - 
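The `anthropic()` factory above resolves the API key from `window.env` or `process.env` and fails hard when it is absent. The lookup can be sketched as a pure function over an environment object (the environment shape is an assumption for illustration):

```typescript
// Sketch of the ANTHROPIC_API_KEY resolution in the anthropic() factory:
// a resolved environment object is probed for the key, and a missing key is
// a hard error directing callers to createAnthropic(apiKey, config).
function resolveAnthropicApiKey(
  env: Record<string, string | undefined> | undefined,
): string {
  const key = env?.ANTHROPIC_API_KEY
  if (!key) {
    throw new Error(
      'ANTHROPIC_API_KEY is required. Set it in your environment variables or use createAnthropic(apiKey, config) instead.',
    )
  }
  return key
}
```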
AnthropicStopSequencesOptions, - AnthropicThinkingOptions, - AnthropicToolChoiceOptions, - AnthropicSamplingOptions, -} from "./text/text-provider-options"; - -interface ModelMeta< - TProviderOptions = unknown, - TToolCapabilities = unknown, - TMessageCapabilities = unknown -> { - name: string; - id: string; - supports: { - extended_thinking?: boolean; - priority_tier?: boolean; - - }; - context_window?: number; - max_output_tokens?: number; - knowledge_cutoff?: string; - pricing: { - input: { - normal: number; - cached?: number; - }; - output: { - normal: number; - }; - }; - /** - * Type-level description of which provider options this model supports. - */ - providerOptions?: TProviderOptions; - /** - * Type-level description of which tool capabilities this model supports. - */ - toolCapabilities?: TToolCapabilities; - /** - * Type-level description of which message/input capabilities this model supports. - */ - messageCapabilities?: TMessageCapabilities; -} -const CLAUDE_SONNET_4_5 = { - name: "claude-sonnet-4-5", - id: "claude-sonnet-4-5-20250929", - context_window: 200_000, - max_output_tokens: 64_000, - knowledge_cutoff: "2025-09-29", - pricing: { - input: { - normal: 3, - }, - output: { - normal: 15 - } - }, - supports: { - extended_thinking: true, - priority_tier: true - } -} as const satisfies ModelMeta< - AnthropicContainerOptions & - AnthropicContextManagementOptions & - AnthropicMCPOptions & - AnthropicServiceTierOptions & - AnthropicStopSequencesOptions & - AnthropicThinkingOptions & - AnthropicToolChoiceOptions & - AnthropicSamplingOptions ->; - -const CLAUDE_HAIKU_4_5 = { - name: "claude-haiku-4-5", - id: "claude-haiku-4-5-20251001", - context_window: 200_000, - max_output_tokens: 64_000, - knowledge_cutoff: "2025-10-01", - pricing: { - input: { - normal: 1, - }, - output: { - normal: 5 - } - }, - supports: { - extended_thinking: true, - priority_tier: true - } -} as const satisfies ModelMeta< - AnthropicContainerOptions & - 
AnthropicContextManagementOptions & - AnthropicMCPOptions & - AnthropicServiceTierOptions & - AnthropicStopSequencesOptions & - AnthropicThinkingOptions & - AnthropicToolChoiceOptions & - AnthropicSamplingOptions ->; - -const CLAUDE_OPUS_4_1 = { - name: "claude-opus-4-1", - id: "claude-opus-4-1-20250805", - context_window: 200_000, - max_output_tokens: 64_000, - knowledge_cutoff: "2025-08-05", - pricing: { - input: { - normal: 15, - }, - output: { - normal: 75 - } - }, - supports: { - extended_thinking: true, - priority_tier: true - } -} as const satisfies ModelMeta< - AnthropicContainerOptions & - AnthropicContextManagementOptions & - AnthropicMCPOptions & - AnthropicServiceTierOptions & - AnthropicStopSequencesOptions & - AnthropicThinkingOptions & - AnthropicToolChoiceOptions & - AnthropicSamplingOptions ->; - -const CLAUDE_OPUS_4_5 = { - name: "claude-opus-4-5", - id: "claude-opus-4-5-20251101", - context_window: 200_000, - max_output_tokens: 32_000, - knowledge_cutoff: "2025-11-01", - pricing: { - input: { - normal: 15, - }, - output: { - normal: 75 - } - }, - supports: { - extended_thinking: true, - priority_tier: true - } -} as const satisfies ModelMeta< - AnthropicContainerOptions & - AnthropicContextManagementOptions & - AnthropicMCPOptions & - AnthropicServiceTierOptions & - AnthropicStopSequencesOptions & - AnthropicThinkingOptions & - AnthropicToolChoiceOptions & - AnthropicSamplingOptions ->; - -const CLAUDE_SONNET_4 = { - name: "claude-sonnet-4", - id: "claude-sonnet-4-20250514", - context_window: 200_000, - max_output_tokens: 64_000, - knowledge_cutoff: "2025-05-14", - pricing: { - input: { - normal: 3, - }, - output: { - normal: 15 - } - }, - supports: { - extended_thinking: true, - priority_tier: true - } -} as const satisfies ModelMeta< - AnthropicContainerOptions & - AnthropicContextManagementOptions & - AnthropicMCPOptions & - AnthropicServiceTierOptions & - AnthropicStopSequencesOptions & - AnthropicThinkingOptions & - 
AnthropicToolChoiceOptions & - AnthropicSamplingOptions ->; - -const CLAUDE_SONNET_3_7 = { - name: "claude-sonnet-3-7", - id: "claude-3-7-sonnet-20250219", - max_output_tokens: 64_000, - knowledge_cutoff: "2025-05-14", - pricing: { - input: { - normal: 3, - }, - output: { - normal: 15 - } - }, - supports: { - extended_thinking: true, - priority_tier: true - } -} as const satisfies ModelMeta< - AnthropicContainerOptions & - AnthropicContextManagementOptions & - AnthropicMCPOptions & - AnthropicServiceTierOptions & - AnthropicStopSequencesOptions & - AnthropicThinkingOptions & - AnthropicToolChoiceOptions & - AnthropicSamplingOptions ->; - -const CLAUDE_OPUS_4 = { - name: "claude-opus-4", - id: "claude-opus-4-20250514", - context_window: 200_000, - max_output_tokens: 32_000, - knowledge_cutoff: "2025-05-14", - pricing: { - input: { - normal: 15, - }, - output: { - normal: 75 - } - }, - supports: { - extended_thinking: true, - priority_tier: true - } -} as const satisfies ModelMeta< - AnthropicContainerOptions & - AnthropicContextManagementOptions & - AnthropicMCPOptions & - AnthropicServiceTierOptions & - AnthropicStopSequencesOptions & - AnthropicThinkingOptions & - AnthropicToolChoiceOptions & - AnthropicSamplingOptions ->; - -const CLAUDE_HAIKU_3_5 = { - name: "claude-haiku-3-5", - id: "claude-3-5-haiku-20241022", - context_window: 200_000, - max_output_tokens: 8_000, - knowledge_cutoff: "2025-10-22", - pricing: { - input: { - normal: 0.8, - }, - output: { - normal: 4 - } - }, - supports: { - extended_thinking: false, - priority_tier: true - } -} as const satisfies ModelMeta< - AnthropicContainerOptions & - AnthropicContextManagementOptions & - AnthropicMCPOptions & - AnthropicServiceTierOptions & - AnthropicStopSequencesOptions & - AnthropicThinkingOptions & - AnthropicToolChoiceOptions & - AnthropicSamplingOptions ->; - -const CLAUDE_HAIKU_3 = { - name: "claude-haiku-3", - id: "claude-3-haiku-20240307", - context_window: 200_000, - max_output_tokens: 4_000, - 
knowledge_cutoff: "2024-03-07", - pricing: { - input: { - normal: 0.25, - }, - output: { - normal: 1.25 - } - }, - supports: { - extended_thinking: false, - priority_tier: false - } -} as const satisfies ModelMeta< - AnthropicContainerOptions & - AnthropicContextManagementOptions & - AnthropicMCPOptions & - AnthropicServiceTierOptions & - AnthropicStopSequencesOptions & - AnthropicThinkingOptions & - AnthropicToolChoiceOptions & - AnthropicSamplingOptions ->; - -export const ANTHROPIC_MODEL_META = { - [CLAUDE_OPUS_4_5.name]: CLAUDE_OPUS_4_5, - [CLAUDE_SONNET_4_5.name]: CLAUDE_SONNET_4_5, - [CLAUDE_HAIKU_4_5.name]: CLAUDE_HAIKU_4_5, - [CLAUDE_OPUS_4_1.name]: CLAUDE_OPUS_4_1, - [CLAUDE_SONNET_4.name]: CLAUDE_SONNET_4, - [CLAUDE_SONNET_3_7.name]: CLAUDE_SONNET_3_7, - [CLAUDE_OPUS_4.name]: CLAUDE_OPUS_4, - [CLAUDE_HAIKU_3_5.name]: CLAUDE_HAIKU_3_5, - [CLAUDE_HAIKU_3.name]: CLAUDE_HAIKU_3, -} as const; - -export type AnthropicModelMetaMap = typeof ANTHROPIC_MODEL_META; - -export type AnthropicModelProviderOptions< - TModel extends keyof AnthropicModelMetaMap -> = AnthropicModelMetaMap[TModel] extends ModelMeta - ? TProviderOptions - : unknown; - -export type AnthropicModelToolCapabilities< - TModel extends keyof AnthropicModelMetaMap -> = AnthropicModelMetaMap[TModel] extends ModelMeta - ? TToolCapabilities - : unknown; - -export type AnthropicModelMessageCapabilities< - TModel extends keyof AnthropicModelMetaMap -> = AnthropicModelMetaMap[TModel] extends ModelMeta - ? 
TMessageCapabilities - : unknown; - -export const ANTHROPIC_MODELS = [ - CLAUDE_OPUS_4_5.id, - CLAUDE_SONNET_4_5.id, - CLAUDE_HAIKU_4_5.id, - CLAUDE_OPUS_4_1.id, - CLAUDE_SONNET_4.id, - CLAUDE_SONNET_3_7.id, - CLAUDE_OPUS_4.id, - CLAUDE_HAIKU_3_5.id, - CLAUDE_HAIKU_3.id -] as const; - -export const ANTHROPIC_IMAGE_MODELS = [] as const; -export const ANTHROPIC_EMBEDDING_MODELS = [] as const; -export const ANTHROPIC_AUDIO_MODELS = [] as const; -export const ANTHROPIC_VIDEO_MODELS = [] as const; - -export type AnthropicModel = (typeof ANTHROPIC_MODELS)[number]; - -// Manual type map for per-model provider options -// Models are differentiated by extended_thinking and priority_tier support -export type AnthropicChatModelProviderOptionsByName = { - // Models with both extended_thinking and priority_tier - [CLAUDE_OPUS_4_5.id]: AnthropicContainerOptions & AnthropicContextManagementOptions & AnthropicMCPOptions & AnthropicServiceTierOptions & AnthropicStopSequencesOptions & AnthropicThinkingOptions & AnthropicToolChoiceOptions & AnthropicSamplingOptions; - [CLAUDE_SONNET_4_5.id]: AnthropicContainerOptions & AnthropicContextManagementOptions & AnthropicMCPOptions & AnthropicServiceTierOptions & AnthropicStopSequencesOptions & AnthropicThinkingOptions & AnthropicToolChoiceOptions & AnthropicSamplingOptions; - [CLAUDE_HAIKU_4_5.id]: AnthropicContainerOptions & AnthropicContextManagementOptions & AnthropicMCPOptions & AnthropicServiceTierOptions & AnthropicStopSequencesOptions & AnthropicThinkingOptions & AnthropicToolChoiceOptions & AnthropicSamplingOptions; - [CLAUDE_OPUS_4_1.id]: AnthropicContainerOptions & AnthropicContextManagementOptions & AnthropicMCPOptions & AnthropicServiceTierOptions & AnthropicStopSequencesOptions & AnthropicThinkingOptions & AnthropicToolChoiceOptions & AnthropicSamplingOptions; - [CLAUDE_SONNET_4.id]: AnthropicContainerOptions & AnthropicContextManagementOptions & AnthropicMCPOptions & AnthropicServiceTierOptions & AnthropicStopSequencesOptions 
& AnthropicThinkingOptions & AnthropicToolChoiceOptions & AnthropicSamplingOptions; - [CLAUDE_SONNET_3_7.id]: AnthropicContainerOptions & AnthropicContextManagementOptions & AnthropicMCPOptions & AnthropicServiceTierOptions & AnthropicStopSequencesOptions & AnthropicThinkingOptions & AnthropicToolChoiceOptions & AnthropicSamplingOptions; - [CLAUDE_OPUS_4.id]: AnthropicContainerOptions & AnthropicContextManagementOptions & AnthropicMCPOptions & AnthropicServiceTierOptions & AnthropicStopSequencesOptions & AnthropicThinkingOptions & AnthropicToolChoiceOptions & AnthropicSamplingOptions; - - // Model with priority_tier but NO extended_thinking - [CLAUDE_HAIKU_3_5.id]: AnthropicContainerOptions & AnthropicContextManagementOptions & AnthropicMCPOptions & AnthropicServiceTierOptions & AnthropicStopSequencesOptions & AnthropicToolChoiceOptions & AnthropicSamplingOptions; - - // Model with neither extended_thinking nor priority_tier - [CLAUDE_HAIKU_3.id]: AnthropicContainerOptions & AnthropicContextManagementOptions & AnthropicMCPOptions & AnthropicStopSequencesOptions & AnthropicToolChoiceOptions & AnthropicSamplingOptions; -}; \ No newline at end of file +import type { + AnthropicContainerOptions, + AnthropicContextManagementOptions, + AnthropicMCPOptions, + AnthropicSamplingOptions, + AnthropicServiceTierOptions, + AnthropicStopSequencesOptions, + AnthropicThinkingOptions, + AnthropicToolChoiceOptions, +} from './text/text-provider-options' + +interface ModelMeta< + TProviderOptions = unknown, + TToolCapabilities = unknown, + TMessageCapabilities = unknown, +> { + name: string + id: string + supports: { + extended_thinking?: boolean + priority_tier?: boolean + } + context_window?: number + max_output_tokens?: number + knowledge_cutoff?: string + pricing: { + input: { + normal: number + cached?: number + } + output: { + normal: number + } + } + /** + * Type-level description of which provider options this model supports. 
+ */ + providerOptions?: TProviderOptions + /** + * Type-level description of which tool capabilities this model supports. + */ + toolCapabilities?: TToolCapabilities + /** + * Type-level description of which message/input capabilities this model supports. + */ + messageCapabilities?: TMessageCapabilities +} +const CLAUDE_SONNET_4_5 = { + name: 'claude-sonnet-4-5', + id: 'claude-sonnet-4-5-20250929', + context_window: 200_000, + max_output_tokens: 64_000, + knowledge_cutoff: '2025-09-29', + pricing: { + input: { + normal: 3, + }, + output: { + normal: 15, + }, + }, + supports: { + extended_thinking: true, + priority_tier: true, + }, +} as const satisfies ModelMeta< + AnthropicContainerOptions & + AnthropicContextManagementOptions & + AnthropicMCPOptions & + AnthropicServiceTierOptions & + AnthropicStopSequencesOptions & + AnthropicThinkingOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions +> + +const CLAUDE_HAIKU_4_5 = { + name: 'claude-haiku-4-5', + id: 'claude-haiku-4-5-20251001', + context_window: 200_000, + max_output_tokens: 64_000, + knowledge_cutoff: '2025-10-01', + pricing: { + input: { + normal: 1, + }, + output: { + normal: 5, + }, + }, + supports: { + extended_thinking: true, + priority_tier: true, + }, +} as const satisfies ModelMeta< + AnthropicContainerOptions & + AnthropicContextManagementOptions & + AnthropicMCPOptions & + AnthropicServiceTierOptions & + AnthropicStopSequencesOptions & + AnthropicThinkingOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions +> + +const CLAUDE_OPUS_4_1 = { + name: 'claude-opus-4-1', + id: 'claude-opus-4-1-20250805', + context_window: 200_000, + max_output_tokens: 64_000, + knowledge_cutoff: '2025-08-05', + pricing: { + input: { + normal: 15, + }, + output: { + normal: 75, + }, + }, + supports: { + extended_thinking: true, + priority_tier: true, + }, +} as const satisfies ModelMeta< + AnthropicContainerOptions & + AnthropicContextManagementOptions & + AnthropicMCPOptions & + 
AnthropicServiceTierOptions & + AnthropicStopSequencesOptions & + AnthropicThinkingOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions +> + +const CLAUDE_OPUS_4_5 = { + name: 'claude-opus-4-5', + id: 'claude-opus-4-5-20251101', + context_window: 200_000, + max_output_tokens: 32_000, + knowledge_cutoff: '2025-11-01', + pricing: { + input: { + normal: 15, + }, + output: { + normal: 75, + }, + }, + supports: { + extended_thinking: true, + priority_tier: true, + }, +} as const satisfies ModelMeta< + AnthropicContainerOptions & + AnthropicContextManagementOptions & + AnthropicMCPOptions & + AnthropicServiceTierOptions & + AnthropicStopSequencesOptions & + AnthropicThinkingOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions +> + +const CLAUDE_SONNET_4 = { + name: 'claude-sonnet-4', + id: 'claude-sonnet-4-20250514', + context_window: 200_000, + max_output_tokens: 64_000, + knowledge_cutoff: '2025-05-14', + pricing: { + input: { + normal: 3, + }, + output: { + normal: 15, + }, + }, + supports: { + extended_thinking: true, + priority_tier: true, + }, +} as const satisfies ModelMeta< + AnthropicContainerOptions & + AnthropicContextManagementOptions & + AnthropicMCPOptions & + AnthropicServiceTierOptions & + AnthropicStopSequencesOptions & + AnthropicThinkingOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions +> + +const CLAUDE_SONNET_3_7 = { + name: 'claude-sonnet-3-7', + id: 'claude-3-7-sonnet-20250219', + max_output_tokens: 64_000, + knowledge_cutoff: '2025-05-14', + pricing: { + input: { + normal: 3, + }, + output: { + normal: 15, + }, + }, + supports: { + extended_thinking: true, + priority_tier: true, + }, +} as const satisfies ModelMeta< + AnthropicContainerOptions & + AnthropicContextManagementOptions & + AnthropicMCPOptions & + AnthropicServiceTierOptions & + AnthropicStopSequencesOptions & + AnthropicThinkingOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions +> + +const CLAUDE_OPUS_4 = { + name: 
'claude-opus-4', + id: 'claude-opus-4-20250514', + context_window: 200_000, + max_output_tokens: 32_000, + knowledge_cutoff: '2025-05-14', + pricing: { + input: { + normal: 15, + }, + output: { + normal: 75, + }, + }, + supports: { + extended_thinking: true, + priority_tier: true, + }, +} as const satisfies ModelMeta< + AnthropicContainerOptions & + AnthropicContextManagementOptions & + AnthropicMCPOptions & + AnthropicServiceTierOptions & + AnthropicStopSequencesOptions & + AnthropicThinkingOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions +> + +const CLAUDE_HAIKU_3_5 = { + name: 'claude-haiku-3-5', + id: 'claude-3-5-haiku-20241022', + context_window: 200_000, + max_output_tokens: 8_000, + knowledge_cutoff: '2025-10-22', + pricing: { + input: { + normal: 0.8, + }, + output: { + normal: 4, + }, + }, + supports: { + extended_thinking: false, + priority_tier: true, + }, +} as const satisfies ModelMeta< + AnthropicContainerOptions & + AnthropicContextManagementOptions & + AnthropicMCPOptions & + AnthropicServiceTierOptions & + AnthropicStopSequencesOptions & + AnthropicThinkingOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions +> + +const CLAUDE_HAIKU_3 = { + name: 'claude-haiku-3', + id: 'claude-3-haiku-20240307', + context_window: 200_000, + max_output_tokens: 4_000, + knowledge_cutoff: '2024-03-07', + pricing: { + input: { + normal: 0.25, + }, + output: { + normal: 1.25, + }, + }, + supports: { + extended_thinking: false, + priority_tier: false, + }, +} as const satisfies ModelMeta< + AnthropicContainerOptions & + AnthropicContextManagementOptions & + AnthropicMCPOptions & + AnthropicServiceTierOptions & + AnthropicStopSequencesOptions & + AnthropicThinkingOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions +> + +/* const ANTHROPIC_MODEL_META = { + [CLAUDE_OPUS_4_5.name]: CLAUDE_OPUS_4_5, + [CLAUDE_SONNET_4_5.name]: CLAUDE_SONNET_4_5, + [CLAUDE_HAIKU_4_5.name]: CLAUDE_HAIKU_4_5, + [CLAUDE_OPUS_4_1.name]: 
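The per-model `pricing` tables above read as USD per million tokens (e.g. Sonnet 4.5 at $3 input / $15 output). A hypothetical helper over that shape, not part of the adapter itself, shows how the metadata could drive a cost estimate:

```typescript
// Hypothetical cost helper over the model pricing metadata; the interface
// mirrors the pricing field of ModelMeta, and prices are interpreted as USD
// per million tokens, matching the tables in this file.
interface ModelPricing {
  input: { normal: number; cached?: number }
  output: { normal: number }
}

function estimateCostUSD(
  pricing: ModelPricing,
  inputTokens: number,
  outputTokens: number,
): number {
  return (
    (inputTokens / 1_000_000) * pricing.input.normal +
    (outputTokens / 1_000_000) * pricing.output.normal
  )
}

// Values taken from the CLAUDE_SONNET_4_5 entry above.
const sonnet45Pricing: ModelPricing = { input: { normal: 3 }, output: { normal: 15 } }
```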
CLAUDE_OPUS_4_1, + [CLAUDE_SONNET_4.name]: CLAUDE_SONNET_4, + [CLAUDE_SONNET_3_7.name]: CLAUDE_SONNET_3_7, + [CLAUDE_OPUS_4.name]: CLAUDE_OPUS_4, + [CLAUDE_HAIKU_3_5.name]: CLAUDE_HAIKU_3_5, + [CLAUDE_HAIKU_3.name]: CLAUDE_HAIKU_3, +} as const */ + +/* export type AnthropicModelProviderOptions< + TModel extends keyof AnthropicModelMetaMap, +> = + AnthropicModelMetaMap[TModel] extends ModelMeta< + infer TProviderOptions, + any, + any + > + ? TProviderOptions + : unknown */ + +/* export type AnthropicModelToolCapabilities< + TModel extends keyof AnthropicModelMetaMap, +> = + AnthropicModelMetaMap[TModel] extends ModelMeta< + any, + infer TToolCapabilities, + any + > + ? TToolCapabilities + : unknown + */ +/* export type AnthropicModelMessageCapabilities< + TModel extends keyof AnthropicModelMetaMap, +> = + AnthropicModelMetaMap[TModel] extends ModelMeta< + any, + any, + infer TMessageCapabilities + > + ? TMessageCapabilities + : unknown */ + +export const ANTHROPIC_MODELS = [ + CLAUDE_OPUS_4_5.id, + CLAUDE_SONNET_4_5.id, + CLAUDE_HAIKU_4_5.id, + CLAUDE_OPUS_4_1.id, + CLAUDE_SONNET_4.id, + CLAUDE_SONNET_3_7.id, + CLAUDE_OPUS_4.id, + CLAUDE_HAIKU_3_5.id, + CLAUDE_HAIKU_3.id, +] as const + +// const ANTHROPIC_IMAGE_MODELS = [] as const +// const ANTHROPIC_EMBEDDING_MODELS = [] as const +// const ANTHROPIC_AUDIO_MODELS = [] as const +// const ANTHROPIC_VIDEO_MODELS = [] as const + +/* type AnthropicModel = (typeof ANTHROPIC_MODELS)[number] */ + +// Manual type map for per-model provider options +// Models are differentiated by extended_thinking and priority_tier support +export type AnthropicChatModelProviderOptionsByName = { + // Models with both extended_thinking and priority_tier + [CLAUDE_OPUS_4_5.id]: AnthropicContainerOptions & + AnthropicContextManagementOptions & + AnthropicMCPOptions & + AnthropicServiceTierOptions & + AnthropicStopSequencesOptions & + AnthropicThinkingOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions + [CLAUDE_SONNET_4_5.id]: 
AnthropicContainerOptions & + AnthropicContextManagementOptions & + AnthropicMCPOptions & + AnthropicServiceTierOptions & + AnthropicStopSequencesOptions & + AnthropicThinkingOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions + [CLAUDE_HAIKU_4_5.id]: AnthropicContainerOptions & + AnthropicContextManagementOptions & + AnthropicMCPOptions & + AnthropicServiceTierOptions & + AnthropicStopSequencesOptions & + AnthropicThinkingOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions + [CLAUDE_OPUS_4_1.id]: AnthropicContainerOptions & + AnthropicContextManagementOptions & + AnthropicMCPOptions & + AnthropicServiceTierOptions & + AnthropicStopSequencesOptions & + AnthropicThinkingOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions + [CLAUDE_SONNET_4.id]: AnthropicContainerOptions & + AnthropicContextManagementOptions & + AnthropicMCPOptions & + AnthropicServiceTierOptions & + AnthropicStopSequencesOptions & + AnthropicThinkingOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions + [CLAUDE_SONNET_3_7.id]: AnthropicContainerOptions & + AnthropicContextManagementOptions & + AnthropicMCPOptions & + AnthropicServiceTierOptions & + AnthropicStopSequencesOptions & + AnthropicThinkingOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions + [CLAUDE_OPUS_4.id]: AnthropicContainerOptions & + AnthropicContextManagementOptions & + AnthropicMCPOptions & + AnthropicServiceTierOptions & + AnthropicStopSequencesOptions & + AnthropicThinkingOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions + + // Model with priority_tier but NO extended_thinking + [CLAUDE_HAIKU_3_5.id]: AnthropicContainerOptions & + AnthropicContextManagementOptions & + AnthropicMCPOptions & + AnthropicServiceTierOptions & + AnthropicStopSequencesOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions + + // Model with neither extended_thinking nor priority_tier + [CLAUDE_HAIKU_3.id]: AnthropicContainerOptions & + 
AnthropicContextManagementOptions & + AnthropicMCPOptions & + AnthropicStopSequencesOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions +} diff --git a/packages/typescript/ai-anthropic/src/text/text-provider-options.ts b/packages/typescript/ai-anthropic/src/text/text-provider-options.ts index 948b5713a..fc8720268 100644 --- a/packages/typescript/ai-anthropic/src/text/text-provider-options.ts +++ b/packages/typescript/ai-anthropic/src/text/text-provider-options.ts @@ -1,286 +1,205 @@ -import { MessageParam, TextBlockParam, } from "@anthropic-ai/sdk/resources/messages"; -import { AnthropicTool } from "../tools"; -import { BetaContextManagementConfig, BetaToolChoiceAny, BetaToolChoiceAuto, BetaToolChoiceTool } from "@anthropic-ai/sdk/resources/beta/messages/messages"; - -export interface AnthropicContainerOptions { - /** - * Container identifier for reuse across requests. - * Container parameters with skills to be loaded. - */ - container?: { - id: string | null; - /** - * List of skills to load into the container - */ - skills: { - /** - * Between 1-64 characters - */ - skill_id: string; - - type: "anthropic" | "custom"; - /** - * Skill version or latest by default - */ - version?: string - }[] | null - } | null -} - -export interface AnthropicContextManagementOptions { - /** - * Context management configuration. - -This allows you to control how Claude manages context across multiple requests, such as whether to clear function results or not. - */ - context_management?: BetaContextManagementConfig | null -} - -export interface AnthropicMCPOptions { - /** - * MCP servers to be utilized in this request - * Maximum of 20 servers - */ - mcp_servers?: MCPServer[] -} - -export interface AnthropicServiceTierOptions { - /** - * Determines whether to use priority capacity (if available) or standard capacity for this request. 
- */ - service_tier?: "auto" | "standard_only" -} - -export interface AnthropicStopSequencesOptions { - /** - * Custom text sequences that will cause the model to stop generating. - -Anthropic models will normally stop when they have naturally completed their turn, which will result in a response stop_reason of "end_turn". - -If you want the model to stop generating when it encounters custom strings of text, you can use the stop_sequences parameter. If the model encounters one of the custom sequences, the response stop_reason value will be "stop_sequence" and the response stop_sequence value will contain the matched stop sequence. - */ - stop_sequences?: string[]; -} - -export interface AnthropicThinkingOptions { - /** - * Configuration for enabling Claude's extended thinking. - -When enabled, responses include thinking content blocks showing Claude's thinking process before the final answer. Requires a minimum budget of 1,024 tokens and counts towards your max_tokens limit. - */ - thinking?: { - /** - * Determines how many tokens Claude can use for its internal reasoning process. Larger budgets can enable more thorough analysis for complex problems, improving response quality. - -Must be ≄1024 and less than max_tokens - */ - budget_tokens: number; - - type: "enabled" - } | { - type: "disabled" - } -} - -export interface AnthropicToolChoiceOptions { - tool_choice?: BetaToolChoiceAny | BetaToolChoiceTool | BetaToolChoiceAuto -} - -export interface AnthropicSamplingOptions { - /** - * Only sample from the top K options for each subsequent token. - -Used to remove "long tail" low probability responses. -Recommended for advanced use cases only. You usually only need to use temperature. 
- -Required range: x >= 0 - */ - top_k?: number; -} - -export type ExternalTextProviderOptions = AnthropicContainerOptions & - AnthropicContextManagementOptions & - AnthropicMCPOptions & - AnthropicServiceTierOptions & - AnthropicStopSequencesOptions & - AnthropicThinkingOptions & - AnthropicToolChoiceOptions & - AnthropicSamplingOptions; - -export interface InternalTextProviderOptions extends ExternalTextProviderOptions { - - model: string; - - messages: MessageParam[] - - /** - * The maximum number of tokens to generate before stopping. This parameter only specifies the absolute maximum number of tokens to generate. - * Range x >= 1. - */ - max_tokens: number; - /** - * Whether to incrementally stream the response using server-sent events. - */ - stream?: boolean; - /** - * stem prompt. - - A system prompt is a way of providing context and instructions to Claude, such as specifying a particular goal or role. - */ - system?: string | TextBlockParam[] - /** - * Amount of randomness injected into the response. - * Either use this or top_p, but not both. - * Defaults to 1.0. Ranges from 0.0 to 1.0. Use temperature closer to 0.0 for analytical / multiple choice, and closer to 1.0 for creative and generative tasks. - * @default 1.0 - */ - temperature?: number; - - tools?: AnthropicTool[] - - /** - * Use nucleus sampling. - -In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. You should either alter temperature or top_p, but not both. 
- */ - top_p?: number; -} - -export const validateTopPandTemperature = (options: InternalTextProviderOptions) => { - if (options.top_p !== null && options.temperature !== undefined) { - throw new Error("You should either set top_p or temperature, but not both."); - } -} - -export interface CacheControl { - type: "ephemeral", - ttl: "5m" | "1h" -} - -export const validateThinking = (options: InternalTextProviderOptions) => { - const thinking = options.thinking; - if (thinking && thinking.type === "enabled") { - if (thinking.budget_tokens < 1024) { - throw new Error("thinking.budget_tokens must be at least 1024."); - } - if (thinking.budget_tokens >= options.max_tokens) { - throw new Error("thinking.budget_tokens must be less than max_tokens."); - } - } -} -export type Citation = (CharacterLocationCitation | PageCitation | ContentBlockCitation | WebSearchResultCitation | RequestSearchResultLocation); - -interface CharacterLocationCitation { - cited_text: string; - /** - * Bigger than 0 - */ - document_index: number; - /** - * Between 1-255 characters - */ - document_title: string | null; - - end_char_index: number; - - start_char_index: number; - - type: "char_location" -} - -interface PageCitation { - cited_text: string; - /** - * Bigger than 0 - */ - document_index: number; - /** - * Between 1-255 characters - */ - document_title: string | null; - - end_page_number: number; - /** - * Has to be bigger than 0 - */ - start_page_number: number; - - type: "page_location" -} - -interface ContentBlockCitation { - cited_text: string; - /** - * Bigger than 0 - */ - document_index: number; - /** - * Between 1-255 characters - */ - document_title: string | null; - - end_block_index: number; - /** - * Has to be bigger than 0 - */ - start_block_index: number; - - type: "content_block_location" -} - -interface WebSearchResultCitation { - cited_text: string; - - encrypted_index: number; - /** - * Between 1-512 characters - */ - title: string | null; - /** - * Required length 
between 1-2048 characters - */ - url: string - type: "web_search_result_location" -} - -interface RequestSearchResultLocation { - cited_text: string; - - end_block_index: number; - /** - * Has to be bigger than 0 - */ - start_block_index: number; - /** - * Bigger than 0 - */ - search_result_index: number; - - source: string; - /** - * Between 1-512 characters - */ - title: string | null; - - type: "search_result_location" -} - - -interface MCPServer { - name: string; - url: string; - type: "url" - authorization_token?: string | null; - tool_configuration: { - allowed_tools?: string[] | null; - enabled?: boolean | null; - } | null; -} - - - -export const validateMaxTokens = (options: InternalTextProviderOptions) => { - if (options.max_tokens < 1) { - throw new Error("max_tokens must be at least 1."); - } -} \ No newline at end of file +import type { + BetaContextManagementConfig, + BetaToolChoiceAny, + BetaToolChoiceAuto, + BetaToolChoiceTool, +} from '@anthropic-ai/sdk/resources/beta/messages/messages' +import type { AnthropicTool } from '../tools' +import type { + MessageParam, + TextBlockParam, +} from '@anthropic-ai/sdk/resources/messages' + +export interface AnthropicContainerOptions { + /** + * Container identifier for reuse across requests. + * Container parameters with skills to be loaded. + */ + container?: { + id: string | null + /** + * List of skills to load into the container + */ + skills: Array<{ + /** + * Between 1-64 characters + */ + skill_id: string + + type: 'anthropic' | 'custom' + /** + * Skill version or latest by default + */ + version?: string + }> | null + } | null +} + +export interface AnthropicContextManagementOptions { + /** + * Context management configuration. + +This allows you to control how Claude manages context across multiple requests, such as whether to clear function results or not. 
+ */ + context_management?: BetaContextManagementConfig | null +} + +export interface AnthropicMCPOptions { + /** + * MCP servers to be utilized in this request + * Maximum of 20 servers + */ + mcp_servers?: Array<MCPServer> +} + +export interface AnthropicServiceTierOptions { + /** + * Determines whether to use priority capacity (if available) or standard capacity for this request. + */ + service_tier?: 'auto' | 'standard_only' +} + +export interface AnthropicStopSequencesOptions { + /** + * Custom text sequences that will cause the model to stop generating. + +Anthropic models will normally stop when they have naturally completed their turn, which will result in a response stop_reason of "end_turn". + +If you want the model to stop generating when it encounters custom strings of text, you can use the stop_sequences parameter. If the model encounters one of the custom sequences, the response stop_reason value will be "stop_sequence" and the response stop_sequence value will contain the matched stop sequence. + */ + stop_sequences?: Array<string> +} + +export interface AnthropicThinkingOptions { + /** + * Configuration for enabling Claude's extended thinking. + +When enabled, responses include thinking content blocks showing Claude's thinking process before the final answer. Requires a minimum budget of 1,024 tokens and counts towards your max_tokens limit. + */ + thinking?: + | { + /** +* Determines how many tokens Claude can use for its internal reasoning process. Larger budgets can enable more thorough analysis for complex problems, improving response quality. + +Must be ≄1024 and less than max_tokens +*/ + budget_tokens: number + + type: 'enabled' + } + | { + type: 'disabled' + } +} + +export interface AnthropicToolChoiceOptions { + tool_choice?: BetaToolChoiceAny | BetaToolChoiceTool | BetaToolChoiceAuto +} + +export interface AnthropicSamplingOptions { + /** + * Only sample from the top K options for each subsequent token.
+ +Used to remove "long tail" low probability responses. +Recommended for advanced use cases only. You usually only need to use temperature. + +Required range: x >= 0 + */ + top_k?: number +} + +export type ExternalTextProviderOptions = AnthropicContainerOptions & + AnthropicContextManagementOptions & + AnthropicMCPOptions & + AnthropicServiceTierOptions & + AnthropicStopSequencesOptions & + AnthropicThinkingOptions & + AnthropicToolChoiceOptions & + AnthropicSamplingOptions + +export interface InternalTextProviderOptions + extends ExternalTextProviderOptions { + model: string + + messages: Array<MessageParam> + + /** + * The maximum number of tokens to generate before stopping. This parameter only specifies the absolute maximum number of tokens to generate. + * Range x >= 1. + */ + max_tokens: number + /** + * Whether to incrementally stream the response using server-sent events. + */ + stream?: boolean + /** + * System prompt. + + A system prompt is a way of providing context and instructions to Claude, such as specifying a particular goal or role. + */ + system?: string | Array<TextBlockParam> + /** + * Amount of randomness injected into the response. + * Either use this or top_p, but not both. + * Defaults to 1.0. Ranges from 0.0 to 1.0. Use temperature closer to 0.0 for analytical / multiple choice, and closer to 1.0 for creative and generative tasks. + * @default 1.0 + */ + temperature?: number + + tools?: Array<AnthropicTool> + + /** + * Use nucleus sampling. + +In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. You should either alter temperature or top_p, but not both.
+ */ + top_p?: number +} + +const validateTopPandTemperature = (options: InternalTextProviderOptions) => { + if (options.top_p !== undefined && options.temperature !== undefined) { + throw new Error('You should either set top_p or temperature, but not both.') + } +} + +export interface CacheControl { + type: 'ephemeral' + ttl: '5m' | '1h' +} + +const validateThinking = (options: InternalTextProviderOptions) => { + const thinking = options.thinking + if (thinking && thinking.type === 'enabled') { + if (thinking.budget_tokens < 1024) { + throw new Error('thinking.budget_tokens must be at least 1024.') + } + if (thinking.budget_tokens >= options.max_tokens) { + throw new Error('thinking.budget_tokens must be less than max_tokens.') + } + } +} + +interface MCPServer { + name: string + url: string + type: 'url' + authorization_token?: string | null + tool_configuration: { + allowed_tools?: Array<string> | null + enabled?: boolean | null + } | null +} + +const validateMaxTokens = (options: InternalTextProviderOptions) => { + if (options.max_tokens < 1) { + throw new Error('max_tokens must be at least 1.') + } +} + +export const validateTextProviderOptions = ( + options: InternalTextProviderOptions, +) => { + validateTopPandTemperature(options) + validateThinking(options) + validateMaxTokens(options) +} diff --git a/packages/typescript/ai-anthropic/src/tools/bash-tool.ts b/packages/typescript/ai-anthropic/src/tools/bash-tool.ts index 2cfc132b6..7afcfa64a 100644 --- a/packages/typescript/ai-anthropic/src/tools/bash-tool.ts +++ b/packages/typescript/ai-anthropic/src/tools/bash-tool.ts @@ -1,26 +1,23 @@ -import { BetaToolBash20241022, BetaToolBash20250124 } from "@anthropic-ai/sdk/resources/beta"; -import type { Tool } from "@tanstack/ai"; - -export type BashTool = BetaToolBash20241022 | BetaToolBash20250124 - - - -export function createBashTool(config: BashTool): BashTool { - return config -} - -export function convertBashToolToAdapterFormat(tool: Tool): BashTool { - const metadata
= tool.metadata as BashTool; - return metadata -} -export function bashTool(config: BashTool): Tool { - return { - type: "function", - function: { - name: "bash", - description: "", - parameters: {} - }, - metadata: config - } -} \ No newline at end of file +import type { + BetaToolBash20241022, + BetaToolBash20250124, +} from '@anthropic-ai/sdk/resources/beta' +import type { Tool } from '@tanstack/ai' + +export type BashTool = BetaToolBash20241022 | BetaToolBash20250124 + +export function convertBashToolToAdapterFormat(tool: Tool): BashTool { + const metadata = tool.metadata as BashTool + return metadata +} +export function bashTool(config: BashTool): Tool { + return { + type: 'function', + function: { + name: 'bash', + description: '', + parameters: {}, + }, + metadata: config, + } +} diff --git a/packages/typescript/ai-anthropic/src/tools/code-execution-tool.ts b/packages/typescript/ai-anthropic/src/tools/code-execution-tool.ts index 77ec623d3..cb2dc1058 100644 --- a/packages/typescript/ai-anthropic/src/tools/code-execution-tool.ts +++ b/packages/typescript/ai-anthropic/src/tools/code-execution-tool.ts @@ -1,26 +1,28 @@ -import { BetaCodeExecutionTool20250522, BetaCodeExecutionTool20250825 } from "@anthropic-ai/sdk/resources/beta"; -import type { Tool } from "@tanstack/ai"; - -export type CodeExecutionTool = BetaCodeExecutionTool20250522 | BetaCodeExecutionTool20250825 - -export function createCodeExecutionTool(config: CodeExecutionTool): CodeExecutionTool { - return config -} - -export function convertCodeExecutionToolToAdapterFormat(tool: Tool): CodeExecutionTool { - const metadata = tool.metadata as CodeExecutionTool - return metadata -} - -export function codeExecutionTool(config: CodeExecutionTool): Tool { - return { - type: "function", - function: { - name: "code_execution", - description: "", - parameters: {} - }, - metadata: config - } -} - +import type { + BetaCodeExecutionTool20250522, + BetaCodeExecutionTool20250825, +} from 
'@anthropic-ai/sdk/resources/beta' +import type { Tool } from '@tanstack/ai' + +export type CodeExecutionTool = + | BetaCodeExecutionTool20250522 + | BetaCodeExecutionTool20250825 + +export function convertCodeExecutionToolToAdapterFormat( + tool: Tool, +): CodeExecutionTool { + const metadata = tool.metadata as CodeExecutionTool + return metadata +} + +export function codeExecutionTool(config: CodeExecutionTool): Tool { + return { + type: 'function', + function: { + name: 'code_execution', + description: '', + parameters: {}, + }, + metadata: config, + } +} diff --git a/packages/typescript/ai-anthropic/src/tools/computer-use-tool.ts b/packages/typescript/ai-anthropic/src/tools/computer-use-tool.ts index 4e5625795..bdd4110a7 100644 --- a/packages/typescript/ai-anthropic/src/tools/computer-use-tool.ts +++ b/packages/typescript/ai-anthropic/src/tools/computer-use-tool.ts @@ -1,28 +1,28 @@ -import { BetaToolComputerUse20241022, BetaToolComputerUse20250124 } from "@anthropic-ai/sdk/resources/beta"; -import type { Tool } from "@tanstack/ai"; - - -export type ComputerUseTool = BetaToolComputerUse20241022 | BetaToolComputerUse20250124 - -export function createComputerUseTool( - config: ComputerUseTool -): ComputerUseTool { - return config -} - -export function convertComputerUseToolToAdapterFormat(tool: Tool): ComputerUseTool { - const metadata = tool.metadata as ComputerUseTool - return metadata -} - -export function computerUseTool(config: ComputerUseTool): Tool { - return { - type: "function", - function: { - name: "computer", - description: "", - parameters: {} - }, - metadata: config - } -} \ No newline at end of file +import type { + BetaToolComputerUse20241022, + BetaToolComputerUse20250124, +} from '@anthropic-ai/sdk/resources/beta' +import type { Tool } from '@tanstack/ai' + +export type ComputerUseTool = + | BetaToolComputerUse20241022 + | BetaToolComputerUse20250124 + +export function convertComputerUseToolToAdapterFormat( + tool: Tool, +): ComputerUseTool { + 
const metadata = tool.metadata as ComputerUseTool + return metadata +} + +export function computerUseTool(config: ComputerUseTool): Tool { + return { + type: 'function', + function: { + name: 'computer', + description: '', + parameters: {}, + }, + metadata: config, + } +} diff --git a/packages/typescript/ai-anthropic/src/tools/custom-tool.ts b/packages/typescript/ai-anthropic/src/tools/custom-tool.ts index 3818a0da5..26e1021e3 100644 --- a/packages/typescript/ai-anthropic/src/tools/custom-tool.ts +++ b/packages/typescript/ai-anthropic/src/tools/custom-tool.ts @@ -1,53 +1,59 @@ -import { CacheControl } from "../text/text-provider-options"; -import type { Tool } from "@tanstack/ai"; - -export interface CustomTool { - /** - * The name of the tool. - */ - name: string; - type: "custom" - /** - * A brief description of what the tool does. Tool descriptions should be as detailed as possible. The more information that the model has about what the tool is and how to use it, the better it will perform. You can use natural language descriptions to reinforce important aspects of the tool input JSON schema. - */ - description: string; - /** - * This defines the shape of the input that your tool accepts and that the model will produce. 
- */ - input_schema: { - type: "object"; - properties: Record<string, unknown> | null; - required?: string[] | null; - } - - cache_control?: CacheControl | null -} - -export function convertCustomToolToAdapterFormat(tool: Tool): CustomTool { - const metadata = (tool.metadata as { cacheControl?: CacheControl | null }) || {}; - return { - name: tool.function.name, - type: "custom", - description: tool.function.description, - input_schema: { - type: "object", - properties: (tool.function.parameters as any)?.properties || null, - required: (tool.function.parameters as any)?.required || null, - }, - cache_control: metadata.cacheControl || null, - }; -} - -export function customTool(name: string, description: string, parameters: Record<string, unknown>, cacheControl?: CacheControl | null): Tool { - return { - type: "function", - function: { - name, - description, - parameters - }, - metadata: { - cacheControl - } - } -} \ No newline at end of file +import type { CacheControl } from '../text/text-provider-options' +import type { Tool } from '@tanstack/ai' + +export interface CustomTool { + /** + * The name of the tool. + */ + name: string + type: 'custom' + /** + * A brief description of what the tool does. Tool descriptions should be as detailed as possible. The more information that the model has about what the tool is and how to use it, the better it will perform. You can use natural language descriptions to reinforce important aspects of the tool input JSON schema. + */ + description: string + /** + * This defines the shape of the input that your tool accepts and that the model will produce.
+ */ + input_schema: { + type: 'object' + properties: Record<string, unknown> | null + required?: Array<string> | null + } + + cache_control?: CacheControl | null +} + +export function convertCustomToolToAdapterFormat(tool: Tool): CustomTool { + const metadata = + (tool.metadata as { cacheControl?: CacheControl | null } | undefined) || {} + return { + name: tool.function.name, + type: 'custom', + description: tool.function.description, + input_schema: { + type: 'object', + properties: tool.function.parameters.properties || null, + required: tool.function.parameters.required || null, + }, + cache_control: metadata.cacheControl || null, + } +} + +export function customTool( + name: string, + description: string, + parameters: Record<string, unknown>, + cacheControl?: CacheControl | null, +): Tool { + return { + type: 'function', + function: { + name, + description, + parameters, + }, + metadata: { + cacheControl, + }, + } +} diff --git a/packages/typescript/ai-anthropic/src/tools/index.ts b/packages/typescript/ai-anthropic/src/tools/index.ts index d66c87112..5012b2b5f 100644 --- a/packages/typescript/ai-anthropic/src/tools/index.ts +++ b/packages/typescript/ai-anthropic/src/tools/index.ts @@ -1,14 +1,13 @@ +import type { BashTool } from './bash-tool' +import type { CodeExecutionTool } from './code-execution-tool' +import type { ComputerUseTool } from './computer-use-tool' +import type { CustomTool } from './custom-tool' +import type { MemoryTool } from './memory-tool' +import type { TextEditorTool } from './text-editor-tool' +import type { WebFetchTool } from './web-fetch-tool' +import type { WebSearchTool } from './web-search-tool' -import { BashTool } from "./bash-tool"; -import { CodeExecutionTool } from "./code-execution-tool"; -import { ComputerUseTool } from "./computer-use-tool"; -import { CustomTool } from "./custom-tool"; -import { MemoryTool } from "./memory-tool"; -import { TextEditorTool } from "./text-editor-tool"; -import { WebFetchTool } from "./web-fetch-tool"; -import { WebSearchTool } from
"./web-search-tool"; - -export type AnthropicTool = ( +export type AnthropicTool = | BashTool | CodeExecutionTool | ComputerUseTool @@ -17,7 +16,15 @@ export type AnthropicTool = ( | TextEditorTool | WebFetchTool | WebSearchTool -); // Export individual tool types -export type { BashTool, CodeExecutionTool, ComputerUseTool, CustomTool, MemoryTool, TextEditorTool, WebFetchTool, WebSearchTool }; +export type { + // BashTool, + // CodeExecutionTool, + // ComputerUseTool, + CustomTool, + // MemoryTool, + // TextEditorTool, + // WebFetchTool, + // WebSearchTool, +} diff --git a/packages/typescript/ai-anthropic/src/tools/memory-tool.ts b/packages/typescript/ai-anthropic/src/tools/memory-tool.ts index e45cc3735..3318a48e3 100644 --- a/packages/typescript/ai-anthropic/src/tools/memory-tool.ts +++ b/packages/typescript/ai-anthropic/src/tools/memory-tool.ts @@ -1,24 +1,23 @@ -import { BetaMemoryTool20250818 } from "@anthropic-ai/sdk/resources/beta"; -import type { Tool } from "@tanstack/ai"; - -export type MemoryTool = BetaMemoryTool20250818 - - -export function convertMemoryToolToAdapterFormat(tool: Tool): MemoryTool { - const metadata = tool.metadata as MemoryTool - return metadata -} - -export function memoryTool(cacheControl?: MemoryTool): Tool { - return { - type: "function", - function: { - name: "memory", - description: "", - parameters: {} - }, - metadata: { - cacheControl - } - } -} \ No newline at end of file +import type { BetaMemoryTool20250818 } from '@anthropic-ai/sdk/resources/beta' +import type { Tool } from '@tanstack/ai' + +export type MemoryTool = BetaMemoryTool20250818 + +export function convertMemoryToolToAdapterFormat(tool: Tool): MemoryTool { + const metadata = tool.metadata as MemoryTool + return metadata +} + +export function memoryTool(cacheControl?: MemoryTool): Tool { + return { + type: 'function', + function: { + name: 'memory', + description: '', + parameters: {}, + }, + metadata: { + cacheControl, + }, + } +} diff --git 
a/packages/typescript/ai-anthropic/src/tools/text-editor-tool.ts b/packages/typescript/ai-anthropic/src/tools/text-editor-tool.ts index e41f9e4b0..401b09230 100644 --- a/packages/typescript/ai-anthropic/src/tools/text-editor-tool.ts +++ b/packages/typescript/ai-anthropic/src/tools/text-editor-tool.ts @@ -1,36 +1,32 @@ -import { ToolTextEditor20250124, ToolTextEditor20250429, ToolTextEditor20250728 } from "@anthropic-ai/sdk/resources/messages"; -import type { CacheControl } from "../text/text-provider-options"; -import type { Tool } from "@tanstack/ai"; - -export type TextEditorTool = ToolTextEditor20250124 | ToolTextEditor20250429 | ToolTextEditor20250728 - -export function createTextEditorTool<T extends TextEditorTool>(config: T): T { - return config -} - -export function convertTextEditorToolToAdapterFormat(tool: Tool): TextEditorTool { - const metadata = tool.metadata as TextEditorTool - return { - ...metadata, - - } -} - -export function textEditorTool<T extends TextEditorTool>(config: T): Tool { - return { - type: "function", - function: { - name: "str_replace_editor", - description: "", - parameters: {} - }, - metadata: config - } -} - -export interface TextEditor { - name: "str_replace_based_edit_tool"; - type: "text_editor_20250728"; - cache_control?: CacheControl | null - max_characters?: number | null; -} \ No newline at end of file +import type { + ToolTextEditor20250124, + ToolTextEditor20250429, + ToolTextEditor20250728, +} from '@anthropic-ai/sdk/resources/messages' +import type { Tool } from '@tanstack/ai' + +export type TextEditorTool = + | ToolTextEditor20250124 + | ToolTextEditor20250429 + | ToolTextEditor20250728 + +export function convertTextEditorToolToAdapterFormat( + tool: Tool, +): TextEditorTool { + const metadata = tool.metadata as TextEditorTool + return { + ...metadata, + } +} + +export function textEditorTool<T extends TextEditorTool>(config: T): Tool { + return { + type: 'function', + function: { + name: 'str_replace_editor', + description: '', + parameters: {}, + }, + metadata: config, + } +} diff --git
a/packages/typescript/ai-anthropic/src/tools/tool-converter.ts b/packages/typescript/ai-anthropic/src/tools/tool-converter.ts index 6c3735821..0b3e08f9b 100644 --- a/packages/typescript/ai-anthropic/src/tools/tool-converter.ts +++ b/packages/typescript/ai-anthropic/src/tools/tool-converter.ts @@ -1,61 +1,61 @@ -import type { Tool } from "@tanstack/ai"; -import { AnthropicTool, } from "."; -import { convertBashToolToAdapterFormat } from "./bash-tool"; -import { convertCodeExecutionToolToAdapterFormat } from "./code-execution-tool"; -import { convertComputerUseToolToAdapterFormat } from "./computer-use-tool"; -import { convertCustomToolToAdapterFormat } from "./custom-tool"; -import { convertMemoryToolToAdapterFormat } from "./memory-tool"; -import { convertTextEditorToolToAdapterFormat } from "./text-editor-tool"; -import { convertWebFetchToolToAdapterFormat } from "./web-fetch-tool"; -import { convertWebSearchToolToAdapterFormat } from "./web-search-tool"; - -/** - * Converts standard Tool format to Anthropic-specific tool format - * - * @param tools - Array of standard Tool objects - * @returns Array of Anthropic-specific tool definitions - * - * @example - * ```typescript - * const tools: Tool[] = [{ - * type: "function", - * function: { - * name: "get_weather", - * description: "Get weather for a location", - * parameters: { - * type: "object", - * properties: { location: { type: "string" } }, - * required: ["location"] - * } - * } - * }]; - * - * const anthropicTools = convertToolsToProviderFormat(tools); - * ``` - */ -export function convertToolsToProviderFormat<TTool extends Tool>( - tools: TTool[], -): AnthropicTool[] { - return tools.map(tool => { - const name = tool.function.name; - - switch (name) { - case "bash": - return convertBashToolToAdapterFormat(tool); - case "code_execution": - return convertCodeExecutionToolToAdapterFormat(tool); - case "computer": - return convertComputerUseToolToAdapterFormat(tool); - case "memory": - return
convertMemoryToolToAdapterFormat(tool); - case "str_replace_editor": - return convertTextEditorToolToAdapterFormat(tool); - case "web_fetch": - return convertWebFetchToolToAdapterFormat(tool); - case "web_search": - return convertWebSearchToolToAdapterFormat(tool); - default: - return convertCustomToolToAdapterFormat(tool); - } - }); -} +import { convertBashToolToAdapterFormat } from './bash-tool' +import { convertCodeExecutionToolToAdapterFormat } from './code-execution-tool' +import { convertComputerUseToolToAdapterFormat } from './computer-use-tool' +import { convertCustomToolToAdapterFormat } from './custom-tool' +import { convertMemoryToolToAdapterFormat } from './memory-tool' +import { convertTextEditorToolToAdapterFormat } from './text-editor-tool' +import { convertWebFetchToolToAdapterFormat } from './web-fetch-tool' +import { convertWebSearchToolToAdapterFormat } from './web-search-tool' +import type { AnthropicTool } from '.' +import type { Tool } from '@tanstack/ai' + +/** + * Converts standard Tool format to Anthropic-specific tool format + * + * @param tools - Array of standard Tool objects + * @returns Array of Anthropic-specific tool definitions + * + * @example + * ```typescript + * const tools: Tool[] = [{ + * type: "function", + * function: { + * name: "get_weather", + * description: "Get weather for a location", + * parameters: { + * type: "object", + * properties: { location: { type: "string" } }, + * required: ["location"] + * } + * } + * }]; + * + * const anthropicTools = convertToolsToProviderFormat(tools); + * ``` + */ +export function convertToolsToProviderFormat( + tools: Array<Tool>, +): Array<AnthropicTool> { + return tools.map((tool) => { + const name = tool.function.name + + switch (name) { + case 'bash': + return convertBashToolToAdapterFormat(tool) + case 'code_execution': + return convertCodeExecutionToolToAdapterFormat(tool) + case 'computer': + return convertComputerUseToolToAdapterFormat(tool) + case 'memory': + return
convertMemoryToolToAdapterFormat(tool) + case 'str_replace_editor': + return convertTextEditorToolToAdapterFormat(tool) + case 'web_fetch': + return convertWebFetchToolToAdapterFormat(tool) + case 'web_search': + return convertWebSearchToolToAdapterFormat(tool) + default: + return convertCustomToolToAdapterFormat(tool) + } + }) +} diff --git a/packages/typescript/ai-anthropic/src/tools/web-fetch-tool.ts b/packages/typescript/ai-anthropic/src/tools/web-fetch-tool.ts index 66bd9f7f9..196eb579f 100644 --- a/packages/typescript/ai-anthropic/src/tools/web-fetch-tool.ts +++ b/packages/typescript/ai-anthropic/src/tools/web-fetch-tool.ts @@ -1,39 +1,52 @@ -import { BetaWebFetchTool20250910 } from "@anthropic-ai/sdk/resources/beta"; -import type { CacheControl } from "../text/text-provider-options"; -import type { Tool } from "@tanstack/ai"; - -export type WebFetchTool = BetaWebFetchTool20250910 - - -export function convertWebFetchToolToAdapterFormat(tool: Tool): WebFetchTool { - const metadata = tool.metadata as { allowedDomains?: string[] | null; blockedDomains?: string[] | null; maxUses?: number | null; citations?: { enabled?: boolean } | null; maxContentTokens?: number | null; cacheControl?: CacheControl | null }; - return { - name: "web_fetch", - type: "web_fetch_20250910", - allowed_domains: metadata.allowedDomains, - blocked_domains: metadata.blockedDomains, - max_uses: metadata.maxUses, - citations: metadata.citations, - max_content_tokens: metadata.maxContentTokens, - cache_control: metadata.cacheControl || null, - }; -} - -export function webFetchTool(config?: { allowedDomains?: string[] | null; blockedDomains?: string[] | null; maxUses?: number | null; citations?: { enabled?: boolean } | null; maxContentTokens?: number | null; cacheControl?: CacheControl | null }): Tool { - return { - type: "function", - function: { - name: "web_fetch", - description: "", - parameters: {} - }, - metadata: { - allowedDomains: config?.allowedDomains, - blockedDomains: 
config?.blockedDomains, - maxUses: config?.maxUses, - citations: config?.citations, - maxContentTokens: config?.maxContentTokens, - cacheControl: config?.cacheControl - } - } -} \ No newline at end of file +import type { BetaWebFetchTool20250910 } from '@anthropic-ai/sdk/resources/beta' +import type { CacheControl } from '../text/text-provider-options' +import type { Tool } from '@tanstack/ai' + +export type WebFetchTool = BetaWebFetchTool20250910 + +export function convertWebFetchToolToAdapterFormat(tool: Tool): WebFetchTool { + const metadata = tool.metadata as { + allowedDomains?: Array<string> | null + blockedDomains?: Array<string> | null + maxUses?: number | null + citations?: { enabled?: boolean } | null + maxContentTokens?: number | null + cacheControl?: CacheControl | null + } + return { + name: 'web_fetch', + type: 'web_fetch_20250910', + allowed_domains: metadata.allowedDomains, + blocked_domains: metadata.blockedDomains, + max_uses: metadata.maxUses, + citations: metadata.citations, + max_content_tokens: metadata.maxContentTokens, + cache_control: metadata.cacheControl || null, + } +} + +export function webFetchTool(config?: { + allowedDomains?: Array<string> | null + blockedDomains?: Array<string> | null + maxUses?: number | null + citations?: { enabled?: boolean } | null + maxContentTokens?: number | null + cacheControl?: CacheControl | null +}): Tool { + return { + type: 'function', + function: { + name: 'web_fetch', + description: '', + parameters: {}, + }, + metadata: { + allowedDomains: config?.allowedDomains, + blockedDomains: config?.blockedDomains, + maxUses: config?.maxUses, + citations: config?.citations, + maxContentTokens: config?.maxContentTokens, + cacheControl: config?.cacheControl, + }, + } +} diff --git a/packages/typescript/ai-anthropic/src/tools/web-search-tool.ts b/packages/typescript/ai-anthropic/src/tools/web-search-tool.ts index e6a832e6e..2aea4a09a 100644 --- a/packages/typescript/ai-anthropic/src/tools/web-search-tool.ts +++
b/packages/typescript/ai-anthropic/src/tools/web-search-tool.ts @@ -1,60 +1,85 @@ -import { WebSearchTool20250305 } from "@anthropic-ai/sdk/resources/messages"; -import { CacheControl } from "../text/text-provider-options"; -import type { Tool } from "@tanstack/ai"; - -export type WebSearchTool = WebSearchTool20250305 - - -export const validateDomains = (tool: WebSearchTool) => { - if (tool.allowed_domains && tool.blocked_domains) { - throw new Error("allowed_domains and blocked_domains cannot be used together."); - } -} - -export const validateUserLocation = (userLocation: WebSearchTool["user_location"]) => { - if (userLocation) { - if (userLocation.city && (userLocation.city.length < 1 || userLocation.city.length > 255)) { - throw new Error("user_location.city must be between 1 and 255 characters."); - } - if (userLocation.country && userLocation.country.length !== 2) { - throw new Error("user_location.country must be exactly 2 characters."); - } - if (userLocation.region && (userLocation.region.length < 1 || userLocation.region.length > 255)) { - throw new Error("user_location.region must be between 1 and 255 characters."); - } - if (userLocation.timezone && (userLocation.timezone.length < 1 || userLocation.timezone.length > 255)) { - throw new Error("user_location.timezone must be between 1 and 255 characters."); - } - } -} - -export function convertWebSearchToolToAdapterFormat(tool: Tool): WebSearchTool { - const metadata = tool.metadata as { allowedDomains?: string[] | null; blockedDomains?: string[] | null; maxUses?: number | null; userLocation?: { type: "approximate"; city?: string | null; country?: string | null; region?: string | null; timezone?: string | null } | null; cacheControl?: CacheControl | null }; - return { - name: "web_search", - type: "web_search_20250305", - allowed_domains: metadata.allowedDomains, - blocked_domains: metadata.blockedDomains, - max_uses: metadata.maxUses, - user_location: metadata.userLocation, - cache_control: 
metadata.cacheControl || null, - }; -} - -export function webSearchTool(config?: { allowedDomains?: string[] | null; blockedDomains?: string[] | null; maxUses?: number | null; userLocation?: { type: "approximate"; city?: string | null; country?: string | null; region?: string | null; timezone?: string | null } | null; cacheControl?: CacheControl | null }): Tool { - return { - type: "function", - function: { - name: "web_search", - description: "", - parameters: {} - }, - metadata: { - allowedDomains: config?.allowedDomains, - blockedDomains: config?.blockedDomains, - maxUses: config?.maxUses, - userLocation: config?.userLocation, - cacheControl: config?.cacheControl - } - } -} \ No newline at end of file +import type { WebSearchTool20250305 } from '@anthropic-ai/sdk/resources/messages' +import type { CacheControl } from '../text/text-provider-options' +import type { Tool } from '@tanstack/ai' + +export type WebSearchTool = WebSearchTool20250305 + +const validateDomains = (tool: WebSearchTool) => { + if (tool.allowed_domains && tool.blocked_domains) { + throw new Error( + 'allowed_domains and blocked_domains cannot be used together.', + ) + } +} + +const validateUserLocation = (tool: WebSearchTool) => { + const userLocation = tool.user_location + if (userLocation) { + if ( + userLocation.city && + (userLocation.city.length < 1 || userLocation.city.length > 255) + ) { + throw new Error( + 'user_location.city must be between 1 and 255 characters.', + ) + } + if (userLocation.country && userLocation.country.length !== 2) { + throw new Error('user_location.country must be exactly 2 characters.') + } + if ( + userLocation.region && + (userLocation.region.length < 1 || userLocation.region.length > 255) + ) { + throw new Error( + 'user_location.region must be between 1 and 255 characters.', + ) + } + if ( + userLocation.timezone && + (userLocation.timezone.length < 1 || userLocation.timezone.length > 255) + ) { + throw new Error( + 'user_location.timezone must be between 1 
and 255 characters.',
+      )
+    }
+  }
+}
+
+export function convertWebSearchToolToAdapterFormat(tool: Tool): WebSearchTool {
+  const metadata = tool.metadata as {
+    allowedDomains?: Array<string> | null
+    blockedDomains?: Array<string> | null
+    maxUses?: number | null
+    userLocation?: {
+      type: 'approximate'
+      city?: string | null
+      country?: string | null
+      region?: string | null
+      timezone?: string | null
+    } | null
+    cacheControl?: CacheControl | null
+  }
+  return {
+    name: 'web_search',
+    type: 'web_search_20250305',
+    allowed_domains: metadata.allowedDomains,
+    blocked_domains: metadata.blockedDomains,
+    max_uses: metadata.maxUses,
+    user_location: metadata.userLocation,
+    cache_control: metadata.cacheControl || null,
+  }
+}
+
+export function webSearchTool(config: WebSearchTool): Tool {
+  validateDomains(config)
+  validateUserLocation(config)
+  return {
+    type: 'function',
+    function: {
+      name: 'web_search',
+      description: '',
+      parameters: {},
+    },
+    metadata: config,
+  }
+}
diff --git a/packages/typescript/ai-anthropic/tests/anthropic-adapter.test.ts b/packages/typescript/ai-anthropic/tests/anthropic-adapter.test.ts
index 1d57f2338..6958f3862 100644
--- a/packages/typescript/ai-anthropic/tests/anthropic-adapter.test.ts
+++ b/packages/typescript/ai-anthropic/tests/anthropic-adapter.test.ts
@@ -1,195 +1,196 @@
-import { describe, it, expect, beforeEach, vi } from "vitest";
-import { chat, type Tool, type StreamChunk } from "@tanstack/ai";
-import { Anthropic, type AnthropicProviderOptions } from "../src/anthropic-adapter";
-
-const mocks = vi.hoisted(() => {
-  const betaMessagesCreate = vi.fn();
-  const messagesCreate = vi.fn();
-
-  const client = {
-    beta: {
-      messages: {
-        create: betaMessagesCreate,
-      },
-    },
-    messages: {
-      create: messagesCreate,
-    },
-  };
-
-  return { betaMessagesCreate, messagesCreate, client };
-});
-
-vi.mock("@anthropic-ai/sdk", () => {
-  const { client } = mocks;
-
-  class MockAnthropic {
-    beta = client.beta;
-    messages = client.messages;
-
constructor(_: { apiKey: string }) { } - } - - return { default: MockAnthropic }; -}); - -const createAdapter = () => new Anthropic({ apiKey: "test-key" }); - -const toolArguments = JSON.stringify({ location: "Berlin" }); - -const weatherTool: Tool = { - type: "function", - function: { - name: "lookup_weather", - description: "Return the weather for a city", - parameters: { - type: "object", - properties: { - location: { type: "string" }, - }, - required: ["location"], - }, - }, -}; - -describe("Anthropic adapter option mapping", () => { - beforeEach(() => { - vi.clearAllMocks(); - }); - - it("maps normalized options and Anthropic provider settings", async () => { - // Mock the streaming response - const mockStream = (async function* () { - yield { - type: "content_block_start", - index: 0, - content_block: { type: "text", text: "" }, - }; - yield { - type: "content_block_delta", - index: 0, - delta: { type: "text_delta", text: "It will be sunny" }, - }; - yield { - type: "message_delta", - delta: { stop_reason: "end_turn" }, - usage: { output_tokens: 5 }, - }; - yield { - type: "message_stop", - }; - })(); - - mocks.betaMessagesCreate.mockResolvedValueOnce(mockStream); - - const providerOptions = { - container: { - id: "container-weather", - skills: [{ skill_id: "forecast", type: "custom", version: "1" }], - }, - mcp_servers: [ - { - name: "world-weather", - url: "https://mcp.example.com", - type: "url", - authorization_token: "secret", - tool_configuration: { - allowed_tools: ["lookup_weather"], - enabled: true, - }, - }, - ], - service_tier: "standard_only", - stop_sequences: [""], - thinking: { type: "enabled", budget_tokens: 1500 }, - top_k: 5, - system: "Respond with JSON", - } satisfies AnthropicProviderOptions & { system: string }; - - const adapter = createAdapter(); - - // Consume the stream to trigger the API call - const chunks: StreamChunk[] = []; - for await (const chunk of chat({ - adapter, - model: "claude-3-7-sonnet-20250219", - messages: [ - { 
role: "system", content: "Keep it structured" }, - { role: "user", content: "What is the forecast?" }, - { - role: "assistant", - content: "Checking", - toolCalls: [ - { - id: "call_weather", - type: "function", - function: { name: "lookup_weather", arguments: toolArguments }, - }, - ], - }, - { role: "tool", toolCallId: "call_weather", content: "{\"temp\":72}" }, - ], - tools: [weatherTool], - options: { - maxTokens: 3000, - temperature: 0.4, - topP: 0.8, - }, - providerOptions, - })) { - chunks.push(chunk); - } - - expect(mocks.betaMessagesCreate).toHaveBeenCalledTimes(1); - const [payload] = mocks.betaMessagesCreate.mock.calls[0]; - - expect(payload).toMatchObject({ - model: "claude-3-7-sonnet-20250219", - max_tokens: 3000, - temperature: 0.4, - top_p: 0.8, - container: providerOptions.container, - mcp_servers: providerOptions.mcp_servers, - service_tier: providerOptions.service_tier, - stop_sequences: providerOptions.stop_sequences, - thinking: providerOptions.thinking, - top_k: providerOptions.top_k, - system: providerOptions.system, - }); - expect(payload.stream).toBe(true); - - expect(payload.messages).toEqual([ - { - role: "user", - content: "What is the forecast?", - }, - { - role: "assistant", - content: [ - { type: "text", text: "Checking" }, - { - type: "tool_use", - id: "call_weather", - name: "lookup_weather", - input: { location: "Berlin" }, - }, - ], - }, - { - role: "user", - content: [ - { - type: "tool_result", - tool_use_id: "call_weather", - content: "{\"temp\":72}", - }, - ], - }, - ]); - - expect(payload.tools?.[0]).toMatchObject({ - name: "lookup_weather", - type: "custom", - }); - }); -}); +import { describe, it, expect, beforeEach, vi } from 'vitest' +import { chat, type Tool, type StreamChunk } from '@tanstack/ai' +import { + Anthropic, + type AnthropicProviderOptions, +} from '../src/anthropic-adapter' + +const mocks = vi.hoisted(() => { + const betaMessagesCreate = vi.fn() + const messagesCreate = vi.fn() + + const client = { + beta: { 
+ messages: { + create: betaMessagesCreate, + }, + }, + messages: { + create: messagesCreate, + }, + } + + return { betaMessagesCreate, messagesCreate, client } +}) + +vi.mock('@anthropic-ai/sdk', () => { + const { client } = mocks + + class MockAnthropic { + beta = client.beta + messages = client.messages + + constructor(_: { apiKey: string }) {} + } + + return { default: MockAnthropic } +}) + +const createAdapter = () => new Anthropic({ apiKey: 'test-key' }) + +const toolArguments = JSON.stringify({ location: 'Berlin' }) + +const weatherTool: Tool = { + type: 'function', + function: { + name: 'lookup_weather', + description: 'Return the weather for a city', + parameters: { + type: 'object', + properties: { + location: { type: 'string' }, + }, + required: ['location'], + }, + }, +} + +describe('Anthropic adapter option mapping', () => { + beforeEach(() => { + vi.clearAllMocks() + }) + + it('maps normalized options and Anthropic provider settings', async () => { + // Mock the streaming response + const mockStream = (async function* () { + yield { + type: 'content_block_start', + index: 0, + content_block: { type: 'text', text: '' }, + } + yield { + type: 'content_block_delta', + index: 0, + delta: { type: 'text_delta', text: 'It will be sunny' }, + } + yield { + type: 'message_delta', + delta: { stop_reason: 'end_turn' }, + usage: { output_tokens: 5 }, + } + yield { + type: 'message_stop', + } + })() + + mocks.betaMessagesCreate.mockResolvedValueOnce(mockStream) + + const providerOptions = { + container: { + id: 'container-weather', + skills: [{ skill_id: 'forecast', type: 'custom', version: '1' }], + }, + mcp_servers: [ + { + name: 'world-weather', + url: 'https://mcp.example.com', + type: 'url', + authorization_token: 'secret', + tool_configuration: { + allowed_tools: ['lookup_weather'], + enabled: true, + }, + }, + ], + service_tier: 'standard_only', + stop_sequences: [''], + thinking: { type: 'enabled', budget_tokens: 1500 }, + top_k: 5, + system: 'Respond with 
JSON', + } satisfies AnthropicProviderOptions & { system: string } + + const adapter = createAdapter() + + // Consume the stream to trigger the API call + const chunks: StreamChunk[] = [] + for await (const chunk of chat({ + adapter, + model: 'claude-3-7-sonnet-20250219', + messages: [ + { role: 'system', content: 'Keep it structured' }, + { role: 'user', content: 'What is the forecast?' }, + { + role: 'assistant', + content: 'Checking', + toolCalls: [ + { + id: 'call_weather', + type: 'function', + function: { name: 'lookup_weather', arguments: toolArguments }, + }, + ], + }, + { role: 'tool', toolCallId: 'call_weather', content: '{"temp":72}' }, + ], + tools: [weatherTool], + options: { + maxTokens: 3000, + temperature: 0.4, + }, + providerOptions, + })) { + chunks.push(chunk) + } + + expect(mocks.betaMessagesCreate).toHaveBeenCalledTimes(1) + const [payload] = mocks.betaMessagesCreate.mock.calls[0] + + expect(payload).toMatchObject({ + model: 'claude-3-7-sonnet-20250219', + max_tokens: 3000, + temperature: 0.4, + container: providerOptions.container, + mcp_servers: providerOptions.mcp_servers, + service_tier: providerOptions.service_tier, + stop_sequences: providerOptions.stop_sequences, + thinking: providerOptions.thinking, + top_k: providerOptions.top_k, + system: providerOptions.system, + }) + expect(payload.stream).toBe(true) + + expect(payload.messages).toEqual([ + { + role: 'user', + content: 'What is the forecast?', + }, + { + role: 'assistant', + content: [ + { type: 'text', text: 'Checking' }, + { + type: 'tool_use', + id: 'call_weather', + name: 'lookup_weather', + input: { location: 'Berlin' }, + }, + ], + }, + { + role: 'user', + content: [ + { + type: 'tool_result', + tool_use_id: 'call_weather', + content: '{"temp":72}', + }, + ], + }, + ]) + + expect(payload.tools?.[0]).toMatchObject({ + name: 'lookup_weather', + type: 'custom', + }) + }) +}) diff --git a/packages/typescript/ai-anthropic/tests/model-meta.test.ts 
b/packages/typescript/ai-anthropic/tests/model-meta.test.ts index 92571fceb..4c1f4f3e8 100644 --- a/packages/typescript/ai-anthropic/tests/model-meta.test.ts +++ b/packages/typescript/ai-anthropic/tests/model-meta.test.ts @@ -1,297 +1,466 @@ -import { describe, it, expectTypeOf } from "vitest"; -import type { - AnthropicChatModelProviderOptionsByName, -} from "../src/model-meta"; -import type { - AnthropicContainerOptions, - AnthropicContextManagementOptions, - AnthropicMCPOptions, - AnthropicServiceTierOptions, - AnthropicStopSequencesOptions, - AnthropicThinkingOptions, - AnthropicToolChoiceOptions, - AnthropicSamplingOptions, -} from "../src/text/text-provider-options"; - -/** - * Type assertion tests for Anthropic model provider options. - * - * These tests verify that: - * 1. Models with extended_thinking support have AnthropicThinkingOptions in their provider options - * 2. Models without extended_thinking support do NOT have AnthropicThinkingOptions - * 3. Models with priority_tier support have AnthropicServiceTierOptions in their provider options - * 4. Models without priority_tier support do NOT have AnthropicServiceTierOptions - * 5. 
All models have base options (container, context management, MCP, stop sequences, tool choice, sampling)
- */
-
-// Base options that ALL chat models should have
-type BaseOptions = AnthropicContainerOptions &
-  AnthropicContextManagementOptions &
-  AnthropicMCPOptions &
-  AnthropicStopSequencesOptions &
-  AnthropicToolChoiceOptions &
-  AnthropicSamplingOptions;
-
-describe("Anthropic Model Provider Options Type Assertions", () => {
-  describe("Models WITH extended_thinking support", () => {
-    it("claude-sonnet-4-5-20250929 should support thinking options", () => {
-      type Options = AnthropicChatModelProviderOptionsByName["claude-sonnet-4-5-20250929"];
-
-      // Should have thinking options
-      expectTypeOf<Options>().toExtend<AnthropicThinkingOptions>();
-
-      // Should have service tier options (priority_tier support)
-      expectTypeOf<Options>().toExtend<AnthropicServiceTierOptions>();
-
-      // Should have base options
-      expectTypeOf<Options>().toExtend<BaseOptions>();
-
-      // Verify specific properties exist
-      expectTypeOf<Options>().toHaveProperty("thinking");
-      expectTypeOf<Options>().toHaveProperty("service_tier");
-      expectTypeOf<Options>().toHaveProperty("container");
-      expectTypeOf<Options>().toHaveProperty("context_management");
-      expectTypeOf<Options>().toHaveProperty("mcp_servers");
-      expectTypeOf<Options>().toHaveProperty("stop_sequences");
-      expectTypeOf<Options>().toHaveProperty("tool_choice");
-      expectTypeOf<Options>().toHaveProperty("top_k");
-    });
-
-    it("claude-haiku-4-5-20251001 should support thinking options", () => {
-      type Options = AnthropicChatModelProviderOptionsByName["claude-haiku-4-5-20251001"];
-
-      expectTypeOf<Options>().toExtend<AnthropicThinkingOptions>();
-      expectTypeOf<Options>().toExtend<AnthropicServiceTierOptions>();
-      expectTypeOf<Options>().toExtend<BaseOptions>();
-    });
-
-    it("claude-opus-4-1-20250805 should support thinking options", () => {
-      type Options = AnthropicChatModelProviderOptionsByName["claude-opus-4-1-20250805"];
-
-      expectTypeOf<Options>().toExtend<AnthropicThinkingOptions>();
-      expectTypeOf<Options>().toExtend<AnthropicServiceTierOptions>();
-      expectTypeOf<Options>().toExtend<BaseOptions>();
-    });
-
-    it("claude-sonnet-4-20250514 should support thinking options", () => {
-      type Options = AnthropicChatModelProviderOptionsByName["claude-sonnet-4-20250514"];
-
-      
expectTypeOf<Options>().toExtend<AnthropicThinkingOptions>();
-      expectTypeOf<Options>().toExtend<AnthropicServiceTierOptions>();
-      expectTypeOf<Options>().toExtend<BaseOptions>();
-    });
-
-    it("claude-3-7-sonnet-20250219 should support thinking options", () => {
-      type Options = AnthropicChatModelProviderOptionsByName["claude-3-7-sonnet-20250219"];
-
-      expectTypeOf<Options>().toExtend<AnthropicThinkingOptions>();
-      expectTypeOf<Options>().toExtend<AnthropicServiceTierOptions>();
-      expectTypeOf<Options>().toExtend<BaseOptions>();
-    });
-
-    it("claude-opus-4-20250514 should support thinking options", () => {
-      type Options = AnthropicChatModelProviderOptionsByName["claude-opus-4-20250514"];
-
-      expectTypeOf<Options>().toExtend<AnthropicThinkingOptions>();
-      expectTypeOf<Options>().toExtend<AnthropicServiceTierOptions>();
-      expectTypeOf<Options>().toExtend<BaseOptions>();
-    });
-
-    it("claude-opus-4-5-20251101 should support thinking options", () => {
-      type Options = AnthropicChatModelProviderOptionsByName["claude-opus-4-5-20251101"];
-
-      expectTypeOf<Options>().toExtend<AnthropicThinkingOptions>();
-      expectTypeOf<Options>().toExtend<AnthropicServiceTierOptions>();
-      expectTypeOf<Options>().toExtend<BaseOptions>();
-
-      // Verify specific properties exist
-      expectTypeOf<Options>().toHaveProperty("thinking");
-      expectTypeOf<Options>().toHaveProperty("service_tier");
-      expectTypeOf<Options>().toHaveProperty("container");
-      expectTypeOf<Options>().toHaveProperty("context_management");
-      expectTypeOf<Options>().toHaveProperty("mcp_servers");
-      expectTypeOf<Options>().toHaveProperty("stop_sequences");
-      expectTypeOf<Options>().toHaveProperty("tool_choice");
-      expectTypeOf<Options>().toHaveProperty("top_k");
-    });
-  });
-
-  describe("Models WITHOUT extended_thinking support", () => {
-    it("claude-3-5-haiku-20241022 should NOT have thinking options but SHOULD have service tier", () => {
-      type Options = AnthropicChatModelProviderOptionsByName["claude-3-5-haiku-20241022"];
-
-      // Should NOT have thinking options
-      expectTypeOf<Options>().not.toExtend<AnthropicThinkingOptions>();
-
-      // Should have service tier options (priority_tier support)
-      expectTypeOf<Options>().toExtend<AnthropicServiceTierOptions>();
-
-      // Should have base options
-      expectTypeOf<Options>().toExtend<BaseOptions>();
-
-      // Verify service_tier exists but thinking does not
-      expectTypeOf<Options>().toHaveProperty("service_tier");
-      expectTypeOf<Options>().toHaveProperty("container");
-      expectTypeOf<Options>().toHaveProperty("context_management");
-      
expectTypeOf<Options>().toHaveProperty("mcp_servers");
-      expectTypeOf<Options>().toHaveProperty("stop_sequences");
-      expectTypeOf<Options>().toHaveProperty("tool_choice");
-      expectTypeOf<Options>().toHaveProperty("top_k");
-    });
-
-    it("claude-3-haiku-20240307 should NOT have thinking options AND NOT have service tier", () => {
-      type Options = AnthropicChatModelProviderOptionsByName["claude-3-haiku-20240307"];
-
-      // Should NOT have thinking options
-      expectTypeOf<Options>().not.toExtend<AnthropicThinkingOptions>();
-
-      // Should NOT have service tier options (no priority_tier support)
-      expectTypeOf<Options>().not.toExtend<AnthropicServiceTierOptions>();
-
-      // Should have base options
-      expectTypeOf<Options>().toExtend<BaseOptions>();
-
-      // Verify base properties exist
-      expectTypeOf<Options>().toHaveProperty("container");
-      expectTypeOf<Options>().toHaveProperty("context_management");
-      expectTypeOf<Options>().toHaveProperty("mcp_servers");
-      expectTypeOf<Options>().toHaveProperty("stop_sequences");
-      expectTypeOf<Options>().toHaveProperty("tool_choice");
-      expectTypeOf<Options>().toHaveProperty("top_k");
-    });
-  });
-
-  describe("Provider options type completeness", () => {
-    it("AnthropicChatModelProviderOptionsByName should have entries for all chat models", () => {
-      type Keys = keyof AnthropicChatModelProviderOptionsByName;
-
-      expectTypeOf<"claude-opus-4-5-20251101">().toExtend<Keys>();
-      expectTypeOf<"claude-sonnet-4-5-20250929">().toExtend<Keys>();
-      expectTypeOf<"claude-haiku-4-5-20251001">().toExtend<Keys>();
-      expectTypeOf<"claude-opus-4-1-20250805">().toExtend<Keys>();
-      expectTypeOf<"claude-sonnet-4-20250514">().toExtend<Keys>();
-      expectTypeOf<"claude-3-7-sonnet-20250219">().toExtend<Keys>();
-      expectTypeOf<"claude-opus-4-20250514">().toExtend<Keys>();
-      expectTypeOf<"claude-3-5-haiku-20241022">().toExtend<Keys>();
-      expectTypeOf<"claude-3-haiku-20240307">().toExtend<Keys>();
-    });
-  });
-
-  describe("Detailed property type assertions", () => {
-    it("all models should have container options", () => {
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-5-20251101"]>().toHaveProperty("container");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-sonnet-4-5-20250929"]>().toHaveProperty("container");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-haiku-4-5-20251001"]>().toHaveProperty("container");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-1-20250805"]>().toHaveProperty("container");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-sonnet-4-20250514"]>().toHaveProperty("container");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-7-sonnet-20250219"]>().toHaveProperty("container");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-20250514"]>().toHaveProperty("container");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-5-haiku-20241022"]>().toHaveProperty("container");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-haiku-20240307"]>().toHaveProperty("container");
-    });
-
-    it("all models should have context management options", () => {
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-5-20251101"]>().toHaveProperty("context_management");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-sonnet-4-5-20250929"]>().toHaveProperty("context_management");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-haiku-4-5-20251001"]>().toHaveProperty("context_management");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-1-20250805"]>().toHaveProperty("context_management");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-sonnet-4-20250514"]>().toHaveProperty("context_management");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-7-sonnet-20250219"]>().toHaveProperty("context_management");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-20250514"]>().toHaveProperty("context_management");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-5-haiku-20241022"]>().toHaveProperty("context_management");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-haiku-20240307"]>().toHaveProperty("context_management");
-    });
-
-    it("all models should have MCP options", () => {
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-5-20251101"]>().toHaveProperty("mcp_servers");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-sonnet-4-5-20250929"]>().toHaveProperty("mcp_servers");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-haiku-4-5-20251001"]>().toHaveProperty("mcp_servers");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-1-20250805"]>().toHaveProperty("mcp_servers");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-sonnet-4-20250514"]>().toHaveProperty("mcp_servers");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-7-sonnet-20250219"]>().toHaveProperty("mcp_servers");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-20250514"]>().toHaveProperty("mcp_servers");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-5-haiku-20241022"]>().toHaveProperty("mcp_servers");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-haiku-20240307"]>().toHaveProperty("mcp_servers");
-    });
-
-    it("all models should have stop sequences options", () => {
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-5-20251101"]>().toHaveProperty("stop_sequences");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-sonnet-4-5-20250929"]>().toHaveProperty("stop_sequences");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-haiku-4-5-20251001"]>().toHaveProperty("stop_sequences");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-1-20250805"]>().toHaveProperty("stop_sequences");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-sonnet-4-20250514"]>().toHaveProperty("stop_sequences");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-7-sonnet-20250219"]>().toHaveProperty("stop_sequences");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-20250514"]>().toHaveProperty("stop_sequences");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-5-haiku-20241022"]>().toHaveProperty("stop_sequences");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-haiku-20240307"]>().toHaveProperty("stop_sequences");
-    });
-
-    it("all models should have tool choice options", () => {
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-5-20251101"]>().toHaveProperty("tool_choice");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-sonnet-4-5-20250929"]>().toHaveProperty("tool_choice");
-      
expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-haiku-4-5-20251001"]>().toHaveProperty("tool_choice");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-1-20250805"]>().toHaveProperty("tool_choice");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-sonnet-4-20250514"]>().toHaveProperty("tool_choice");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-7-sonnet-20250219"]>().toHaveProperty("tool_choice");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-20250514"]>().toHaveProperty("tool_choice");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-5-haiku-20241022"]>().toHaveProperty("tool_choice");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-haiku-20240307"]>().toHaveProperty("tool_choice");
-    });
-
-    it("all models should have sampling options (top_k)", () => {
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-5-20251101"]>().toHaveProperty("top_k");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-sonnet-4-5-20250929"]>().toHaveProperty("top_k");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-haiku-4-5-20251001"]>().toHaveProperty("top_k");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-1-20250805"]>().toHaveProperty("top_k");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-sonnet-4-20250514"]>().toHaveProperty("top_k");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-7-sonnet-20250219"]>().toHaveProperty("top_k");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-20250514"]>().toHaveProperty("top_k");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-5-haiku-20241022"]>().toHaveProperty("top_k");
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-haiku-20240307"]>().toHaveProperty("top_k");
-    });
-  });
-
-  describe("Type discrimination between model categories", () => {
-    it("models with extended_thinking should extend AnthropicThinkingOptions", () => {
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-5-20251101"]>().toExtend<AnthropicThinkingOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-sonnet-4-5-20250929"]>().toExtend<AnthropicThinkingOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-haiku-4-5-20251001"]>().toExtend<AnthropicThinkingOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-1-20250805"]>().toExtend<AnthropicThinkingOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-sonnet-4-20250514"]>().toExtend<AnthropicThinkingOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-7-sonnet-20250219"]>().toExtend<AnthropicThinkingOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-20250514"]>().toExtend<AnthropicThinkingOptions>();
-    });
-
-    it("models without extended_thinking should NOT extend AnthropicThinkingOptions", () => {
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-5-haiku-20241022"]>().not.toExtend<AnthropicThinkingOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-haiku-20240307"]>().not.toExtend<AnthropicThinkingOptions>();
-    });
-
-    it("models with priority_tier should extend AnthropicServiceTierOptions", () => {
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-5-20251101"]>().toExtend<AnthropicServiceTierOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-sonnet-4-5-20250929"]>().toExtend<AnthropicServiceTierOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-haiku-4-5-20251001"]>().toExtend<AnthropicServiceTierOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-1-20250805"]>().toExtend<AnthropicServiceTierOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-sonnet-4-20250514"]>().toExtend<AnthropicServiceTierOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-7-sonnet-20250219"]>().toExtend<AnthropicServiceTierOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-20250514"]>().toExtend<AnthropicServiceTierOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-5-haiku-20241022"]>().toExtend<AnthropicServiceTierOptions>();
-    });
-
-    it("models without priority_tier should NOT extend AnthropicServiceTierOptions", () => {
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-haiku-20240307"]>().not.toExtend<AnthropicServiceTierOptions>();
-    });
-
-    it("all models should extend base options", () => {
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-5-20251101"]>().toExtend<BaseOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-sonnet-4-5-20250929"]>().toExtend<BaseOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-haiku-4-5-20251001"]>().toExtend<BaseOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-1-20250805"]>().toExtend<BaseOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-sonnet-4-20250514"]>().toExtend<BaseOptions>();
-      
expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-7-sonnet-20250219"]>().toExtend<BaseOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-opus-4-20250514"]>().toExtend<BaseOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-5-haiku-20241022"]>().toExtend<BaseOptions>();
-      expectTypeOf<AnthropicChatModelProviderOptionsByName["claude-3-haiku-20240307"]>().toExtend<BaseOptions>();
-    });
-  });
-});
+import { describe, it, expectTypeOf } from 'vitest'
+import type { AnthropicChatModelProviderOptionsByName } from '../src/model-meta'
+import type {
+  AnthropicContainerOptions,
+  AnthropicContextManagementOptions,
+  AnthropicMCPOptions,
+  AnthropicServiceTierOptions,
+  AnthropicStopSequencesOptions,
+  AnthropicThinkingOptions,
+  AnthropicToolChoiceOptions,
+  AnthropicSamplingOptions,
+} from '../src/text/text-provider-options'
+
+/**
+ * Type assertion tests for Anthropic model provider options.
+ *
+ * These tests verify that:
+ * 1. Models with extended_thinking support have AnthropicThinkingOptions in their provider options
+ * 2. Models without extended_thinking support do NOT have AnthropicThinkingOptions
+ * 3. Models with priority_tier support have AnthropicServiceTierOptions in their provider options
+ * 4. Models without priority_tier support do NOT have AnthropicServiceTierOptions
+ * 5. 
All models have base options (container, context management, MCP, stop sequences, tool choice, sampling)
+ */
+
+// Base options that ALL chat models should have
+type BaseOptions = AnthropicContainerOptions &
+  AnthropicContextManagementOptions &
+  AnthropicMCPOptions &
+  AnthropicStopSequencesOptions &
+  AnthropicToolChoiceOptions &
+  AnthropicSamplingOptions
+
+describe('Anthropic Model Provider Options Type Assertions', () => {
+  describe('Models WITH extended_thinking support', () => {
+    it('claude-sonnet-4-5-20250929 should support thinking options', () => {
+      type Options =
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-5-20250929']
+
+      // Should have thinking options
+      expectTypeOf<Options>().toExtend<AnthropicThinkingOptions>()
+
+      // Should have service tier options (priority_tier support)
+      expectTypeOf<Options>().toExtend<AnthropicServiceTierOptions>()
+
+      // Should have base options
+      expectTypeOf<Options>().toExtend<BaseOptions>()
+
+      // Verify specific properties exist
+      expectTypeOf<Options>().toHaveProperty('thinking')
+      expectTypeOf<Options>().toHaveProperty('service_tier')
+      expectTypeOf<Options>().toHaveProperty('container')
+      expectTypeOf<Options>().toHaveProperty('context_management')
+      expectTypeOf<Options>().toHaveProperty('mcp_servers')
+      expectTypeOf<Options>().toHaveProperty('stop_sequences')
+      expectTypeOf<Options>().toHaveProperty('tool_choice')
+      expectTypeOf<Options>().toHaveProperty('top_k')
+    })
+
+    it('claude-haiku-4-5-20251001 should support thinking options', () => {
+      type Options =
+        AnthropicChatModelProviderOptionsByName['claude-haiku-4-5-20251001']
+
+      expectTypeOf<Options>().toExtend<AnthropicThinkingOptions>()
+      expectTypeOf<Options>().toExtend<AnthropicServiceTierOptions>()
+      expectTypeOf<Options>().toExtend<BaseOptions>()
+    })
+
+    it('claude-opus-4-1-20250805 should support thinking options', () => {
+      type Options =
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-1-20250805']
+
+      expectTypeOf<Options>().toExtend<AnthropicThinkingOptions>()
+      expectTypeOf<Options>().toExtend<AnthropicServiceTierOptions>()
+      expectTypeOf<Options>().toExtend<BaseOptions>()
+    })
+
+    it('claude-sonnet-4-20250514 should support thinking options', () => {
+      type Options =
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-20250514']
+
+      expectTypeOf<Options>().toExtend<AnthropicThinkingOptions>()
+      
expectTypeOf<Options>().toExtend<AnthropicServiceTierOptions>()
+      expectTypeOf<Options>().toExtend<BaseOptions>()
+    })
+
+    it('claude-3-7-sonnet-20250219 should support thinking options', () => {
+      type Options =
+        AnthropicChatModelProviderOptionsByName['claude-3-7-sonnet-20250219']
+
+      expectTypeOf<Options>().toExtend<AnthropicThinkingOptions>()
+      expectTypeOf<Options>().toExtend<AnthropicServiceTierOptions>()
+      expectTypeOf<Options>().toExtend<BaseOptions>()
+    })
+
+    it('claude-opus-4-20250514 should support thinking options', () => {
+      type Options =
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-20250514']
+
+      expectTypeOf<Options>().toExtend<AnthropicThinkingOptions>()
+      expectTypeOf<Options>().toExtend<AnthropicServiceTierOptions>()
+      expectTypeOf<Options>().toExtend<BaseOptions>()
+    })
+
+    it('claude-opus-4-5-20251101 should support thinking options', () => {
+      type Options =
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-5-20251101']
+
+      expectTypeOf<Options>().toExtend<AnthropicThinkingOptions>()
+      expectTypeOf<Options>().toExtend<AnthropicServiceTierOptions>()
+      expectTypeOf<Options>().toExtend<BaseOptions>()
+
+      // Verify specific properties exist
+      expectTypeOf<Options>().toHaveProperty('thinking')
+      expectTypeOf<Options>().toHaveProperty('service_tier')
+      expectTypeOf<Options>().toHaveProperty('container')
+      expectTypeOf<Options>().toHaveProperty('context_management')
+      expectTypeOf<Options>().toHaveProperty('mcp_servers')
+      expectTypeOf<Options>().toHaveProperty('stop_sequences')
+      expectTypeOf<Options>().toHaveProperty('tool_choice')
+      expectTypeOf<Options>().toHaveProperty('top_k')
+    })
+  })
+
+  describe('Models WITHOUT extended_thinking support', () => {
+    it('claude-3-5-haiku-20241022 should NOT have thinking options but SHOULD have service tier', () => {
+      type Options =
+        AnthropicChatModelProviderOptionsByName['claude-3-5-haiku-20241022']
+
+      // Should NOT have thinking options
+      expectTypeOf<Options>().not.toExtend<AnthropicThinkingOptions>()
+
+      // Should have service tier options (priority_tier support)
+      expectTypeOf<Options>().toExtend<AnthropicServiceTierOptions>()
+
+      // Should have base options
+      expectTypeOf<Options>().toExtend<BaseOptions>()
+
+      // Verify service_tier exists but thinking does not
+      expectTypeOf<Options>().toHaveProperty('service_tier')
+      expectTypeOf<Options>().toHaveProperty('container')
+      expectTypeOf<Options>().toHaveProperty('context_management')
+      expectTypeOf<Options>().toHaveProperty('mcp_servers')
+      
expectTypeOf<Options>().toHaveProperty('stop_sequences')
+      expectTypeOf<Options>().toHaveProperty('tool_choice')
+      expectTypeOf<Options>().toHaveProperty('top_k')
+    })
+
+    it('claude-3-haiku-20240307 should NOT have thinking options AND NOT have service tier', () => {
+      type Options =
+        AnthropicChatModelProviderOptionsByName['claude-3-haiku-20240307']
+
+      // Should NOT have thinking options
+      expectTypeOf<Options>().not.toExtend<AnthropicThinkingOptions>()
+
+      // Should NOT have service tier options (no priority_tier support)
+      expectTypeOf<Options>().not.toExtend<AnthropicServiceTierOptions>()
+
+      // Should have base options
+      expectTypeOf<Options>().toExtend<BaseOptions>()
+
+      // Verify base properties exist
+      expectTypeOf<Options>().toHaveProperty('container')
+      expectTypeOf<Options>().toHaveProperty('context_management')
+      expectTypeOf<Options>().toHaveProperty('mcp_servers')
+      expectTypeOf<Options>().toHaveProperty('stop_sequences')
+      expectTypeOf<Options>().toHaveProperty('tool_choice')
+      expectTypeOf<Options>().toHaveProperty('top_k')
+    })
+  })
+
+  describe('Provider options type completeness', () => {
+    it('AnthropicChatModelProviderOptionsByName should have entries for all chat models', () => {
+      type Keys = keyof AnthropicChatModelProviderOptionsByName
+
+      expectTypeOf<'claude-opus-4-5-20251101'>().toExtend<Keys>()
+      expectTypeOf<'claude-sonnet-4-5-20250929'>().toExtend<Keys>()
+      expectTypeOf<'claude-haiku-4-5-20251001'>().toExtend<Keys>()
+      expectTypeOf<'claude-opus-4-1-20250805'>().toExtend<Keys>()
+      expectTypeOf<'claude-sonnet-4-20250514'>().toExtend<Keys>()
+      expectTypeOf<'claude-3-7-sonnet-20250219'>().toExtend<Keys>()
+      expectTypeOf<'claude-opus-4-20250514'>().toExtend<Keys>()
+      expectTypeOf<'claude-3-5-haiku-20241022'>().toExtend<Keys>()
+      expectTypeOf<'claude-3-haiku-20240307'>().toExtend<Keys>()
+    })
+  })
+
+  describe('Detailed property type assertions', () => {
+    it('all models should have container options', () => {
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-5-20251101']
+      >().toHaveProperty('container')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-5-20250929']
+      >().toHaveProperty('container')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-haiku-4-5-20251001']
+      >().toHaveProperty('container')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-1-20250805']
+      >().toHaveProperty('container')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-20250514']
+      >().toHaveProperty('container')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-7-sonnet-20250219']
+      >().toHaveProperty('container')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-20250514']
+      >().toHaveProperty('container')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-5-haiku-20241022']
+      >().toHaveProperty('container')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-haiku-20240307']
+      >().toHaveProperty('container')
+    })
+
+    it('all models should have context management options', () => {
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-5-20251101']
+      >().toHaveProperty('context_management')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-5-20250929']
+      >().toHaveProperty('context_management')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-haiku-4-5-20251001']
+      >().toHaveProperty('context_management')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-1-20250805']
+      >().toHaveProperty('context_management')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-20250514']
+      >().toHaveProperty('context_management')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-7-sonnet-20250219']
+      >().toHaveProperty('context_management')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-20250514']
+      >().toHaveProperty('context_management')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-5-haiku-20241022']
+      >().toHaveProperty('context_management')
+      expectTypeOf<
+
        AnthropicChatModelProviderOptionsByName['claude-3-haiku-20240307']
+      >().toHaveProperty('context_management')
+    })
+
+    it('all models should have MCP options', () => {
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-5-20251101']
+      >().toHaveProperty('mcp_servers')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-5-20250929']
+      >().toHaveProperty('mcp_servers')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-haiku-4-5-20251001']
+      >().toHaveProperty('mcp_servers')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-1-20250805']
+      >().toHaveProperty('mcp_servers')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-20250514']
+      >().toHaveProperty('mcp_servers')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-7-sonnet-20250219']
+      >().toHaveProperty('mcp_servers')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-20250514']
+      >().toHaveProperty('mcp_servers')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-5-haiku-20241022']
+      >().toHaveProperty('mcp_servers')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-haiku-20240307']
+      >().toHaveProperty('mcp_servers')
+    })
+
+    it('all models should have stop sequences options', () => {
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-5-20251101']
+      >().toHaveProperty('stop_sequences')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-5-20250929']
+      >().toHaveProperty('stop_sequences')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-haiku-4-5-20251001']
+      >().toHaveProperty('stop_sequences')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-1-20250805']
+      >().toHaveProperty('stop_sequences')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-20250514']
+      >().toHaveProperty('stop_sequences')
+      expectTypeOf<
+
        AnthropicChatModelProviderOptionsByName['claude-3-7-sonnet-20250219']
+      >().toHaveProperty('stop_sequences')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-20250514']
+      >().toHaveProperty('stop_sequences')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-5-haiku-20241022']
+      >().toHaveProperty('stop_sequences')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-haiku-20240307']
+      >().toHaveProperty('stop_sequences')
+    })
+
+    it('all models should have tool choice options', () => {
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-5-20251101']
+      >().toHaveProperty('tool_choice')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-5-20250929']
+      >().toHaveProperty('tool_choice')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-haiku-4-5-20251001']
+      >().toHaveProperty('tool_choice')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-1-20250805']
+      >().toHaveProperty('tool_choice')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-20250514']
+      >().toHaveProperty('tool_choice')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-7-sonnet-20250219']
+      >().toHaveProperty('tool_choice')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-20250514']
+      >().toHaveProperty('tool_choice')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-5-haiku-20241022']
+      >().toHaveProperty('tool_choice')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-haiku-20240307']
+      >().toHaveProperty('tool_choice')
+    })
+
+    it('all models should have sampling options (top_k)', () => {
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-5-20251101']
+      >().toHaveProperty('top_k')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-5-20250929']
+      >().toHaveProperty('top_k')
+      expectTypeOf<
+
        AnthropicChatModelProviderOptionsByName['claude-haiku-4-5-20251001']
+      >().toHaveProperty('top_k')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-1-20250805']
+      >().toHaveProperty('top_k')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-20250514']
+      >().toHaveProperty('top_k')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-7-sonnet-20250219']
+      >().toHaveProperty('top_k')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-20250514']
+      >().toHaveProperty('top_k')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-5-haiku-20241022']
+      >().toHaveProperty('top_k')
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-haiku-20240307']
+      >().toHaveProperty('top_k')
+    })
+  })
+
+  describe('Type discrimination between model categories', () => {
+    it('models with extended_thinking should extend AnthropicThinkingOptions', () => {
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-5-20251101']
+      >().toExtend<AnthropicThinkingOptions>()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-5-20250929']
+      >().toExtend<AnthropicThinkingOptions>()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-haiku-4-5-20251001']
+      >().toExtend<AnthropicThinkingOptions>()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-1-20250805']
+      >().toExtend<AnthropicThinkingOptions>()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-20250514']
+      >().toExtend<AnthropicThinkingOptions>()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-7-sonnet-20250219']
+      >().toExtend<AnthropicThinkingOptions>()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-20250514']
+      >().toExtend<AnthropicThinkingOptions>()
+    })
+
+    it('models without extended_thinking should NOT extend AnthropicThinkingOptions', () => {
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-5-haiku-20241022']
+      >().not.toExtend<AnthropicThinkingOptions>()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-haiku-20240307']
+      >().not.toExtend<AnthropicThinkingOptions>()
+
    })
+
+    it('models with priority_tier should extend AnthropicServiceTierOptions', () => {
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-5-20251101']
+      >().toExtend<AnthropicServiceTierOptions>()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-5-20250929']
+      >().toExtend<AnthropicServiceTierOptions>()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-haiku-4-5-20251001']
+      >().toExtend<AnthropicServiceTierOptions>()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-1-20250805']
+      >().toExtend<AnthropicServiceTierOptions>()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-20250514']
+      >().toExtend<AnthropicServiceTierOptions>()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-7-sonnet-20250219']
+      >().toExtend<AnthropicServiceTierOptions>()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-20250514']
+      >().toExtend<AnthropicServiceTierOptions>()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-5-haiku-20241022']
+      >().toExtend<AnthropicServiceTierOptions>()
+    })
+
+    it('models without priority_tier should NOT extend AnthropicServiceTierOptions', () => {
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-haiku-20240307']
+      >().not.toExtend<AnthropicServiceTierOptions>()
+    })
+
+    it('all models should extend base options', () => {
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-5-20251101']
+      >().toExtend()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-5-20250929']
+      >().toExtend()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-haiku-4-5-20251001']
+      >().toExtend()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-1-20250805']
+      >().toExtend()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-sonnet-4-20250514']
+      >().toExtend()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-7-sonnet-20250219']
+      >().toExtend()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-opus-4-20250514']
+      >().toExtend()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-5-haiku-20241022']
+
      >().toExtend()
+      expectTypeOf<
+        AnthropicChatModelProviderOptionsByName['claude-3-haiku-20240307']
+      >().toExtend()
+    })
+  })
+})
diff --git a/packages/typescript/ai-anthropic/tsconfig.json b/packages/typescript/ai-anthropic/tsconfig.json
index 204ca8d3f..e5e872741 100644
--- a/packages/typescript/ai-anthropic/tsconfig.json
+++ b/packages/typescript/ai-anthropic/tsconfig.json
@@ -1,10 +1,8 @@
 {
   "extends": "../../../tsconfig.json",
   "compilerOptions": {
-    "outDir": "dist",
-    "rootDir": "src"
+    "outDir": "dist"
   },
-  "include": ["src/**/*.ts", "src/**/*.tsx"],
-  "exclude": ["node_modules", "dist", "**/*.config.ts"],
-  "references": [{ "path": "../ai" }]
+  "include": ["vite.config.ts", "./src"],
+  "exclude": ["node_modules", "dist", "**/*.config.ts"]
 }
diff --git a/packages/typescript/ai-anthropic/tsdown.config.ts b/packages/typescript/ai-anthropic/tsdown.config.ts
deleted file mode 100644
index c6316dbdc..000000000
--- a/packages/typescript/ai-anthropic/tsdown.config.ts
+++ /dev/null
@@ -1,12 +0,0 @@
-import { defineConfig } from "tsdown";
-
-export default defineConfig({
-  entry: ["./src/index.ts"],
-  format: ["esm"],
-  unbundle: true,
-  dts: true,
-  sourcemap: true,
-  clean: true,
-  minify: false,
-  external: ["@anthropic-ai/sdk"],
-});
diff --git a/packages/typescript/ai-anthropic/vite.config.ts b/packages/typescript/ai-anthropic/vite.config.ts
new file mode 100644
index 000000000..5f9b720cb
--- /dev/null
+++ b/packages/typescript/ai-anthropic/vite.config.ts
@@ -0,0 +1,36 @@
+import { defineConfig, mergeConfig } from 'vitest/config'
+import { tanstackViteConfig } from '@tanstack/config/vite'
+import packageJson from './package.json'
+
+const config = defineConfig({
+  test: {
+    name: packageJson.name,
+    dir: './',
+    watch: false,
+
+    globals: true,
+    environment: 'node',
+    include: ['tests/**/*.test.ts'],
+    coverage: {
+      provider: 'v8',
+      reporter: ['text', 'json', 'html', 'lcov'],
+      exclude: [
+        'node_modules/',
+        'dist/',
+        'tests/',
+        '**/*.test.ts',
+        '**/*.config.ts',
+        '**/types.ts',
+      ],
+      include: ['src/**/*.ts'],
+    },
+  },
+})
+
+export default mergeConfig(
+  config,
+  tanstackViteConfig({
+    entry: ['./src/index.ts'],
+    srcDir: './src',
+  }),
+)
diff --git a/packages/typescript/ai-anthropic/vitest.config.ts b/packages/typescript/ai-anthropic/vitest.config.ts
deleted file mode 100644
index 8fa8bfb9e..000000000
--- a/packages/typescript/ai-anthropic/vitest.config.ts
+++ /dev/null
@@ -1,22 +0,0 @@
-import { defineConfig } from "vitest/config";
-
-export default defineConfig({
-  test: {
-    globals: true,
-    environment: "node",
-    include: ["tests/**/*.test.ts"],
-    coverage: {
-      provider: "v8",
-      reporter: ["text", "json", "html", "lcov"],
-      exclude: [
-        "node_modules/",
-        "dist/",
-        "tests/",
-        "**/*.test.ts",
-        "**/*.config.ts",
-        "**/types.ts",
-      ],
-      include: ["src/**/*.ts"],
-    },
-  },
-});
diff --git a/packages/typescript/ai-client/ARCHITECTURE.md b/packages/typescript/ai-client/ARCHITECTURE.md
index dc96f2eb3..1a0e48eb8 100644
--- a/packages/typescript/ai-client/ARCHITECTURE.md
+++ b/packages/typescript/ai-client/ARCHITECTURE.md
@@ -73,12 +73,13 @@ interface ConnectionAdapter {
   connect(
     messages: UIMessage[] | ModelMessage[],
     data?: Record,
-    abortSignal?: AbortSignal
-  ): AsyncIterable;
+    abortSignal?: AbortSignal,
+  ): AsyncIterable
 }
 ```
 
 This design works with:
+
 - Async generators
 - Any object with `[Symbol.asyncIterator]`
 - Fetch API Response bodies (via connection adapters)
@@ -266,8 +267,8 @@ interface ConnectionAdapter {
   connect(
     messages: any[],
     data?: Record,
-    abortSignal?: AbortSignal // Abort signal from ChatClient for cancellation
-  ): AsyncIterable;
+    abortSignal?: AbortSignal, // Abort signal from ChatClient for cancellation
+  ): AsyncIterable
 }
 ```
@@ -417,16 +418,16 @@ class MyParser implements StreamParser {
 function createWebSocketAdapter(url: string): ConnectionAdapter {
   return {
     async *connect(messages, data, abortSignal) {
-      const ws = new WebSocket(url);
-
+      const ws = new 
WebSocket(url)
+
       if (abortSignal) {
-        abortSignal.addEventListener("abort", () => ws.close());
+        abortSignal.addEventListener('abort', () => ws.close())
       }
-
+
       // Yield chunks as they arrive
       // ...
     },
-  };
+  }
 }
 ```
@@ -435,13 +436,23 @@ function createWebSocketAdapter(url: string): ConnectionAdapter {
 ```typescript
 const processor = new StreamProcessor({
   handlers: {
-    onTextUpdate: (content) => { /* ... */ },
-    onToolCallStart: (idx, id, name) => { /* ... */ },
-    onToolCallDelta: (idx, args) => { /* ... */ },
-    onToolCallComplete: (idx, id, name, args) => { /* ... */ },
-    onStreamEnd: (content, toolCalls) => { /* ... */ },
+    onTextUpdate: (content) => {
+      /* ... */
+    },
+    onToolCallStart: (idx, id, name) => {
+      /* ... */
+    },
+    onToolCallDelta: (idx, args) => {
+      /* ... */
+    },
+    onToolCallComplete: (idx, id, name, args) => {
+      /* ... */
+    },
+    onStreamEnd: (content, toolCalls) => {
+      /* ... */
+    },
   },
-});
+})
 ```
 
 ## Performance Considerations
@@ -578,4 +589,3 @@ StreamParser catches
 āœ… **Performance** - Efficient state management and updates
 āœ… **Type Safety** - Full TypeScript support
 āœ… **Developer Experience** - Simple by default, powerful when needed
-
diff --git a/packages/typescript/ai-client/README.md b/packages/typescript/ai-client/README.md
index ebcc41dc7..7c4143074 100644
--- a/packages/typescript/ai-client/README.md
+++ b/packages/typescript/ai-client/README.md
@@ -1,701 +1,104 @@
-# @tanstack/ai-client
-
-Framework-agnostic headless client for TanStack AI chat functionality.
-
-## Overview
-
-`@tanstack/ai-client` provides a headless `ChatClient` class that manages chat state and streaming AI interactions without any framework dependencies. This makes it ideal for:
-
-- Building custom framework integrations
-- Server-side usage
-- Testing and automation
-- Any JavaScript/TypeScript environment
-
-**Note:** The backend should use `@tanstack/ai`'s `chat()` method which **automatically handles tool execution in a loop**. 
The client receives tool execution events via the stream.
-
-## Installation
-
-```bash
-pnpm add @tanstack/ai-client
-# or
-npm install @tanstack/ai-client
-# or
-yarn add @tanstack/ai-client
-```
-
-## Basic Usage
-
-```typescript
-import { ChatClient, fetchServerSentEvents } from "@tanstack/ai-client";
-
-// Create a client instance
-const client = new ChatClient({
-  connection: fetchServerSentEvents("/api/chat"),
-  onMessagesChange: (messages) => {
-    console.log("Messages updated:", messages);
-  },
-  onLoadingChange: (isLoading) => {
-    console.log("Loading state:", isLoading);
-  },
-  onErrorChange: (error) => {
-    console.log("Error:", error);
-  },
-});
-
-// Send a message
-await client.sendMessage("Hello, AI!");
-
-// Get current messages
-const messages = client.getMessages();
-
-// Append a message manually
-await client.append({
-  role: "user",
-  content: "Another message",
-});
-
-// Reload the last response
-await client.reload();
-
-// Stop the current response
-client.stop();
-
-// Clear all messages
-client.clear();
-```
-
-## Connection Adapters
-
-Connection adapters provide a flexible way to connect to different types of streaming backends.
-
-### `fetchServerSentEvents(url, options?)`
-
-For Server-Sent Events (SSE) format - the standard for HTTP streaming:
-
-```typescript
-import { ChatClient, fetchServerSentEvents } from "@tanstack/ai-client";
-
-const client = new ChatClient({
-  connection: fetchServerSentEvents("/api/chat", {
-    headers: {
-      "Authorization": "Bearer token",
-      "X-Custom-Header": "value"
-    },
-    credentials: "include", // "omit" | "same-origin" | "include"
-  }),
-});
-
-await client.sendMessage("Hello!");
-```
-
-**Use when:** Your backend uses `toStreamResponse()` from `@tanstack/ai`
-
-**Format expected:** Server-Sent Events with `data:` prefix
-```
-data: {"type":"content","delta":"Hello","content":"Hello",...}
-data: {"type":"content","delta":" world","content":"Hello world",...}
-data: {"type":"done","finishReason":"stop",...}
-data: [DONE]
-```
-
-### `fetchHttpStream(url, options?)`
-
-For raw HTTP streaming with newline-delimited JSON:
-
-```typescript
-import { ChatClient, fetchHttpStream } from "@tanstack/ai-client";
-
-const client = new ChatClient({
-  connection: fetchHttpStream("/api/chat", {
-    headers: { "Authorization": "Bearer token" }
-  }),
-});
-
-await client.sendMessage("Hello!");
-```
-
-**Use when:** Your backend streams newline-delimited JSON directly
-
-**Format expected:** Newline-delimited JSON
-```
-{"type":"content","delta":"Hello","content":"Hello",...}
-{"type":"content","delta":" world","content":"Hello world",...}
-{"type":"done","finishReason":"stop",...}
-```
-
-### `stream(factory)`
-
-For direct async iterables - use with server functions or in-memory streams:
-
-```typescript
-import { ChatClient, stream } from "@tanstack/ai-client";
-import { chat } from "@tanstack/ai";
-import { openai } from "@tanstack/ai-openai";
-
-const client = new ChatClient({
-  connection: stream((messages, data) => {
-    // Return an async iterable directly
-    return chat({
-      adapter: openai(),
-      model: "gpt-4o",
-      messages,
-    });
-  }),
-});
-
-await 
client.sendMessage("Hello!");
-```
-
-**Use when:**
-- TanStack Start server functions
-- Direct access to streaming functions
-- Testing with mock streams
-
-**Benefits:**
-- āœ… No HTTP overhead
-- āœ… Perfect for server components
-- āœ… Easy to test with mocks
-
-### Custom Adapters
-
-You can create custom connection adapters for special scenarios:
-
-```typescript
-import type { ConnectionAdapter } from "@tanstack/ai-client";
-
-// Example: WebSocket connection adapter
-function createWebSocketAdapter(url: string): ConnectionAdapter {
-  return {
-    async *connect(messages, data, abortSignal) {
-      const ws = new WebSocket(url);
-
-      // Handle abort signal
-      if (abortSignal) {
-        abortSignal.addEventListener("abort", () => {
-          ws.close();
-        });
-      }
-
-      return new Promise((resolve, reject) => {
-        ws.onopen = () => {
-          ws.send(JSON.stringify({ messages, data }));
-        };
-
-        ws.onmessage = (event) => {
-          // Check if aborted before processing
-          if (abortSignal?.aborted) {
-            ws.close();
-            return;
-          }
-
-          const chunk = JSON.parse(event.data);
-          // Yield chunks as they arrive
-        };
-
-        ws.onerror = (error) => reject(error);
-        ws.onclose = () => resolve();
-      });
-    },
-  };
-}
-
-// Use it
-const client = new ChatClient({
-  connection: createWebSocketAdapter("wss://api.example.com/chat"),
-});
-```
-
-## Stream Processor
-
-The stream processor provides configurable text chunking strategies to control UI update frequency and improve user experience.
-
-### Default Behavior
-
-By default, `ChatClient` uses immediate chunking (every chunk updates the UI):
-
-```typescript
-import { ChatClient, fetchServerSentEvents } from "@tanstack/ai-client";
-
-const client = new ChatClient({
-  connection: fetchServerSentEvents("/api/chat"),
-  onMessagesChange: (messages) => {
-    console.log("Messages updated:", messages);
-  },
-});
-
-await client.sendMessage("Hello!");
-```
-
-### Using Chunk Strategies
-
-#### Punctuation Strategy
-
-Update the UI only when punctuation is encountered (smoother for reading):
-
-```typescript
-import {
-  ChatClient,
-  fetchServerSentEvents,
-  PunctuationStrategy,
-} from "@tanstack/ai-client";
-
-const client = new ChatClient({
-  connection: fetchServerSentEvents("/api/chat"),
-  streamProcessor: {
-    chunkStrategy: new PunctuationStrategy(),
-  },
-  onMessagesChange: (messages) => {
-    console.log("Messages updated:", messages);
-  },
-});
-
-await client.sendMessage("Tell me a story.");
-```
-
-#### Batch Strategy
-
-Update the UI every N chunks (reduces update frequency):
-
-```typescript
-import {
-  ChatClient,
-  fetchServerSentEvents,
-  BatchStrategy,
-} from "@tanstack/ai-client";
-
-const client = new ChatClient({
-  connection: fetchServerSentEvents("/api/chat"),
-  streamProcessor: {
-    chunkStrategy: new BatchStrategy(10), // Update every 10 chunks
-  },
-  onMessagesChange: (messages) => {
-    console.log("Messages updated:", messages);
-  },
-});
-
-await client.sendMessage("Explain quantum physics.");
-```
-
-#### Combining Strategies
-
-Use `CompositeStrategy` to combine multiple strategies (OR logic):
-
-```typescript
-import {
-  ChatClient,
-  fetchServerSentEvents,
-  CompositeStrategy,
-  PunctuationStrategy,
-  BatchStrategy,
-} from "@tanstack/ai-client";
-
-const client = new ChatClient({
-  connection: fetchServerSentEvents("/api/chat"),
-  streamProcessor: {
-    chunkStrategy: new CompositeStrategy([
-      new PunctuationStrategy(), // Update on punctuation
-      new BatchStrategy(20), // OR every 
20 chunks
-    ]),
-  },
-  onMessagesChange: (messages) => {
-    console.log("Messages updated:", messages);
-  },
-});
-```
-
-#### Custom Chunk Strategy
-
-Create your own strategy for fine-grained control:
-
-```typescript
-import {
-  ChatClient,
-  fetchServerSentEvents,
-  type ChunkStrategy,
-} from "@tanstack/ai-client";
-
-class CustomStrategy implements ChunkStrategy {
-  private wordCount = 0;
-
-  shouldEmit(chunk: string, accumulated: string): boolean {
-    // Count words in the chunk
-    const words = chunk.split(/\s+/).filter((w) => w.length > 0);
-    this.wordCount += words.length;
-
-    // Emit every 5 words
-    if (this.wordCount >= 5) {
-      this.wordCount = 0;
-      return true;
-    }
-    return false;
-  }
-
-  reset(): void {
-    this.wordCount = 0;
-  }
-}
-
-const client = new ChatClient({
-  connection: fetchServerSentEvents("/api/chat"),
-  streamProcessor: {
-    chunkStrategy: new CustomStrategy(),
-  },
-  onMessagesChange: (messages) => {
-    console.log("Messages updated:", messages);
-  },
-});
-```
-
-### Built-in Strategies
-
-| Strategy                | When it Emits                    | Best For                  |
-| ----------------------- | -------------------------------- | ------------------------- |
-| `ImmediateStrategy`     | Every chunk                      | Default, real-time feel   |
-| `PunctuationStrategy`   | When chunk contains `. , ! ? 
; :` | Natural reading flow |
-| `BatchStrategy(N)`      | Every N chunks                   | Reducing update frequency |
-| `WordBoundaryStrategy`  | When chunk ends with whitespace  | Preventing word cuts      |
-| `DebounceStrategy(ms)`  | After ms of silence              | High-frequency streams    |
-| `CompositeStrategy([])` | When ANY sub-strategy emits (OR) | Combining multiple rules  |
-| Custom `ChunkStrategy`  | Your custom `shouldEmit()` logic | Fine-grained control      |
-
-### Parallel Tool Calls
-
-The stream processor automatically handles multiple parallel tool calls:
-
-```typescript
-import { ChatClient, fetchServerSentEvents } from "@tanstack/ai-client";
-
-const client = new ChatClient({
-  connection: fetchServerSentEvents("/api/chat"),
-  streamProcessor: {
-    // Use any chunk strategy
-  },
-  onMessagesChange: (messages) => {
-    const lastMessage = messages[messages.length - 1];
-    if (lastMessage.toolCalls) {
-      console.log("Tool calls in progress:", lastMessage.toolCalls);
-      // Can have multiple tool calls streaming simultaneously!
-    }
-  },
-});
-
-await client.sendMessage("Get weather in Paris and Tokyo");
-```
-
-### Custom Stream Parser
-
-For handling non-standard stream formats:
-
-```typescript
-import {
-  ChatClient,
-  stream,
-  type StreamParser,
-  type StreamChunk,
-} from "@tanstack/ai-client";
-
-class CustomParser implements StreamParser {
-  async *parse(source: AsyncIterable): AsyncIterable {
-    for await (const chunk of source) {
-      // Custom parsing logic for your stream format
-      if (chunk.message) {
-        yield {
-          type: "text",
-          content: chunk.message,
-        };
-      }
-
-      if (chunk.tool) {
-        yield {
-          type: "tool-call-delta",
-          toolCallIndex: chunk.tool.index,
-          toolCall: {
-            id: chunk.tool.id,
-            function: {
-              name: chunk.tool.name,
-              arguments: chunk.tool.args,
-            },
-          },
-        };
-      }
-    }
-  }
-}
-
-const client = new ChatClient({
-  connection: stream(async (messages) => {
-    // Your custom stream source
-    return customStreamGenerator(messages);
-  }),
-  streamProcessor: {
-    parser: new CustomParser(),
-  },
-  onMessagesChange: (messages) => {
-    console.log("Messages updated:", messages);
-  },
-});
-```
-
-## Working with Streams Directly
-
-Connection adapters return async iterables of `StreamChunk` objects, which you can iterate over directly if needed:
-
-```typescript
-import type { StreamChunk } from '@tanstack/ai';
-import { fetchServerSentEvents } from '@tanstack/ai-client';
-
-const connection = fetchServerSentEvents('/api/chat');
-
-// Get the stream directly
-const stream = connection.connect(messages, data);
-
-// Iterate over chunks
-for await (const chunk of stream) {
-  if (chunk.type === 'content') {
-    console.log('Content:', chunk.content);
-  } else if (chunk.type === 'tool_call') {
-    console.log('Tool call:', chunk.toolCall);
-  }
-}
-```
-
-### Custom Connection Adapter Example
-
-You can create custom connection adapters for any transport protocol. 
Here's a WebSocket example:
-
-```typescript
-import type { ConnectionAdapter, StreamChunk } from '@tanstack/ai-client';
-
-function createWebSocketAdapter(url: string): ConnectionAdapter {
-  return {
-    async *connect(messages, data, abortSignal) {
-      const ws = new WebSocket(url);
-
-      // Handle abort signal
-      if (abortSignal) {
-        abortSignal.addEventListener("abort", () => {
-          ws.close();
-        });
-      }
-
-      // Wait for connection
-      await new Promise((resolve, reject) => {
-        ws.onopen = resolve;
-        ws.onerror = reject;
-      });
-
-      // Send messages
-      ws.send(JSON.stringify({ messages, data }));
-
-      // Yield chunks as they arrive
-      const queue: StreamChunk[] = [];
-      let resolver: ((chunk: StreamChunk | null) => void) | null = null;
-
-      ws.onmessage = (event) => {
-        try {
-          const chunk: StreamChunk = JSON.parse(event.data);
-          if (abortSignal?.aborted) {
-            ws.close();
-            return;
-          }
-          if (resolver) {
-            resolver(chunk);
-            resolver = null;
-          } else {
-            queue.push(chunk);
-          }
-        } catch (error) {
-          console.error('Failed to parse WebSocket message:', error);
-        }
-      };
-
-      ws.onclose = () => {
-        if (resolver) {
-          resolver(null);
-        }
-      };
-
-      try {
-        while (true) {
-          if (queue.length > 0) {
-            yield queue.shift()!;
-          } else {
-            const chunk = await new Promise((resolve) => {
-              resolver = resolve;
-            });
-            if (chunk === null) break;
-            yield chunk;
-          }
-        }
-      } finally {
-        ws.close();
-      }
-    },
-  };
-}
-
-// Use it
-const client = new ChatClient({
-  connection: createWebSocketAdapter("wss://api.example.com/chat"),
-});
-```
-
-## API Reference
-
-### `ChatClient`
-
-The main class for managing chat interactions.
-
-#### Constructor Options
-
-```typescript
-interface ChatClientOptions {
-  // Connection adapter (required)
-  connection: ConnectionAdapter;
-
-  // Initial messages
-  initialMessages?: UIMessage[];
-
-  // Unique chat identifier
-  id?: string;
-
-  // Callbacks
-  onResponse?: (response: Response) => void | Promise;
-  onChunk?: (chunk: StreamChunk) => void;
-  onFinish?: (message: UIMessage) => void;
-  onError?: (error: Error) => void;
-  onMessagesChange?: (messages: UIMessage[]) => void;
-  onLoadingChange?: (isLoading: boolean) => void;
-  onErrorChange?: (error: Error | undefined) => void;
-
-  // Stream processor configuration
-  streamProcessor?: {
-    chunkStrategy?: ChunkStrategy;
-    parser?: StreamParser;
-  };
-
-  // Request configuration (for legacy api option)
-  api?: string;
-  headers?: Record | Headers;
-  body?: Record;
-  credentials?: "omit" | "same-origin" | "include";
-  fetch?: typeof fetch;
-}
-```
-
-#### Methods
-
-- `sendMessage(content: string): Promise` - Send a text message
-- `append(message: Message | UIMessage): Promise` - Append any message
-- `reload(): Promise` - Reload the last assistant response
-- `stop(): void` - Stop the current streaming response
-- `clear(): void` - Clear all messages
-- `getMessages(): UIMessage[]` - Get current messages
-- `getIsLoading(): boolean` - Get loading state
-- `getError(): Error | undefined` - Get current error
-- `setMessagesManually(messages: UIMessage[]): void` - Manually set messages
-
-## Framework Integration
-
-This package is used by framework-specific packages like `@tanstack/ai-react`, which provide hooks and components for their respective frameworks.
-
-### Example: Custom React Hook
-
-```typescript
-import { ChatClient, fetchServerSentEvents } from "@tanstack/ai-client";
-import { useState, useRef, useCallback } from "react";
-
-function useCustomChat(options) {
-  const [messages, setMessages] = useState([]);
-  const [isLoading, setIsLoading] = useState(false);
-
-  const clientRef = useRef(null);
-
-  if (!clientRef.current) {
-    clientRef.current = new ChatClient({
-      connection: fetchServerSentEvents("/api/chat"),
-      ...options,
-      onMessagesChange: setMessages,
-      onLoadingChange: setIsLoading,
-    });
-  }
-
-  const sendMessage = useCallback((content) => {
-    return clientRef.current.sendMessage(content);
-  }, []);
-
-  return { messages, isLoading, sendMessage };
-}
-```
-
-### With React
-
-All connection adapters work seamlessly with `useChat`:
-
-```typescript
-import { useChat } from "@tanstack/ai-react";
-import { fetchServerSentEvents, fetchHttpStream, stream } from "@tanstack/ai-client";
-
-// SSE connection
-function ChatSSE() {
-  const chat = useChat({
-    connection: fetchServerSentEvents("/api/chat"),
-  });
-
-  return ;
-}
-
-// HTTP stream connection
-function ChatHTTP() {
-  const chat = useChat({
-    connection: fetchHttpStream("/api/chat"),
-  });
-  return ;
-}
-
-// Direct stream connection (server functions)
-function ChatDirect() {
-  const chat = useChat({
-    connection: stream((messages) => myServerFunction({ messages })),
-  });
-
-  return ;
-}
-```
-
-## Backend Example
-
-Your backend should use `@tanstack/ai`'s `chat()` method with automatic tool execution:
-
-```typescript
-import { chat, toStreamResponse } from "@tanstack/ai";
-import { openai } from "@tanstack/ai-openai";
-
-export async function POST(request: Request) {
-  const { messages } = await request.json();
-
-  // chat() automatically executes tools in a loop
-  const stream = chat({
-    adapter: openai(),
-    model: "gpt-4o",
-    messages,
-    tools: [weatherTool], // Tools are auto-executed when called
-    agentLoopStrategy: maxIterations(5), // Control loop behavior
-  });
-
-  // Stream includes tool_call and tool_result chunks
-  return toStreamResponse(stream);
-}
-```
-
-The client will receive:
-
-- `content` chunks - text from the model
-- `tool_call` chunks - when model calls a tool (auto-executed by SDK)
-- `tool_result` chunks - results from tool execution (auto-emitted by SDK)
-- `done` chunk - conversation complete
-
-## License
-
-MIT
+### [Become a Sponsor!](https://github.com/sponsors/tannerlinsley/)
+
+ +# TanStack AI + +A powerful, type-safe SDK for building AI-powered applications. + +- Provider-agnostic adapters (OpenAI, Anthropic, Gemini, Ollama, etc.) +- Chat completion, streaming, and agent loop strategies +- Headless chat state management with adapters (SSE, HTTP stream, custom) +- Type-safe tools with server/client execution + +### Read the docs → + +## Get Involved + +- We welcome issues and pull requests! +- Participate in [GitHub discussions](https://github.com/TanStack/ai/discussions) +- Chat with the community on [Discord](https://discord.com/invite/WrRKjPJ) +- See [CONTRIBUTING.md](./CONTRIBUTING.md) for setup instructions + +## Partners +
+CodeRabbit +Cloudflare
+ +
+AI & you? +

+We're looking for TanStack AI Partners to join our mission! Partner with us to push the boundaries of TanStack AI and build amazing things together. +

+LET'S CHAT +
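The connection-adapter pattern this PR builds on (SSE, HTTP stream, or a direct server-function stream) reduces to an object whose `connect` method returns an async iterable of chunks. Below is a minimal, dependency-free sketch of that idea in the spirit of the package's `stream()` helper; the `ModelMessage` and `StreamChunk` types here are simplified local stand-ins, not the real types exported by `@tanstack/ai`:

```typescript
// Simplified stand-ins for the package's ModelMessage and StreamChunk types
// (the real ones live in @tanstack/ai; these are illustrative only).
type ModelMessage = { role: 'user' | 'assistant'; content: string }
type StreamChunk = { type: 'content' | 'done'; delta?: string }

// The adapter contract: anything with an async-iterable `connect`.
interface ConnectionAdapter {
  connect: (messages: Array<ModelMessage>) => AsyncIterable<StreamChunk>
}

// Analogous to the package's `stream()` helper: wrap an async-iterable
// factory (e.g. a server function) as a connection adapter.
function stream(
  factory: (messages: Array<ModelMessage>) => AsyncIterable<StreamChunk>,
): ConnectionAdapter {
  return {
    async *connect(messages) {
      yield* factory(messages)
    },
  }
}

// Demo factory that streams a canned reply one token at a time.
async function* cannedReply(
  _messages: Array<ModelMessage>,
): AsyncIterable<StreamChunk> {
  for (const word of ['hello', 'world']) {
    yield { type: 'content', delta: word }
  }
  yield { type: 'done' }
}

// Drain the adapter the way a chat client would.
async function collectText(adapter: ConnectionAdapter): Promise<string> {
  const parts: Array<string> = []
  for await (const chunk of adapter.connect([
    { role: 'user', content: 'hi' },
  ])) {
    if (chunk.type === 'content' && chunk.delta) parts.push(chunk.delta)
  }
  return parts.join(' ')
}

collectText(stream(cannedReply)).then((text) => console.log(text)) // → "hello world"
```

A custom SSE or WebSocket adapter only has to satisfy the same `connect` shape.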
+ +## Explore the TanStack Ecosystem + +- TanStack Config – Tooling for JS/TS packages +- TanStack DB – Reactive sync client store +- TanStack Devtools – Unified devtools panel +- TanStack Form – Type‑safe form state +- TanStack Pacer – Debouncing, throttling, batching +- TanStack Query – Async state & caching +- TanStack Ranger – Range & slider primitives +- TanStack Router – Type‑safe routing, caching & URL state +- TanStack Start – Full‑stack SSR & streaming +- TanStack Store – Reactive data store +- TanStack Table – Headless datagrids +- TanStack Virtual – Virtualized rendering + +… and more at TanStack.com Ā» + + diff --git a/packages/typescript/ai-client/eslint.config.js b/packages/typescript/ai-client/eslint.config.js new file mode 100644 index 000000000..c3d273991 --- /dev/null +++ b/packages/typescript/ai-client/eslint.config.js @@ -0,0 +1,9 @@ +import rootConfig from '../../../eslint.config.js' + +/** @type {import('eslint').Linter.Config[]} */ +export default [ + ...rootConfig, + { + rules: {}, + }, +] diff --git a/packages/typescript/ai-client/package.json b/packages/typescript/ai-client/package.json index ac0f27d95..bd16433d6 100644 --- a/packages/typescript/ai-client/package.json +++ b/packages/typescript/ai-client/package.json @@ -9,13 +9,21 @@ "url": "git+https://github.com/TanStack/ai.git", "directory": "packages/typescript/ai-client" }, + "keywords": [ + "ai", + "client", + "headless", + "tanstack", + "chat", + "streaming" + ], "type": "module", - "module": "./dist/index.js", - "types": "./dist/index.d.ts", + "module": "./dist/esm/index.js", + "types": "./dist/esm/index.d.ts", "exports": { ".": { - "types": "./dist/index.d.ts", - "import": "./dist/index.js" + "types": "./dist/esm/index.d.ts", + "import": "./dist/esm/index.js" } }, "files": [ @@ -23,32 +31,21 @@ "src" ], "scripts": { - "build": "tsdown", - "dev": "tsdown --watch", - "test": "vitest run", - "test:watch": "vitest", - "test:coverage": "vitest run --coverage", - "clean": "rm -rf dist
node_modules", - "typecheck": "tsc --noEmit", - "lint": "tsc --noEmit" + "build": "vite build", + "clean": "premove ./build ./dist", + "lint:fix": "eslint ./src --fix", + "test:build": "publint --strict", + "test:eslint": "eslint ./src", + "test:lib": "vitest", + "test:lib:dev": "pnpm test:lib --watch", + "test:types": "tsc" }, - "keywords": [ - "ai", - "client", - "headless", - "tanstack", - "chat", - "streaming" - ], "dependencies": { "@tanstack/ai": "workspace:*", "partial-json": "^0.1.7" }, "devDependencies": { - "@types/node": "^22.10.2", - "@vitest/coverage-v8": "4.0.13", - "tsdown": "^0.15.9", - "typescript": "^5.7.2", - "vitest": "^4.0.13" + "@vitest/coverage-v8": "4.0.14", + "vite": "^7.2.4" } -} \ No newline at end of file +} diff --git a/packages/typescript/ai-client/src/chat-client.ts b/packages/typescript/ai-client/src/chat-client.ts index 0eca7ee83..2a06c8b55 100644 --- a/packages/typescript/ai-client/src/chat-client.ts +++ b/packages/typescript/ai-client/src/chat-client.ts @@ -1,63 +1,61 @@ -import type { ModelMessage } from "@tanstack/ai"; -import type { UIMessage, ToolCallPart, ChatClientOptions } from "./types"; -import type { ConnectionAdapter } from "./connection-adapters"; -import { StreamProcessor } from "./stream/processor"; -import type { ChunkStrategy, StreamParser } from "./stream/types"; +import { StreamProcessor } from './stream/processor' import { - uiMessageToModelMessages, normalizeToUIMessage, -} from "./message-converters"; + uiMessageToModelMessages, +} from './message-converters' import { updateTextPart, - updateToolCallPart, - updateToolResultPart, + updateThinkingPart, updateToolCallApproval, + updateToolCallApprovalResponse, + updateToolCallPart, updateToolCallState, updateToolCallWithOutput, - updateToolCallApprovalResponse, - updateThinkingPart, -} from "./message-updaters"; -import { - ChatClientEventEmitter, - DefaultChatClientEventEmitter, -} from "./events"; + updateToolResultPart, +} from './message-updaters' +import { 
DefaultChatClientEventEmitter } from './events' +import type { ModelMessage } from '@tanstack/ai' +import type { ChatClientOptions, ToolCallPart, UIMessage } from './types' +import type { ConnectionAdapter } from './connection-adapters' +import type { ChunkStrategy, StreamParser } from './stream/types' +import type { ChatClientEventEmitter } from './events' export class ChatClient { - private messages: UIMessage[] = []; - private isLoading: boolean = false; - private error: Error | undefined = undefined; - private connection: ConnectionAdapter; - private uniqueId: string; - private body?: Record; + private messages: Array = [] + private isLoading = false + private error: Error | undefined = undefined + private connection: ConnectionAdapter + private uniqueId: string + private body?: Record private streamProcessorConfig?: { - chunkStrategy?: ChunkStrategy; - parser?: StreamParser; - }; - private abortController: AbortController | null = null; - private events: ChatClientEventEmitter; + chunkStrategy?: ChunkStrategy + parser?: StreamParser + } + private abortController: AbortController | null = null + private events: ChatClientEventEmitter private callbacks: { - onResponse: (response?: Response) => void | Promise; - onChunk: (chunk: any) => void; - onFinish: (message: UIMessage) => void; - onError: (error: Error) => void; - onMessagesChange: (messages: UIMessage[]) => void; - onLoadingChange: (isLoading: boolean) => void; - onErrorChange: (error: Error | undefined) => void; + onResponse: (response?: Response) => void | Promise + onChunk: (chunk: any) => void + onFinish: (message: UIMessage) => void + onError: (error: Error) => void + onMessagesChange: (messages: Array) => void + onLoadingChange: (isLoading: boolean) => void + onErrorChange: (error: Error | undefined) => void onToolCall?: (args: { - toolCallId: string; - toolName: string; - input: any; - }) => Promise; - }; + toolCallId: string + toolName: string + input: any + }) => Promise + } constructor(options: 
ChatClientOptions) { - this.uniqueId = options.id || this.generateUniqueId("chat"); - this.messages = options.initialMessages || []; - this.body = options.body; - this.connection = options.connection; - this.streamProcessorConfig = options.streamProcessor || {}; - this.events = new DefaultChatClientEventEmitter(this.uniqueId); + this.uniqueId = options.id || this.generateUniqueId('chat') + this.messages = options.initialMessages || [] + this.body = options.body + this.connection = options.connection + this.streamProcessorConfig = options.streamProcessor || {} + this.events = new DefaultChatClientEventEmitter(this.uniqueId) this.callbacks = { onResponse: options.onResponse || (() => {}), @@ -68,50 +66,50 @@ export class ChatClient { onLoadingChange: options.onLoadingChange || (() => {}), onErrorChange: options.onErrorChange || (() => {}), onToolCall: options.onToolCall, - }; + } - this.events.clientCreated(this.messages.length); + this.events.clientCreated(this.messages.length) } private generateUniqueId(prefix: string): string { - return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(7)}`; + return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(7)}` } private generateMessageId(): string { - return this.generateUniqueId(this.uniqueId); + return this.generateUniqueId(this.uniqueId) } - private setMessages(messages: UIMessage[]): void { - this.messages = messages; - this.callbacks.onMessagesChange(messages); + private setMessages(messages: Array): void { + this.messages = messages + this.callbacks.onMessagesChange(messages) } private setIsLoading(isLoading: boolean): void { - this.isLoading = isLoading; - this.callbacks.onLoadingChange(isLoading); - this.events.loadingChanged(isLoading); + this.isLoading = isLoading + this.callbacks.onLoadingChange(isLoading) + this.events.loadingChanged(isLoading) } private setError(error: Error | undefined): void { - this.error = error; - this.callbacks.onErrorChange(error); - 
this.events.errorChanged(error?.message || null); + this.error = error + this.callbacks.onErrorChange(error) + this.events.errorChanged(error?.message || null) } private async processStream(source: AsyncIterable): Promise { - const assistantMessageId = this.generateMessageId(); + const assistantMessageId = this.generateMessageId() const assistantMessage: UIMessage = { id: assistantMessageId, - role: "assistant", + role: 'assistant', parts: [], createdAt: new Date(), - }; + } // Add the assistant message placeholder - this.setMessages([...this.messages, assistantMessage]); + this.setMessages([...this.messages, assistantMessage]) // Always use the new StreamProcessor - return this.processStreamWithProcessor(source, assistantMessageId); + return this.processStreamWithProcessor(source, assistantMessageId) } /** @@ -119,21 +117,21 @@ export class ChatClient { */ private async processStreamWithProcessor( source: AsyncIterable, - assistantMessageId: string + assistantMessageId: string, ): Promise { // Collect raw chunks for debugging - const rawChunks: any[] = []; - const streamId = this.generateUniqueId("stream"); + const rawChunks: Array = [] + const streamId = this.generateUniqueId('stream') const processor = new StreamProcessor({ chunkStrategy: this.streamProcessorConfig?.chunkStrategy, parser: this.streamProcessorConfig?.parser, handlers: { onTextUpdate: (content) => { - this.events.textUpdated(streamId, assistantMessageId, content); + this.events.textUpdated(streamId, assistantMessageId, content) this.setMessages( - updateTextPart(this.messages, assistantMessageId, content) - ); + updateTextPart(this.messages, assistantMessageId, content), + ) }, onToolCallStateChange: (_index, id, name, state, args) => { this.events.toolCallStateChanged( @@ -142,8 +140,8 @@ export class ChatClient { id, name, state, - args - ); + args, + ) // Update or create tool call part with state this.setMessages( @@ -152,8 +150,8 @@ export class ChatClient { name, arguments: args, state, - }) 
- ); + }), + ) }, onToolResultStateChange: (toolCallId, content, state, error) => { this.events.toolResultStateChanged( @@ -161,8 +159,8 @@ export class ChatClient { toolCallId, content, state, - error - ); + error, + ) // Update or create tool result part this.setMessages( @@ -172,23 +170,18 @@ export class ChatClient { toolCallId, content, state, - error - ) - ); + error, + ), + ) }, - onApprovalRequested: async ( - toolCallId, - toolName, - input, - approvalId - ) => { + onApprovalRequested: (toolCallId, toolName, input, approvalId) => { this.events.approvalRequested( assistantMessageId, toolCallId, toolName, input, - approvalId - ); + approvalId, + ) // Update tool call part to show it needs approval this.setMessages( @@ -196,9 +189,9 @@ export class ChatClient { this.messages, assistantMessageId, toolCallId, - approvalId - ) - ); + approvalId, + ), + ) }, onToolInputAvailable: async (toolCallId, toolName, input) => { // If onToolCall callback exists, execute immediately @@ -208,23 +201,23 @@ export class ChatClient { toolCallId, toolName, input, - }); + }) // Add result and trigger auto-send await this.addToolResult({ toolCallId, tool: toolName, output, - state: "output-available", - }); + state: 'output-available', + }) } catch (error: any) { await this.addToolResult({ toolCallId, tool: toolName, output: null, - state: "output-error", + state: 'output-error', errorText: error.message, - }); + }) } } else { // No callback - just mark as input-complete (UI should handle) @@ -233,170 +226,170 @@ export class ChatClient { this.messages, assistantMessageId, toolCallId, - "input-complete" - ) - ); + 'input-complete', + ), + ) } }, onThinkingUpdate: (content) => { - this.events.textUpdated(streamId, assistantMessageId, content); + this.events.textUpdated(streamId, assistantMessageId, content) this.setMessages( - updateThinkingPart(this.messages, assistantMessageId, content) - ); + updateThinkingPart(this.messages, assistantMessageId, content), + ) }, onStreamEnd: () 
=> { // Stream finished - parts are already updated }, }, - }); + }) // Wrap source to collect raw chunks const wrappedSource = async function* (this: ChatClient) { for await (const chunk of source) { - rawChunks.push(chunk); - this.callbacks.onChunk(chunk); - yield chunk; + rawChunks.push(chunk) + this.callbacks.onChunk(chunk) + yield chunk } - }.call(this); + }.call(this) - await processor.process(wrappedSource); + await processor.process(wrappedSource) const finalMessage = this.messages.find( - (msg) => msg.id === assistantMessageId - ); + (msg) => msg.id === assistantMessageId, + ) return ( finalMessage || { id: assistantMessageId, - role: "assistant", + role: 'assistant', parts: [], createdAt: new Date(), } - ); + ) } async append(message: UIMessage | ModelMessage): Promise { // Normalize message to UIMessage with guaranteed id and createdAt const uiMessage = normalizeToUIMessage(message, () => - this.generateMessageId() - ); + this.generateMessageId(), + ) // Emit message appended event - this.events.messageAppended(uiMessage); + this.events.messageAppended(uiMessage) // Add message immediately - this.setMessages([...this.messages, uiMessage]); - this.setIsLoading(true); - this.setError(undefined); + this.setMessages([...this.messages, uiMessage]) + this.setIsLoading(true) + this.setError(undefined) // Create abort controller for this request - this.abortController = new AbortController(); + this.abortController = new AbortController() try { // Convert UIMessages to ModelMessages for connection adapter - const modelMessages: ModelMessage[] = []; + const modelMessages: Array = [] for (const msg of this.messages) { - modelMessages.push(...uiMessageToModelMessages(msg)); + modelMessages.push(...uiMessageToModelMessages(msg)) } // Call onResponse callback (no Response object for non-fetch adapters) - await this.callbacks.onResponse(); + await this.callbacks.onResponse() // Connect and get stream from connection adapter, passing abort signal const stream = 
this.connection.connect( modelMessages, this.body, - this.abortController.signal - ); + this.abortController.signal, + ) - const assistantMessage = await this.processStream(stream); + const assistantMessage = await this.processStream(stream) // Call onFinish callback - this.callbacks.onFinish(assistantMessage); + this.callbacks.onFinish(assistantMessage) } catch (err) { if (err instanceof Error) { - if (err.name === "AbortError") { + if (err.name === 'AbortError') { // Request was aborted, ignore - return; + return } - this.setError(err); - this.callbacks.onError(err); + this.setError(err) + this.callbacks.onError(err) } } finally { - this.abortController = null; - this.setIsLoading(false); + this.abortController = null + this.setIsLoading(false) } } async sendMessage(content: string): Promise { if (!content.trim() || this.isLoading) { - return; + return } const userMessage: UIMessage = { id: this.generateMessageId(), - role: "user", - parts: [{ type: "text", content: content.trim() }], + role: 'user', + parts: [{ type: 'text', content: content.trim() }], createdAt: new Date(), - }; + } - this.events.messageSent(userMessage.id, content.trim()); + this.events.messageSent(userMessage.id, content.trim()) - await this.append(userMessage); + await this.append(userMessage) } async reload(): Promise { - if (this.messages.length === 0) return; + if (this.messages.length === 0) return // Find the last user message const lastUserMessageIndex = this.messages.findLastIndex( - (m: UIMessage) => m.role === "user" - ); + (m: UIMessage) => m.role === 'user', + ) - if (lastUserMessageIndex === -1) return; + if (lastUserMessageIndex === -1) return - this.events.reloaded(lastUserMessageIndex); + this.events.reloaded(lastUserMessageIndex) // Remove all messages after the last user message - const messagesToKeep = this.messages.slice(0, lastUserMessageIndex + 1); - this.setMessages(messagesToKeep); + const messagesToKeep = this.messages.slice(0, lastUserMessageIndex + 1) + 
this.setMessages(messagesToKeep) // Resend the last user message - await this.append(this.messages[lastUserMessageIndex]); + await this.append(this.messages[lastUserMessageIndex]!) } stop(): void { if (this.abortController) { - this.abortController.abort(); - this.abortController = null; + this.abortController.abort() + this.abortController = null } - this.setIsLoading(false); - this.events.stopped(); + this.setIsLoading(false) + this.events.stopped() } clear(): void { - this.setMessages([]); - this.setError(undefined); - this.events.messagesCleared(); + this.setMessages([]) + this.setError(undefined) + this.events.messagesCleared() } /** * Add the result of a client-side tool execution */ async addToolResult(result: { - toolCallId: string; - tool: string; - output: any; - state?: "output-available" | "output-error"; - errorText?: string; + toolCallId: string + tool: string + output: any + state?: 'output-available' | 'output-error' + errorText?: string }): Promise { this.events.toolResultAdded( result.toolCallId, result.tool, result.output, - result.state || "output-available" - ); + result.state || 'output-available', + ) // Update the tool call part with the output this.setMessages( @@ -404,15 +397,15 @@ export class ChatClient { this.messages, result.toolCallId, result.output, - result.state === "output-error" ? "input-complete" : undefined, - result.errorText - ) - ); + result.state === 'output-error' ? 
'input-complete' : undefined, + result.errorText, + ), + ) // Check if we should auto-send if (this.shouldAutoSend()) { // Continue the flow without adding a new message - await this.continueFlow(); + await this.continueFlow() } } @@ -423,31 +416,31 @@ export class ChatClient { for (const msg of this.messages) { const toolCallPart = msg.parts.find( (p): p is ToolCallPart => - p.type === "tool-call" && p.approval?.id === approvalId - ) as ToolCallPart | undefined; + p.type === 'tool-call' && p.approval?.id === approvalId, + ) if (toolCallPart) { - return toolCallPart.id; + return toolCallPart.id } } - return undefined; + return undefined } /** * Respond to a tool approval request */ async addToolApprovalResponse(response: { - id: string; // approval.id, not toolCallId - approved: boolean; + id: string // approval.id, not toolCallId + approved: boolean }): Promise { - const foundToolCallId = this.findToolCallIdByApprovalId(response.id); + const foundToolCallId = this.findToolCallIdByApprovalId(response.id) if (foundToolCallId) { this.events.toolApprovalResponded( response.id, foundToolCallId, - response.approved - ); + response.approved, + ) } // Find and update the tool call part with approval decision @@ -455,14 +448,14 @@ export class ChatClient { updateToolCallApprovalResponse( this.messages, response.id, - response.approved - ) - ); + response.approved, + ), + ) // Check if we should auto-send if (this.shouldAutoSend()) { // Continue the flow without adding a new message - await this.continueFlow(); + await this.continueFlow() } } @@ -470,19 +463,19 @@ export class ChatClient { * Continue the agent flow with current messages (for approvals/tool results) */ private async continueFlow(): Promise { - if (this.isLoading) return; + if (this.isLoading) return // Create abort controller for this request - this.abortController = new AbortController(); + this.abortController = new AbortController() try { - this.setIsLoading(true); - this.setError(undefined); + 
this.setIsLoading(true) + this.setError(undefined) // Convert UIMessages to ModelMessages for connection adapter - const modelMessages: ModelMessage[] = []; + const modelMessages: Array = [] for (const msg of this.messages) { - modelMessages.push(...uiMessageToModelMessages(msg)); + modelMessages.push(...uiMessageToModelMessages(msg)) } // Process the current conversation state, passing abort signal @@ -490,19 +483,19 @@ export class ChatClient { this.connection.connect( modelMessages, this.body, - this.abortController.signal - ) - ); + this.abortController.signal, + ), + ) } catch (err: any) { - if (err instanceof Error && err.name === "AbortError") { + if (err instanceof Error && err.name === 'AbortError') { // Request was aborted, ignore - return; + return } - this.setError(err); - this.callbacks.onError(err); + this.setError(err) + this.callbacks.onError(err) } finally { - this.abortController = null; - this.setIsLoading(false); + this.abortController = null + this.setIsLoading(false) } } @@ -511,38 +504,38 @@ export class ChatClient { */ private shouldAutoSend(): boolean { const lastAssistant = this.messages.findLast( - (m: UIMessage) => m.role === "assistant" - ); + (m: UIMessage) => m.role === 'assistant', + ) - if (!lastAssistant) return false; + if (!lastAssistant) return false const toolParts = lastAssistant.parts.filter( - (p): p is ToolCallPart => p.type === "tool-call" - ); + (p): p is ToolCallPart => p.type === 'tool-call', + ) - if (toolParts.length === 0) return false; + if (toolParts.length === 0) return false // All tool calls must be in a terminal state return toolParts.every( (part) => - part.state === "approval-responded" || - (part.output !== undefined && !part.approval) // Has output and no approval needed - ); + part.state === 'approval-responded' || + (part.output !== undefined && !part.approval), // Has output and no approval needed + ) } - getMessages(): UIMessage[] { - return this.messages; + getMessages(): Array { + return this.messages 
} getIsLoading(): boolean { - return this.isLoading; + return this.isLoading } getError(): Error | undefined { - return this.error; + return this.error } - setMessagesManually(messages: UIMessage[]): void { - this.setMessages(messages); + setMessagesManually(messages: Array): void { + this.setMessages(messages) } } diff --git a/packages/typescript/ai-client/src/connection-adapters.ts b/packages/typescript/ai-client/src/connection-adapters.ts index 2bc59d95e..d02059042 100644 --- a/packages/typescript/ai-client/src/connection-adapters.ts +++ b/packages/typescript/ai-client/src/connection-adapters.ts @@ -1,24 +1,24 @@ -import type { StreamChunk, ModelMessage } from "@tanstack/ai"; -import type { UIMessage } from "./types"; -import { convertMessagesToModelMessages } from "./message-converters"; +import { convertMessagesToModelMessages } from './message-converters' +import type { ModelMessage, StreamChunk } from '@tanstack/ai' +import type { UIMessage } from './types' /** * Merge custom headers into request headers */ function mergeHeaders( - customHeaders?: Record | Headers + customHeaders?: Record | Headers, ): Record { if (!customHeaders) { - return {}; + return {} } if (customHeaders instanceof Headers) { - const result: Record = {}; + const result: Record = {} customHeaders.forEach((value, key) => { - result[key] = value; - }); - return result; + result[key] = value + }) + return result } - return customHeaders; + return customHeaders } /** @@ -26,40 +26,41 @@ function mergeHeaders( */ async function* readStreamLines( reader: ReadableStreamDefaultReader, - abortSignal?: AbortSignal + abortSignal?: AbortSignal, ): AsyncGenerator { try { - const decoder = new TextDecoder(); - let buffer = ""; + const decoder = new TextDecoder() + let buffer = '' + // eslint-disable-next-line @typescript-eslint/no-unnecessary-condition while (true) { // Check if aborted before reading if (abortSignal?.aborted) { - break; + break } - const { done, value } = await reader.read(); - if 
(done) break; + const { done, value } = await reader.read() + if (done) break - buffer += decoder.decode(value, { stream: true }); - const lines = buffer.split("\n"); + buffer += decoder.decode(value, { stream: true }) + const lines = buffer.split('\n') // Keep the last incomplete line in the buffer - buffer = lines.pop() || ""; + buffer = lines.pop() || '' for (const line of lines) { if (line.trim()) { - yield line; + yield line } } } // Process any remaining data in the buffer if (buffer.trim()) { - yield buffer; + yield buffer } } finally { - reader.releaseLock(); + reader.releaseLock() } } @@ -73,20 +74,20 @@ export interface ConnectionAdapter { * @param data - Additional data to send * @param abortSignal - Optional abort signal for request cancellation */ - connect( - messages: UIMessage[] | ModelMessage[], + connect: ( + messages: Array | Array, data?: Record, - abortSignal?: AbortSignal - ): AsyncIterable; + abortSignal?: AbortSignal, + ) => AsyncIterable } /** * Options for fetch-based connection adapters */ export interface FetchConnectionOptions { - headers?: Record | Headers; - credentials?: RequestCredentials; - signal?: AbortSignal; + headers?: Record | Headers + credentials?: RequestCredentials + signal?: AbortSignal } /** @@ -107,53 +108,53 @@ export interface FetchConnectionOptions { */ export function fetchServerSentEvents( url: string, - options: FetchConnectionOptions = {} + options: FetchConnectionOptions = {}, ): ConnectionAdapter { return { async *connect(messages, data, abortSignal) { - const modelMessages = convertMessagesToModelMessages(messages); + const modelMessages = convertMessagesToModelMessages(messages) const requestHeaders: Record = { - "Content-Type": "application/json", + 'Content-Type': 'application/json', ...mergeHeaders(options.headers), - }; + } const response = await fetch(url, { - method: "POST", + method: 'POST', headers: requestHeaders, body: JSON.stringify({ messages: modelMessages, data }), - credentials: 
options.credentials || "same-origin", + credentials: options.credentials || 'same-origin', signal: abortSignal || options.signal, - }); + }) if (!response.ok) { throw new Error( - `HTTP error! status: ${response.status} ${response.statusText}` - ); + `HTTP error! status: ${response.status} ${response.statusText}`, + ) } // Parse Server-Sent Events format - const reader = response.body?.getReader(); + const reader = response.body?.getReader() if (!reader) { - throw new Error("Response body is not readable"); + throw new Error('Response body is not readable') } for await (const line of readStreamLines(reader, abortSignal)) { // Handle Server-Sent Events format - const data = line.startsWith("data: ") ? line.slice(6) : line; + const data = line.startsWith('data: ') ? line.slice(6) : line - if (data === "[DONE]") continue; + if (data === '[DONE]') continue try { - const parsed: StreamChunk = JSON.parse(data); - yield parsed; + const parsed: StreamChunk = JSON.parse(data) + yield parsed } catch (parseError) { // Skip non-JSON lines or malformed chunks - console.warn("Failed to parse SSE chunk:", data); + console.warn('Failed to parse SSE chunk:', data) } } }, - }; + } } /** @@ -174,48 +175,48 @@ export function fetchServerSentEvents( */ export function fetchHttpStream( url: string, - options: FetchConnectionOptions = {} + options: FetchConnectionOptions = {}, ): ConnectionAdapter { return { async *connect(messages, data, abortSignal) { // Convert UIMessages to ModelMessages if needed - const modelMessages = convertMessagesToModelMessages(messages); + const modelMessages = convertMessagesToModelMessages(messages) const requestHeaders: Record = { - "Content-Type": "application/json", + 'Content-Type': 'application/json', ...mergeHeaders(options.headers), - }; + } const response = await fetch(url, { - method: "POST", + method: 'POST', headers: requestHeaders, body: JSON.stringify({ messages: modelMessages, data }), - credentials: options.credentials || "same-origin", + 
credentials: options.credentials || 'same-origin', signal: abortSignal || options.signal, - }); + }) if (!response.ok) { throw new Error( - `HTTP error! status: ${response.status} ${response.statusText}` - ); + `HTTP error! status: ${response.status} ${response.statusText}`, + ) } // Parse raw HTTP stream (newline-delimited JSON) - const reader = response.body?.getReader(); + const reader = response.body?.getReader() if (!reader) { - throw new Error("Response body is not readable"); + throw new Error('Response body is not readable') } for await (const line of readStreamLines(reader, abortSignal)) { try { - const parsed: StreamChunk = JSON.parse(line); - yield parsed; + const parsed: StreamChunk = JSON.parse(line) + yield parsed } catch (parseError) { - console.warn("Failed to parse HTTP stream chunk:", line); + console.warn('Failed to parse HTTP stream chunk:', line) } } }, - }; + } } /** @@ -234,14 +235,14 @@ export function fetchHttpStream( */ export function stream( streamFactory: ( - messages: ModelMessage[], - data?: Record - ) => AsyncIterable + messages: Array, + data?: Record, + ) => AsyncIterable, ): ConnectionAdapter { return { async *connect(messages, data) { - const modelMessages = convertMessagesToModelMessages(messages); - yield* streamFactory(modelMessages, data); + const modelMessages = convertMessagesToModelMessages(messages) + yield* streamFactory(modelMessages, data) }, - }; + } } diff --git a/packages/typescript/ai-client/src/events.ts b/packages/typescript/ai-client/src/events.ts index 321677966..646676c79 100644 --- a/packages/typescript/ai-client/src/events.ts +++ b/packages/typescript/ai-client/src/events.ts @@ -1,14 +1,14 @@ -import { aiEventClient } from "@tanstack/ai/event-client"; -import type { UIMessage } from "./types"; +import { aiEventClient } from '@tanstack/ai/event-client' +import type { UIMessage } from './types' /** * Abstract base class for ChatClient event emission */ export abstract class ChatClientEventEmitter { - protected 
clientId: string; + protected clientId: string constructor(clientId: string) { - this.clientId = clientId; + this.clientId = clientId } /** @@ -17,51 +17,47 @@ export abstract class ChatClientEventEmitter { */ protected abstract emitEvent( eventName: string, - data?: Record - ): void; + data?: Record, + ): void /** * Emit client created event */ clientCreated(initialMessageCount: number): void { - this.emitEvent("client:created", { + this.emitEvent('client:created', { initialMessageCount, - }); + }) } /** * Emit loading state changed event */ loadingChanged(isLoading: boolean): void { - this.emitEvent("client:loading-changed", { isLoading }); + this.emitEvent('client:loading-changed', { isLoading }) } /** * Emit error state changed event */ errorChanged(error: string | null): void { - this.emitEvent("client:error-changed", { + this.emitEvent('client:error-changed', { error, - }); + }) } /** * Emit text update events (combines processor and client events) */ - textUpdated( - streamId: string, - messageId: string, - content: string - ): void { - this.emitEvent("processor:text-updated", { + textUpdated(streamId: string, messageId: string, content: string): void { + this.emitEvent('processor:text-updated', { streamId, content, - }); + }) - this.emitEvent("client:assistant-message-updated", { + this.emitEvent('client:assistant-message-updated', { messageId, content, - }); + }) } /** @@ -73,23 +69,23 @@ export abstract class ChatClientEventEmitter { toolCallId: string, toolName: string, state: string, - args: string + args: string, ): void { - this.emitEvent("processor:tool-call-state-changed", { + this.emitEvent('processor:tool-call-state-changed', { streamId, toolCallId, toolName, state, arguments: args, - }); + }) - this.emitEvent("client:tool-call-updated", { + this.emitEvent('client:tool-call-updated', { messageId, toolCallId, toolName, state, arguments: args, - }); + }) } /** @@ -100,15 +96,15 @@ export abstract class ChatClientEventEmitter { toolCallId: string, 
content: string, state: string, - error?: string + error?: string, ): void { - this.emitEvent("processor:tool-result-state-changed", { + this.emitEvent('processor:tool-result-state-changed', { streamId, toolCallId, content, state, error, - }); + }) } /** @@ -119,15 +115,15 @@ export abstract class ChatClientEventEmitter { toolCallId: string, toolName: string, input: any, - approvalId: string + approvalId: string, ): void { - this.emitEvent("client:approval-requested", { + this.emitEvent('client:approval-requested', { messageId, toolCallId, toolName, input, approvalId, - }); + }) } /** @@ -135,49 +131,49 @@ export abstract class ChatClientEventEmitter { */ messageAppended(uiMessage: UIMessage): void { const contentPreview = uiMessage.parts - .filter((p) => p.type === "text") + .filter((p) => p.type === 'text') .map((p) => (p as any).content) - .join(" ") - .substring(0, 100); + .join(' ') + .substring(0, 100) - this.emitEvent("client:message-appended", { + this.emitEvent('client:message-appended', { messageId: uiMessage.id, role: uiMessage.role, contentPreview, - }); + }) } /** * Emit message sent event */ messageSent(messageId: string, content: string): void { - this.emitEvent("client:message-sent", { + this.emitEvent('client:message-sent', { messageId, content, - }); + }) } /** * Emit reloaded event */ reloaded(fromMessageIndex: number): void { - this.emitEvent("client:reloaded", { + this.emitEvent('client:reloaded', { fromMessageIndex, - }); + }) } /** * Emit stopped event */ stopped(): void { - this.emitEvent("client:stopped"); + this.emitEvent('client:stopped') } /** * Emit messages cleared event */ messagesCleared(): void { - this.emitEvent("client:messages-cleared"); + this.emitEvent('client:messages-cleared') } /** @@ -187,14 +183,14 @@ export abstract class ChatClientEventEmitter { toolCallId: string, toolName: string, output: any, - state: string + state: string, ): void { - this.emitEvent("tool:result-added", { + this.emitEvent('tool:result-added', { 
toolCallId, toolName, output, state, - }); + }) } /** @@ -203,13 +199,13 @@ export abstract class ChatClientEventEmitter { toolApprovalResponded( approvalId: string, toolCallId: string, - approved: boolean + approved: boolean, ): void { - this.emitEvent("tool:approval-responded", { + this.emitEvent('tool:approval-responded', { approvalId, toolCallId, approved, - }); + }) } } @@ -220,24 +216,20 @@ export class DefaultChatClientEventEmitter extends ChatClientEventEmitter { /** * Emit an event with automatic clientId and timestamp for client/tool events */ - protected emitEvent( - eventName: string, - data?: Record - ): void { + protected emitEvent(eventName: string, data?: Record): void { // For client:* and tool:* events, automatically add clientId and timestamp - if (eventName.startsWith("client:") || eventName.startsWith("tool:")) { + if (eventName.startsWith('client:') || eventName.startsWith('tool:')) { aiEventClient.emit(eventName as any, { ...data, clientId: this.clientId, timestamp: Date.now(), - }); + }) } else { // For other events (e.g., processor:*), just add timestamp aiEventClient.emit(eventName as any, { ...data, timestamp: Date.now(), - }); + }) } } } - diff --git a/packages/typescript/ai-client/src/index.ts b/packages/typescript/ai-client/src/index.ts index 7f4f356d2..8fd7f223a 100644 --- a/packages/typescript/ai-client/src/index.ts +++ b/packages/typescript/ai-client/src/index.ts @@ -1,4 +1,4 @@ -export { ChatClient } from "./chat-client"; +export { ChatClient } from './chat-client' export type { // Core message types UIMessage, @@ -11,14 +11,14 @@ export type { // Client configuration types ChatClientOptions, ChatRequestBody, -} from "./types"; +} from './types' export { fetchServerSentEvents, fetchHttpStream, stream, type ConnectionAdapter, type FetchConnectionOptions, -} from "./connection-adapters"; +} from './connection-adapters' export { StreamProcessor, ImmediateStrategy, @@ -34,15 +34,15 @@ export { type StreamProcessorOptions, type 
StreamProcessorHandlers, type InternalToolCallState, -} from "./stream/index"; +} from './stream/index' export { uiMessageToModelMessages, modelMessageToUIMessage, modelMessagesToUIMessages, -} from "./message-converters"; +} from './message-converters' export { parsePartialJSON, PartialJSONParser, defaultJSONParser, type JSONParser, -} from "./loose-json-parser"; +} from './loose-json-parser' diff --git a/packages/typescript/ai-client/src/loose-json-parser.ts b/packages/typescript/ai-client/src/loose-json-parser.ts index 52d48e1c2..58f9b6e80 100644 --- a/packages/typescript/ai-client/src/loose-json-parser.ts +++ b/packages/typescript/ai-client/src/loose-json-parser.ts @@ -1,4 +1,4 @@ -import { parse as parsePartialJSONLib } from "partial-json"; +import { parse as parsePartialJSONLib } from 'partial-json' /** * JSON Parser interface - allows for custom parser implementations @@ -9,7 +9,7 @@ export interface JSONParser { * @param jsonString - The JSON string to parse * @returns The parsed object, or undefined if parsing fails */ - parse(jsonString: string): any; + parse: (jsonString: string) => any } /** @@ -23,16 +23,16 @@ export class PartialJSONParser implements JSONParser { * @returns The parsed object, or undefined if parsing fails */ parse(jsonString: string): any { - if (!jsonString || jsonString.trim() === "") { - return undefined; + if (!jsonString || jsonString.trim() === '') { + return undefined } try { - return parsePartialJSONLib(jsonString); + return parsePartialJSONLib(jsonString) } catch (error) { // If partial parsing fails, return undefined // This is expected during early streaming when we have very little data - return undefined; + return undefined } } } @@ -40,7 +40,7 @@ export class PartialJSONParser implements JSONParser { /** * Default parser instance */ -export const defaultJSONParser = new PartialJSONParser(); +export const defaultJSONParser = new PartialJSONParser() /** * Parse partial JSON string (convenience function) @@ -48,6 +48,5 @@ 
export const defaultJSONParser = new PartialJSONParser(); * @returns The parsed object, or undefined if parsing fails */ export function parsePartialJSON(jsonString: string): any { - return defaultJSONParser.parse(jsonString); + return defaultJSONParser.parse(jsonString) } - diff --git a/packages/typescript/ai-client/src/message-converters.ts b/packages/typescript/ai-client/src/message-converters.ts index ef2cdd117..5595e9800 100644 --- a/packages/typescript/ai-client/src/message-converters.ts +++ b/packages/typescript/ai-client/src/message-converters.ts @@ -1,29 +1,29 @@ -import type { ModelMessage } from "@tanstack/ai"; +import type { ModelMessage } from '@tanstack/ai' import type { - UIMessage, MessagePart, TextPart, ToolCallPart, ToolResultPart, -} from "./types"; + UIMessage, +} from './types' /** * Convert UIMessages or ModelMessages to ModelMessages */ export function convertMessagesToModelMessages( - messages: UIMessage[] | ModelMessage[] -): ModelMessage[] { - const modelMessages: ModelMessage[] = []; + messages: Array, +): Array { + const modelMessages: Array = [] for (const msg of messages) { - if ("parts" in msg) { + if ('parts' in msg) { // UIMessage - convert to ModelMessages - modelMessages.push(...uiMessageToModelMessages(msg as UIMessage)); + modelMessages.push(...uiMessageToModelMessages(msg)) } else { // Already ModelMessage - modelMessages.push(msg as ModelMessage); + modelMessages.push(msg) } } - return modelMessages; + return modelMessages } /** @@ -37,78 +37,80 @@ export function convertMessagesToModelMessages( * @param uiMessage - The UIMessage to convert * @returns An array of ModelMessages (may be multiple if tool results are present) */ -export function uiMessageToModelMessages(uiMessage: UIMessage): ModelMessage[] { - const messages: ModelMessage[] = []; +export function uiMessageToModelMessages( + uiMessage: UIMessage, +): Array { + const messages: Array = [] // Separate parts by type // Note: thinking parts are UI-only and not included 
in ModelMessages - const textParts: TextPart[] = []; - const toolCallParts: ToolCallPart[] = []; - const toolResultParts: ToolResultPart[] = []; + const textParts: Array = [] + const toolCallParts: Array = [] + const toolResultParts: Array = [] for (const part of uiMessage.parts) { - if (part.type === "text") { - textParts.push(part); - } else if (part.type === "tool-call") { - toolCallParts.push(part); - } else if (part.type === "tool-result") { - toolResultParts.push(part); + if (part.type === 'text') { + textParts.push(part) + } else if (part.type === 'tool-call') { + toolCallParts.push(part) + } else if (part.type === 'tool-result') { + toolResultParts.push(part) } // thinking parts are skipped - they're UI-only } // Build the main message (system, user, or assistant) - const content = textParts.map((p) => p.content).join("") || null; + const content = textParts.map((p) => p.content).join('') || null const toolCalls = toolCallParts.length > 0 ? toolCallParts .filter( (p) => - p.state === "input-complete" || - p.state === "approval-responded" || - p.output !== undefined // Include if has output (client tool result) + p.state === 'input-complete' || + p.state === 'approval-responded' || + p.output !== undefined, // Include if has output (client tool result) ) .map((p) => ({ id: p.id, - type: "function" as const, + type: 'function' as const, function: { name: p.name, arguments: p.arguments, }, })) - : undefined; + : undefined // Create the main message - if (uiMessage.role !== "assistant" || content || !toolCalls) { + if (uiMessage.role !== 'assistant' || content || !toolCalls) { messages.push({ role: uiMessage.role, content, ...(toolCalls && toolCalls.length > 0 && { toolCalls }), - }); - } else if (toolCalls && toolCalls.length > 0) { + }) + } else if (toolCalls.length > 0) { // Assistant message with only tool calls messages.push({ - role: "assistant", + role: 'assistant', content, toolCalls, - }); + }) } // Add tool result messages (only completed ones) for 
(const toolResultPart of toolResultParts) { if ( - toolResultPart.state === "complete" || - toolResultPart.state === "error" + toolResultPart.state === 'complete' || + toolResultPart.state === 'error' ) { messages.push({ - role: "tool", + role: 'tool', content: toolResultPart.content, toolCallId: toolResultPart.toolCallId, - }); + }) } } - return messages; + return messages } /** @@ -125,46 +127,46 @@ export function uiMessageToModelMessages(uiMessage: UIMessage): ModelMessage[] { */ export function modelMessageToUIMessage( modelMessage: ModelMessage, - id?: string + id?: string, ): UIMessage { - const parts: MessagePart[] = []; + const parts: Array = [] // Handle content if (modelMessage.content) { parts.push({ - type: "text", + type: 'text', content: modelMessage.content, - }); + }) } // Handle tool calls if (modelMessage.toolCalls && modelMessage.toolCalls.length > 0) { for (const toolCall of modelMessage.toolCalls) { parts.push({ - type: "tool-call", + type: 'tool-call', id: toolCall.id, name: toolCall.function.name, arguments: toolCall.function.arguments, - state: "input-complete", // Model messages have complete arguments - }); + state: 'input-complete', // Model messages have complete arguments + }) } } // Handle tool results (when role is "tool") - if (modelMessage.role === "tool" && modelMessage.toolCallId) { + if (modelMessage.role === 'tool' && modelMessage.toolCallId) { parts.push({ - type: "tool-result", + type: 'tool-result', toolCallId: modelMessage.toolCallId, - content: modelMessage.content || "", - state: "complete", - }); + content: modelMessage.content || '', + state: 'complete', + }) } return { id: id || generateMessageId(), - role: modelMessage.role === "tool" ? "assistant" : modelMessage.role, + role: modelMessage.role === 'tool' ? 
'assistant' : modelMessage.role, parts, - }; + } } /** @@ -176,46 +178,44 @@ export function modelMessageToUIMessage( * @returns Array of UIMessages */ export function modelMessagesToUIMessages( - modelMessages: ModelMessage[] -): UIMessage[] { - const uiMessages: UIMessage[] = []; - let currentAssistantMessage: UIMessage | null = null; + modelMessages: Array, +): Array { + const uiMessages: Array = [] + let currentAssistantMessage: UIMessage | null = null - for (let i = 0; i < modelMessages.length; i++) { - const msg = modelMessages[i]; - - if (msg.role === "tool") { + for (const msg of modelMessages) { + if (msg.role === 'tool') { // Tool result - merge into the last assistant message if possible if ( currentAssistantMessage && - currentAssistantMessage.role === "assistant" + currentAssistantMessage.role === 'assistant' ) { currentAssistantMessage.parts.push({ - type: "tool-result", + type: 'tool-result', toolCallId: msg.toolCallId!, - content: msg.content || "", - state: "complete", - }); + content: msg.content || '', + state: 'complete', + }) } else { // No assistant message to merge into, create a standalone one - const toolResultUIMessage = modelMessageToUIMessage(msg); - uiMessages.push(toolResultUIMessage); + const toolResultUIMessage = modelMessageToUIMessage(msg) + uiMessages.push(toolResultUIMessage) } } else { // Regular message - const uiMessage = modelMessageToUIMessage(msg); - uiMessages.push(uiMessage); + const uiMessage = modelMessageToUIMessage(msg) + uiMessages.push(uiMessage) // Track assistant messages for potential tool result merging - if (msg.role === "assistant") { - currentAssistantMessage = uiMessage; + if (msg.role === 'assistant') { + currentAssistantMessage = uiMessage } else { - currentAssistantMessage = null; + currentAssistantMessage = null } } } - return uiMessages; + return uiMessages } /** @@ -228,21 +228,21 @@ export function modelMessagesToUIMessages( */ export function normalizeToUIMessage( message: UIMessage | ModelMessage, - 
generateId: () => string + generateId: () => string, ): UIMessage { - if ("parts" in message) { + if ('parts' in message) { // Already a UIMessage return { ...message, id: message.id || generateId(), createdAt: message.createdAt || new Date(), - }; + } } else { // ModelMessage - convert to UIMessage return { ...modelMessageToUIMessage(message, generateId()), createdAt: new Date(), - }; + } } } @@ -250,5 +250,5 @@ export function normalizeToUIMessage( * Generate a unique message ID */ function generateMessageId(): string { - return `msg-${Date.now()}-${Math.random().toString(36).substring(7)}`; + return `msg-${Date.now()}-${Math.random().toString(36).substring(7)}` } diff --git a/packages/typescript/ai-client/src/message-updaters.ts b/packages/typescript/ai-client/src/message-updaters.ts index d0558d669..564bf3de0 100644 --- a/packages/typescript/ai-client/src/message-updaters.ts +++ b/packages/typescript/ai-client/src/message-updaters.ts @@ -1,46 +1,45 @@ import type { - UIMessage, - MessagePart, - ToolCallPart, - ToolResultPart, ThinkingPart, + ToolCallPart, ToolCallState, + ToolResultPart, ToolResultState, -} from "./types"; + UIMessage, +} from './types' /** * Update or add a text part to a message, ensuring tool calls come before text. * Text parts are always placed at the end (after tool calls). 
*/ export function updateTextPart( - messages: UIMessage[], + messages: Array, messageId: string, - content: string -): UIMessage[] { + content: string, +): Array { return messages.map((msg) => { if (msg.id !== messageId) { - return msg; + return msg } - let parts = [...msg.parts]; - const textPartIndex = parts.findIndex((p) => p.type === "text"); + let parts = [...msg.parts] + const textPartIndex = parts.findIndex((p) => p.type === 'text') // Always add/update text part at the end (after tool calls) if (textPartIndex >= 0) { - parts[textPartIndex] = { type: "text", content }; + parts[textPartIndex] = { type: 'text', content } } else { // Remove existing parts temporarily to ensure order - const toolCallParts = parts.filter((p) => p.type === "tool-call"); + const toolCallParts = parts.filter((p) => p.type === 'tool-call') const otherParts = parts.filter( - (p) => p.type !== "tool-call" && p.type !== "text" - ); + (p) => p.type !== 'tool-call' && p.type !== 'text', + ) // Rebuild: tool calls first, then other parts, then text - parts = [...toolCallParts, ...otherParts, { type: "text", content }]; + parts = [...toolCallParts, ...otherParts, { type: 'text', content }] } - return { ...msg, parts }; - }); + return { ...msg, parts } + }) } /** @@ -48,147 +47,147 @@ export function updateTextPart( * Tool calls are inserted before any text parts. */ export function updateToolCallPart( - messages: UIMessage[], + messages: Array, messageId: string, toolCall: { - id: string; - name: string; - arguments: string; - state: ToolCallState; - } -): UIMessage[] { + id: string + name: string + arguments: string + state: ToolCallState + }, +): Array { return messages.map((msg) => { if (msg.id !== messageId) { - return msg; + return msg } - let parts = [...msg.parts]; + const parts = [...msg.parts] // Find by ID, not index! 
const existingPartIndex = parts.findIndex( - (p): p is ToolCallPart => p.type === "tool-call" && p.id === toolCall.id - ); + (p): p is ToolCallPart => p.type === 'tool-call' && p.id === toolCall.id, + ) const toolCallPart: ToolCallPart = { - type: "tool-call", + type: 'tool-call', id: toolCall.id, name: toolCall.name, arguments: toolCall.arguments, state: toolCall.state, - }; + } if (existingPartIndex >= 0) { // Update existing tool call - parts[existingPartIndex] = toolCallPart; + parts[existingPartIndex] = toolCallPart } else { // Insert tool call before any text parts - const textPartIndex = parts.findIndex((p) => p.type === "text"); + const textPartIndex = parts.findIndex((p) => p.type === 'text') if (textPartIndex >= 0) { - parts.splice(textPartIndex, 0, toolCallPart); + parts.splice(textPartIndex, 0, toolCallPart) } else { - parts.push(toolCallPart); + parts.push(toolCallPart) } } - return { ...msg, parts }; - }); + return { ...msg, parts } + }) } /** * Update or add a tool result part to a message. 
*/ export function updateToolResultPart( - messages: UIMessage[], + messages: Array, messageId: string, toolCallId: string, content: string, state: ToolResultState, - error?: string -): UIMessage[] { + error?: string, +): Array { return messages.map((msg) => { if (msg.id !== messageId) { - return msg; + return msg } - const parts = [...msg.parts]; + const parts = [...msg.parts] const resultPartIndex = parts.findIndex( (p): p is ToolResultPart => - p.type === "tool-result" && p.toolCallId === toolCallId - ); + p.type === 'tool-result' && p.toolCallId === toolCallId, + ) const toolResultPart: ToolResultPart = { - type: "tool-result", + type: 'tool-result', toolCallId, content, state, ...(error && { error }), - }; + } if (resultPartIndex >= 0) { - parts[resultPartIndex] = toolResultPart; + parts[resultPartIndex] = toolResultPart } else { - parts.push(toolResultPart); + parts.push(toolResultPart) } - return { ...msg, parts }; - }); + return { ...msg, parts } + }) } /** * Update a tool call part with approval request metadata. */ export function updateToolCallApproval( - messages: UIMessage[], + messages: Array, messageId: string, toolCallId: string, - approvalId: string -): UIMessage[] { + approvalId: string, +): Array { return messages.map((msg) => { if (msg.id !== messageId) { - return msg; + return msg } - const parts = [...msg.parts]; + const parts = [...msg.parts] const toolCallPart = parts.find( - (p): p is ToolCallPart => p.type === "tool-call" && p.id === toolCallId - ) as ToolCallPart | undefined; + (p): p is ToolCallPart => p.type === 'tool-call' && p.id === toolCallId, + ) if (toolCallPart) { - toolCallPart.state = "approval-requested"; + toolCallPart.state = 'approval-requested' toolCallPart.approval = { id: approvalId, needsApproval: true, - }; + } } - return { ...msg, parts }; - }); + return { ...msg, parts } + }) } /** * Update a tool call part's state (e.g., to "input-complete"). 
*/ export function updateToolCallState( - messages: UIMessage[], + messages: Array, messageId: string, toolCallId: string, - state: ToolCallState -): UIMessage[] { + state: ToolCallState, +): Array { return messages.map((msg) => { if (msg.id !== messageId) { - return msg; + return msg } - const parts = [...msg.parts]; + const parts = [...msg.parts] const toolCallPart = parts.find( - (p): p is ToolCallPart => p.type === "tool-call" && p.id === toolCallId - ) as ToolCallPart | undefined; + (p): p is ToolCallPart => p.type === 'tool-call' && p.id === toolCallId, + ) if (toolCallPart) { - toolCallPart.state = state; + toolCallPart.state = state } - return { ...msg, parts }; - }); + return { ...msg, parts } + }) } /** @@ -196,29 +195,29 @@ export function updateToolCallState( * Searches all messages to find the tool call by ID. */ export function updateToolCallWithOutput( - messages: UIMessage[], + messages: Array, toolCallId: string, output: any, state?: ToolCallState, - errorText?: string -): UIMessage[] { + errorText?: string, +): Array { return messages.map((msg) => { - const parts = [...msg.parts]; + const parts = [...msg.parts] const toolCallPart = parts.find( - (p): p is ToolCallPart => p.type === "tool-call" && p.id === toolCallId - ) as ToolCallPart | undefined; + (p): p is ToolCallPart => p.type === 'tool-call' && p.id === toolCallId, + ) if (toolCallPart) { - toolCallPart.output = errorText ? { error: errorText } : output; + toolCallPart.output = errorText ? { error: errorText } : output if (state) { - toolCallPart.state = state; + toolCallPart.state = state } else { - toolCallPart.state = "input-complete"; + toolCallPart.state = 'input-complete' } } - return { ...msg, parts }; - }); + return { ...msg, parts } + }) } /** @@ -226,24 +225,24 @@ export function updateToolCallWithOutput( * Searches all messages to find the tool call by approval ID. 
*/ export function updateToolCallApprovalResponse( - messages: UIMessage[], + messages: Array, approvalId: string, - approved: boolean -): UIMessage[] { + approved: boolean, +): Array { return messages.map((msg) => { - const parts = [...msg.parts]; + const parts = [...msg.parts] const toolCallPart = parts.find( (p): p is ToolCallPart => - p.type === "tool-call" && p.approval?.id === approvalId - ) as ToolCallPart | undefined; + p.type === 'tool-call' && p.approval?.id === approvalId, + ) if (toolCallPart && toolCallPart.approval) { - toolCallPart.approval.approved = approved; - toolCallPart.state = "approval-responded"; + toolCallPart.approval.approved = approved + toolCallPart.state = 'approval-responded' } - return { ...msg, parts }; - }); + return { ...msg, parts } + }) } /** @@ -251,37 +250,37 @@ export function updateToolCallApprovalResponse( * Thinking parts are typically placed before text parts. */ export function updateThinkingPart( - messages: UIMessage[], + messages: Array, messageId: string, - content: string -): UIMessage[] { + content: string, +): Array { return messages.map((msg) => { if (msg.id !== messageId) { - return msg; + return msg } - let parts = [...msg.parts]; - const thinkingPartIndex = parts.findIndex((p) => p.type === "thinking"); + const parts = [...msg.parts] + const thinkingPartIndex = parts.findIndex((p) => p.type === 'thinking') const thinkingPart: ThinkingPart = { - type: "thinking", + type: 'thinking', content, - }; + } if (thinkingPartIndex >= 0) { // Update existing thinking part - parts[thinkingPartIndex] = thinkingPart; + parts[thinkingPartIndex] = thinkingPart } else { // Insert thinking part before text parts (but after tool calls) - const textPartIndex = parts.findIndex((p) => p.type === "text"); + const textPartIndex = parts.findIndex((p) => p.type === 'text') if (textPartIndex >= 0) { - parts.splice(textPartIndex, 0, thinkingPart); + parts.splice(textPartIndex, 0, thinkingPart) } else { // No text part, add at end - 
parts.push(thinkingPart); + parts.push(thinkingPart) } } - return { ...msg, parts }; - }); + return { ...msg, parts } + }) } diff --git a/packages/typescript/ai-client/src/stream/chunk-strategies.ts b/packages/typescript/ai-client/src/stream/chunk-strategies.ts index 833e5fc09..145aaa35a 100644 --- a/packages/typescript/ai-client/src/stream/chunk-strategies.ts +++ b/packages/typescript/ai-client/src/stream/chunk-strategies.ts @@ -4,14 +4,14 @@ * Strategies for controlling when text updates are emitted to the UI */ -import type { ChunkStrategy } from "./types"; +import type { ChunkStrategy } from './types' /** * Immediate Strategy - emit on every chunk (default behavior) */ export class ImmediateStrategy implements ChunkStrategy { shouldEmit(_chunk: string, _accumulated: string): boolean { - return true; + return true } } @@ -20,10 +20,10 @@ export class ImmediateStrategy implements ChunkStrategy { * Useful for natural text flow in UI */ export class PunctuationStrategy implements ChunkStrategy { - private punctuation = /[.,!?;:\n]/; + private punctuation = /[.,!?;:\n]/ shouldEmit(chunk: string, _accumulated: string): boolean { - return this.punctuation.test(chunk); + return this.punctuation.test(chunk) } } @@ -32,21 +32,21 @@ export class PunctuationStrategy implements ChunkStrategy { * Useful for reducing UI update frequency */ export class BatchStrategy implements ChunkStrategy { - private chunkCount = 0; + private chunkCount = 0 constructor(private batchSize: number = 5) {} shouldEmit(_chunk: string, _accumulated: string): boolean { - this.chunkCount++; + this.chunkCount++ if (this.chunkCount >= this.batchSize) { - this.chunkCount = 0; - return true; + this.chunkCount = 0 + return true } - return false; + return false } reset(): void { - this.chunkCount = 0; + this.chunkCount = 0 } } @@ -57,7 +57,7 @@ export class BatchStrategy implements ChunkStrategy { export class WordBoundaryStrategy implements ChunkStrategy { shouldEmit(chunk: string, _accumulated: string): 
boolean { // Emit if chunk ends with whitespace - return /\s$/.test(chunk); + return /\s$/.test(chunk) } } @@ -66,14 +66,14 @@ export class WordBoundaryStrategy implements ChunkStrategy { * Emits if ANY strategy says to emit */ export class CompositeStrategy implements ChunkStrategy { - constructor(private strategies: ChunkStrategy[]) {} + constructor(private strategies: Array) {} shouldEmit(chunk: string, accumulated: string): boolean { - return this.strategies.some((s) => s.shouldEmit(chunk, accumulated)); + return this.strategies.some((s) => s.shouldEmit(chunk, accumulated)) } reset(): void { - this.strategies.forEach((s) => s.reset?.()); + this.strategies.forEach((s) => s.reset?.()) } } @@ -82,29 +82,29 @@ export class CompositeStrategy implements ChunkStrategy { * Useful for reducing jitter in fast streams */ export class DebounceStrategy implements ChunkStrategy { - private timeoutId: NodeJS.Timeout | null = null; - private shouldEmitNow = false; + private timeoutId: NodeJS.Timeout | null = null + private shouldEmitNow = false constructor(private delayMs: number = 100) {} shouldEmit(_chunk: string, _accumulated: string): boolean { if (this.timeoutId) { - clearTimeout(this.timeoutId); + clearTimeout(this.timeoutId) } - this.shouldEmitNow = false; + this.shouldEmitNow = false this.timeoutId = setTimeout(() => { - this.shouldEmitNow = true; - }, this.delayMs); + this.shouldEmitNow = true + }, this.delayMs) - return this.shouldEmitNow; + return this.shouldEmitNow } reset(): void { if (this.timeoutId) { - clearTimeout(this.timeoutId); - this.timeoutId = null; + clearTimeout(this.timeoutId) + this.timeoutId = null } - this.shouldEmitNow = false; + this.shouldEmitNow = false } } diff --git a/packages/typescript/ai-client/src/stream/index.ts b/packages/typescript/ai-client/src/stream/index.ts index 81ad4f4bf..2fc9bf596 100644 --- a/packages/typescript/ai-client/src/stream/index.ts +++ b/packages/typescript/ai-client/src/stream/index.ts @@ -7,7 +7,7 @@ * - Types: All 
stream processing types */ -export { StreamProcessor } from "./processor"; +export { StreamProcessor } from './processor' export { ImmediateStrategy, PunctuationStrategy, @@ -15,7 +15,7 @@ export { WordBoundaryStrategy, CompositeStrategy, DebounceStrategy, -} from "./chunk-strategies"; +} from './chunk-strategies' export type { StreamChunk, ProcessedEvent, @@ -24,4 +24,4 @@ export type { StreamProcessorOptions, StreamProcessorHandlers, InternalToolCallState, -} from "./types"; +} from './types' diff --git a/packages/typescript/ai-client/src/stream/processor.ts b/packages/typescript/ai-client/src/stream/processor.ts index 10e6aa172..e9adc8d39 100644 --- a/packages/typescript/ai-client/src/stream/processor.ts +++ b/packages/typescript/ai-client/src/stream/processor.ts @@ -8,17 +8,17 @@ * - Custom stream parsers */ +import { defaultJSONParser } from '../loose-json-parser' +import { ImmediateStrategy } from './chunk-strategies' import type { - StreamChunk, - StreamProcessorOptions, - StreamProcessorHandlers, - InternalToolCallState, ChunkStrategy, + InternalToolCallState, + StreamChunk, StreamParser, -} from "./types"; -import type { ToolCallState, ToolResultState } from "../types"; -import { ImmediateStrategy } from "./chunk-strategies"; -import { defaultJSONParser } from "../loose-json-parser"; + StreamProcessorHandlers, + StreamProcessorOptions, +} from './types' +import type { ToolCallState, ToolResultState } from '../types' /** * Default parser - converts adapter StreamChunk format to processor format @@ -30,65 +30,65 @@ class DefaultStreamParser implements StreamParser { for await (const chunk of stream) { // Pass through known processor format chunks if ( - chunk.type === "text" || - chunk.type === "tool-call-delta" || - chunk.type === "done" || - chunk.type === "approval-requested" || - chunk.type === "tool-input-available" || - chunk.type === "thinking" + chunk.type === 'text' || + chunk.type === 'tool-call-delta' || + chunk.type === 'done' || + chunk.type === 
'approval-requested' || + chunk.type === 'tool-input-available' || + chunk.type === 'thinking' ) { - yield chunk as StreamChunk; - continue; + yield chunk as StreamChunk + continue } // Convert adapter format: "content" or "content delta" to "text" if ( - chunk.type === "content" && + chunk.type === 'content' && (chunk.content !== undefined || chunk.delta !== undefined) ) { yield { - type: "text", - content: (chunk as any).content, - delta: (chunk as any).delta, - }; + type: 'text', + content: chunk.content, + delta: chunk.delta, + } } // Convert adapter format: "tool_result" to processor format - if (chunk.type === "tool_result" || chunk.type === "tool-result") { + if (chunk.type === 'tool_result' || chunk.type === 'tool-result') { // Tool result chunks have toolCallId and content at the top level - const toolCallId = (chunk as any).toolCallId; - const content = (chunk as any).content; - const error = (chunk as any).error; + const toolCallId = chunk.toolCallId + const content = chunk.content + const error = chunk.error if (toolCallId !== undefined) { yield { - type: "tool-result", + type: 'tool-result', toolCallId, - content: content || "", + content: content || '', error, - }; + } } } // Convert adapter format: "tool_call" to "tool-call-delta" if ( - (chunk.type === "tool_call" || chunk.type === "tool-call-delta") && + (chunk.type === 'tool_call' || chunk.type === 'tool-call-delta') && chunk.toolCall ) { yield { - type: "tool-call-delta", + type: 'tool-call-delta', toolCallIndex: chunk.index ?? chunk.toolCallIndex, toolCall: chunk.toolCall, - }; + } } // Convert adapter format: "thinking" chunks - if (chunk.type === "thinking") { + if (chunk.type === 'thinking') { yield { - type: "thinking", - content: (chunk as any).content, - delta: (chunk as any).delta, - }; + type: 'thinking', + content: chunk.content, + delta: chunk.delta, + } } } } @@ -108,50 +108,50 @@ class DefaultStreamParser implements StreamParser { * 3. 
Stream ends */ export class StreamProcessor { - private chunkStrategy: ChunkStrategy; - private parser: StreamParser; - private handlers: StreamProcessorHandlers; - private jsonParser: { parse(jsonString: string): any }; + private chunkStrategy: ChunkStrategy + private parser: StreamParser + private handlers: StreamProcessorHandlers + private jsonParser: { parse: (jsonString: string) => any } // State - private textContent: string = ""; - private lastEmittedText: string = ""; - private thinkingContent: string = ""; - private toolCalls: Map = new Map(); // Track by ID, not index - private toolCallOrder: string[] = []; // Track order of tool call IDs + private textContent = '' + private lastEmittedText = '' + private thinkingContent = '' + private toolCalls: Map = new Map() // Track by ID, not index + private toolCallOrder: Array = [] // Track order of tool call IDs constructor(options: StreamProcessorOptions) { - this.chunkStrategy = options.chunkStrategy || new ImmediateStrategy(); - this.parser = options.parser || new DefaultStreamParser(); - this.handlers = options.handlers; - this.jsonParser = options.jsonParser || defaultJSONParser; + this.chunkStrategy = options.chunkStrategy || new ImmediateStrategy() + this.parser = options.parser || new DefaultStreamParser() + this.handlers = options.handlers + this.jsonParser = options.jsonParser || defaultJSONParser } /** * Process a stream and emit events through handlers */ async process(stream: AsyncIterable): Promise<{ - content: string; - toolCalls?: any[]; + content: string + toolCalls?: Array }> { // Reset state - this.reset(); + this.reset() // Parse and process each chunk - const parsedStream = this.parser.parse(stream); + const parsedStream = this.parser.parse(stream) for await (const chunk of parsedStream) { - this.processChunk(chunk); + this.processChunk(chunk) } // Stream ended - finalize everything - this.finalizeStream(); + this.finalizeStream() - const toolCalls = this.getCompletedToolCalls(); + const 
toolCalls = this.getCompletedToolCalls() return { content: this.textContent, toolCalls: toolCalls.length > 0 ? toolCalls : undefined, - }; + } } /** @@ -159,52 +159,52 @@ export class StreamProcessor { */ private processChunk(chunk: StreamChunk): void { switch (chunk.type) { - case "text": - this.handleTextChunk(chunk.content, chunk.delta); - break; + case 'text': + this.handleTextChunk(chunk.content, chunk.delta) + break - case "tool-call-delta": - this.handleToolCallDelta(chunk.toolCallIndex!, chunk.toolCall!); - break; + case 'tool-call-delta': + this.handleToolCallDelta(chunk.toolCallIndex!, chunk.toolCall!) + break - case "done": + case 'done': // Response finished - complete any remaining tool calls - this.completeAllToolCalls(); - break; + this.completeAllToolCalls() + break - case "tool-result": + case 'tool-result': // Handle tool result chunk if (chunk.toolCallId && chunk.content !== undefined) { - const state: ToolResultState = chunk.error ? "error" : "complete"; + const state: ToolResultState = chunk.error ? 'error' : 'complete' this.handlers.onToolResultStateChange?.( chunk.toolCallId, - chunk.content || "", + chunk.content || '', state, - chunk.error - ); + chunk.error, + ) } - break; + break - case "approval-requested": + case 'approval-requested': this.handlers.onApprovalRequested?.( chunk.toolCallId!, chunk.toolName!, - chunk.input!, - chunk.approval!.id - ); - break; + chunk.input, + chunk.approval!.id, + ) + break - case "tool-input-available": + case 'tool-input-available': this.handlers.onToolInputAvailable?.( chunk.toolCallId!, chunk.toolName!, - chunk.input! 
- ); - break; + chunk.input, + ) + break - case "thinking": - this.handleThinkingChunk(chunk.content, chunk.delta); - break; + case 'thinking': + this.handleThinkingChunk(chunk.content, chunk.delta) + break } } @@ -219,40 +219,40 @@ export class StreamProcessor { */ private handleTextChunk(content?: string, delta?: string): void { // Text arriving means all current tool calls are complete - this.completeAllToolCalls(); + this.completeAllToolCalls() - const previous = this.textContent ?? ""; - let nextText = previous; + const previous = this.textContent + let nextText = previous // ALWAYS prefer delta - adapters should send deltas, not accumulated content // The processor maintains its own accumulation state - if (delta !== undefined && delta !== "") { - nextText = previous + delta; - } else if (content !== undefined && content !== "") { + if (delta !== undefined && delta !== '') { + nextText = previous + delta + } else if (content !== undefined && content !== '') { // Fallback: use content only if delta is not provided (backwards compatibility) // If it starts with what we have, it's an extension/update if (content.startsWith(previous)) { - nextText = content; + nextText = content } else if (previous.startsWith(content)) { // Previous is longer (shouldn't happen with proper adapters, but handle gracefully) - nextText = previous; + nextText = previous } else { // No overlap - append (shouldn't happen with proper adapters) - nextText = previous + content; + nextText = previous + content } } - this.textContent = nextText; + this.textContent = nextText // Use delta for chunk strategy if available, otherwise use content or empty string // This allows chunk strategies to make decisions based on the incremental change - const chunkPortion = delta ?? content ?? ""; + const chunkPortion = delta ?? content ?? 
'' const shouldEmit = this.chunkStrategy.shouldEmit( chunkPortion, - this.textContent - ); + this.textContent, + ) if (shouldEmit && this.textContent !== this.lastEmittedText) { - this.emitTextUpdate(); + this.emitTextUpdate() } } @@ -260,17 +260,17 @@ export class StreamProcessor { * Handle a tool call delta chunk */ private handleToolCallDelta( - index: number, - toolCall: { id: string; function: { name: string; arguments: string } } + _index: number, + toolCall: { id: string; function: { name: string; arguments: string } }, ): void { - const toolCallId = toolCall.id; - const existingToolCall = this.toolCalls.get(toolCallId); + const toolCallId = toolCall.id + const existingToolCall = this.toolCalls.get(toolCallId) if (!existingToolCall) { // New tool call starting const initialState: ToolCallState = toolCall.function.arguments - ? "input-streaming" - : "awaiting-input"; + ? 'input-streaming' + : 'awaiting-input' const newToolCall: InternalToolCallState = { id: toolCall.id, @@ -278,27 +278,27 @@ export class StreamProcessor { arguments: toolCall.function.arguments, state: initialState, parsedArguments: undefined, - }; + } // Try to parse the arguments if (toolCall.function.arguments) { newToolCall.parsedArguments = this.jsonParser.parse( - toolCall.function.arguments - ); + toolCall.function.arguments, + ) } - this.toolCalls.set(toolCallId, newToolCall); - this.toolCallOrder.push(toolCallId); // Track order + this.toolCalls.set(toolCallId, newToolCall) + this.toolCallOrder.push(toolCallId) // Track order // Get actual index for this tool call (based on order) - const actualIndex = this.toolCallOrder.indexOf(toolCallId); + const actualIndex = this.toolCallOrder.indexOf(toolCallId) // Emit lifecycle event this.handlers.onToolCallStart?.( actualIndex, toolCall.id, - toolCall.function.name - ); + toolCall.function.name, + ) // Emit state change event this.handlers.onToolCallStateChange?.( @@ -307,34 +307,34 @@ export class StreamProcessor { toolCall.function.name, 
initialState, toolCall.function.arguments, - newToolCall.parsedArguments - ); + newToolCall.parsedArguments, + ) // Emit initial delta if (toolCall.function.arguments) { this.handlers.onToolCallDelta?.( actualIndex, - toolCall.function.arguments - ); + toolCall.function.arguments, + ) } } else { // Continuing existing tool call - const wasAwaitingInput = existingToolCall.state === "awaiting-input"; + const wasAwaitingInput = existingToolCall.state === 'awaiting-input' - existingToolCall.arguments += toolCall.function.arguments; + existingToolCall.arguments += toolCall.function.arguments // Update state if (wasAwaitingInput && toolCall.function.arguments) { - existingToolCall.state = "input-streaming"; + existingToolCall.state = 'input-streaming' } // Try to parse the updated arguments existingToolCall.parsedArguments = this.jsonParser.parse( - existingToolCall.arguments - ); + existingToolCall.arguments, + ) // Get actual index for this tool call - const actualIndex = this.toolCallOrder.indexOf(toolCallId); + const actualIndex = this.toolCallOrder.indexOf(toolCallId) // Emit state change event this.handlers.onToolCallStateChange?.( @@ -343,15 +343,15 @@ export class StreamProcessor { existingToolCall.name, existingToolCall.state, existingToolCall.arguments, - existingToolCall.parsedArguments - ); + existingToolCall.parsedArguments, + ) // Emit delta if (toolCall.function.arguments) { this.handlers.onToolCallDelta?.( actualIndex, - toolCall.function.arguments - ); + toolCall.function.arguments, + ) } } } @@ -361,11 +361,11 @@ export class StreamProcessor { */ private completeAllToolCalls(): void { this.toolCalls.forEach((toolCall, id) => { - if (toolCall.state !== "input-complete") { - const index = this.toolCallOrder.indexOf(id); - this.completeToolCall(index, toolCall); + if (toolCall.state !== 'input-complete') { + const index = this.toolCallOrder.indexOf(id) + this.completeToolCall(index, toolCall) } - }); + }) } /** @@ -373,38 +373,38 @@ export class 
StreamProcessor { */ private completeToolCall( index: number, - toolCall: InternalToolCallState + toolCall: InternalToolCallState, ): void { - toolCall.state = "input-complete"; + toolCall.state = 'input-complete' // Try final parse - toolCall.parsedArguments = this.jsonParser.parse(toolCall.arguments); + toolCall.parsedArguments = this.jsonParser.parse(toolCall.arguments) // Emit state change event this.handlers.onToolCallStateChange?.( index, toolCall.id, toolCall.name, - "input-complete", + 'input-complete', toolCall.arguments, - toolCall.parsedArguments - ); + toolCall.parsedArguments, + ) // Emit complete event this.handlers.onToolCallComplete?.( index, toolCall.id, toolCall.name, - toolCall.arguments - ); + toolCall.arguments, + ) } /** * Emit pending text update */ private emitTextUpdate(): void { - this.lastEmittedText = this.textContent; - this.handlers.onTextUpdate?.(this.textContent); + this.lastEmittedText = this.textContent + this.handlers.onTextUpdate?.(this.textContent) } /** @@ -412,70 +412,70 @@ export class StreamProcessor { */ private finalizeStream(): void { // Complete any remaining tool calls - this.completeAllToolCalls(); + this.completeAllToolCalls() // Emit any pending text if not already emitted if (this.textContent !== this.lastEmittedText) { - this.emitTextUpdate(); + this.emitTextUpdate() } // Emit stream end - const toolCalls = this.getCompletedToolCalls(); + const toolCalls = this.getCompletedToolCalls() this.handlers.onStreamEnd?.( this.textContent, - toolCalls.length > 0 ? toolCalls : undefined - ); + toolCalls.length > 0 ? 
toolCalls : undefined, + ) } /** * Get completed tool calls in API format */ - private getCompletedToolCalls(): any[] { + private getCompletedToolCalls(): Array { return Array.from(this.toolCalls.values()) - .filter((tc) => tc.state === "input-complete") + .filter((tc) => tc.state === 'input-complete') .map((tc) => ({ id: tc.id, - type: "function", + type: 'function', function: { name: tc.name, arguments: tc.arguments, }, - })); + })) } /** * Handle a thinking chunk */ private handleThinkingChunk(content?: string, delta?: string): void { - const previous = this.thinkingContent ?? ""; - let nextThinking = previous; + const previous = this.thinkingContent + let nextThinking = previous // Prefer delta over content (same pattern as text chunks) - if (delta !== undefined && delta !== "") { - nextThinking = previous + delta; - } else if (content !== undefined && content !== "") { + if (delta !== undefined && delta !== '') { + nextThinking = previous + delta + } else if (content !== undefined && content !== '') { if (content.startsWith(previous)) { - nextThinking = content; + nextThinking = content } else if (previous.startsWith(content)) { - nextThinking = previous; + nextThinking = previous } else { - nextThinking = previous + content; + nextThinking = previous + content } } - this.thinkingContent = nextThinking; - this.handlers.onThinkingUpdate?.(this.thinkingContent); + this.thinkingContent = nextThinking + this.handlers.onThinkingUpdate?.(this.thinkingContent) } /** * Reset processor state */ private reset(): void { - this.textContent = ""; - this.lastEmittedText = ""; - this.thinkingContent = ""; - this.toolCalls.clear(); - this.toolCallOrder = []; - this.chunkStrategy.reset?.(); + this.textContent = '' + this.lastEmittedText = '' + this.thinkingContent = '' + this.toolCalls.clear() + this.toolCallOrder = [] + this.chunkStrategy.reset?.() } } diff --git a/packages/typescript/ai-client/src/stream/types.ts b/packages/typescript/ai-client/src/stream/types.ts index 
e3b454c54..2960b3909 100644 --- a/packages/typescript/ai-client/src/stream/types.ts +++ b/packages/typescript/ai-client/src/stream/types.ts @@ -8,68 +8,68 @@ * - Partial JSON parsing for incomplete tool arguments */ -import type { ToolCallState as ToolState, ToolResultState } from "../types"; +import type { ToolResultState, ToolCallState as ToolState } from '../types' /** * Raw events that come from the stream */ export interface StreamChunk { type: - | "text" - | "tool-call-delta" - | "done" - | "approval-requested" - | "tool-input-available" - | "tool-result" - | "thinking"; - content?: string; - delta?: string; - toolCallIndex?: number; + | 'text' + | 'tool-call-delta' + | 'done' + | 'approval-requested' + | 'tool-input-available' + | 'tool-result' + | 'thinking' + content?: string + delta?: string + toolCallIndex?: number toolCall?: { - id: string; + id: string function: { - name: string; - arguments: string; - }; - }; + name: string + arguments: string + } + } // For approval-requested approval?: { - id: string; - needsApproval: boolean; - }; + id: string + needsApproval: boolean + } // For tool-input-available and approval-requested - toolCallId?: string; - toolName?: string; - input?: any; + toolCallId?: string + toolName?: string + input?: any // For tool-result - error?: string; + error?: string } /** * Processed events emitted by the StreamProcessor */ export type ProcessedEvent = - | { type: "text-chunk"; content: string } - | { type: "text-update"; content: string } // Emitted based on chunk strategy + | { type: 'text-chunk'; content: string } + | { type: 'text-update'; content: string } // Emitted based on chunk strategy | { - type: "tool-call-start"; - index: number; - id: string; - name: string; + type: 'tool-call-start' + index: number + id: string + name: string } | { - type: "tool-call-delta"; - index: number; - arguments: string; + type: 'tool-call-delta' + index: number + arguments: string } | { - type: "tool-call-complete"; - index: number; - 
id: string; - name: string; - arguments: string; + type: 'tool-call-complete' + index: number + id: string + name: string + arguments: string } - | { type: "stream-end"; finalContent: string; toolCalls?: any[] }; + | { type: 'stream-end'; finalContent: string; toolCalls?: Array } /** * Strategy for determining when to emit text updates @@ -81,19 +81,19 @@ export interface ChunkStrategy { * @param accumulated - All text accumulated so far * @returns true if an update should be emitted now */ - shouldEmit(chunk: string, accumulated: string): boolean; + shouldEmit: (chunk: string, accumulated: string) => boolean /** * Optional: Reset strategy state (called when streaming starts) */ - reset?(): void; + reset?: () => void } /** * Handlers for processed stream events */ export interface StreamProcessorHandlers { - onTextUpdate?: (content: string) => void; + onTextUpdate?: (content: string) => void // Enhanced tool call handlers with state tracking onToolCallStateChange?: ( @@ -102,38 +102,38 @@ export interface StreamProcessorHandlers { name: string, state: ToolState, args: string, - parsedArgs?: any - ) => void; + parsedArgs?: any, + ) => void onToolResultStateChange?: ( toolCallId: string, content: string, state: ToolResultState, - error?: string - ) => void; + error?: string, + ) => void // Additional handlers for detailed lifecycle events - onToolCallStart?: (index: number, id: string, name: string) => void; - onToolCallDelta?: (index: number, args: string) => void; + onToolCallStart?: (index: number, id: string, name: string) => void + onToolCallDelta?: (index: number, args: string) => void onToolCallComplete?: ( index: number, id: string, name: string, - args: string - ) => void; + args: string, + ) => void onApprovalRequested?: ( toolCallId: string, toolName: string, input: any, - approvalId: string - ) => void; + approvalId: string, + ) => void onToolInputAvailable?: ( toolCallId: string, toolName: string, - input: any - ) => void; - onThinkingUpdate?: (content: 
string) => void; - onStreamEnd?: (content: string, toolCalls?: any[]) => void; + input: any, + ) => void + onThinkingUpdate?: (content: string) => void + onStreamEnd?: (content: string, toolCalls?: Array) => void } /** @@ -141,28 +141,28 @@ export interface StreamProcessorHandlers { * Allows users to provide their own parsing logic if needed */ export interface StreamParser { - parse(stream: AsyncIterable): AsyncIterable; + parse: (stream: AsyncIterable) => AsyncIterable } /** * Options for StreamProcessor */ export interface StreamProcessorOptions { - chunkStrategy?: ChunkStrategy; - parser?: StreamParser; - handlers: StreamProcessorHandlers; + chunkStrategy?: ChunkStrategy + parser?: StreamParser + handlers: StreamProcessorHandlers jsonParser?: { - parse(jsonString: string): any; - }; + parse: (jsonString: string) => any + } } /** * Internal state for a tool call being tracked */ export interface InternalToolCallState { - id: string; - name: string; - arguments: string; - state: ToolState; - parsedArguments?: any; // Parsed (potentially incomplete) JSON + id: string + name: string + arguments: string + state: ToolState + parsedArguments?: any // Parsed (potentially incomplete) JSON } diff --git a/packages/typescript/ai-client/src/types.ts b/packages/typescript/ai-client/src/types.ts index e3fc1207a..9db8ed594 100644 --- a/packages/typescript/ai-client/src/types.ts +++ b/packages/typescript/ai-client/src/types.ts @@ -1,77 +1,77 @@ -import type { ModelMessage, StreamChunk } from "@tanstack/ai"; -import type { ConnectionAdapter } from "./connection-adapters"; -import type { ChunkStrategy, StreamParser } from "./stream/types"; +import type { ModelMessage, StreamChunk } from '@tanstack/ai' +import type { ConnectionAdapter } from './connection-adapters' +import type { ChunkStrategy, StreamParser } from './stream/types' /** * Tool call states - track the lifecycle of a tool call */ export type ToolCallState = - | "awaiting-input" // Received start but no arguments yet - 
| "input-streaming" // Partial arguments received - | "input-complete" // All arguments received - | "approval-requested" // Waiting for user approval - | "approval-responded"; // User has approved/denied + | 'awaiting-input' // Received start but no arguments yet + | 'input-streaming' // Partial arguments received + | 'input-complete' // All arguments received + | 'approval-requested' // Waiting for user approval + | 'approval-responded' // User has approved/denied /** * Tool result states - track the lifecycle of a tool result */ export type ToolResultState = - | "streaming" // Placeholder for future streamed output - | "complete" // Result is complete - | "error"; // Error occurred + | 'streaming' // Placeholder for future streamed output + | 'complete' // Result is complete + | 'error' // Error occurred /** * Message parts - building blocks of UIMessage */ export interface TextPart { - type: "text"; - content: string; + type: 'text' + content: string } export interface ToolCallPart { - type: "tool-call"; - id: string; - name: string; - arguments: string; // JSON string (may be incomplete) - state: ToolCallState; + type: 'tool-call' + id: string + name: string + arguments: string // JSON string (may be incomplete) + state: ToolCallState /** Approval metadata if tool requires user approval */ approval?: { - id: string; // Unique approval ID - needsApproval: boolean; // Always true if present - approved?: boolean; // User's decision (undefined until responded) - }; + id: string // Unique approval ID + needsApproval: boolean // Always true if present + approved?: boolean // User's decision (undefined until responded) + } /** Tool execution output (for client tools or after approval) */ - output?: any; + output?: any } export interface ToolResultPart { - type: "tool-result"; - toolCallId: string; - content: string; - state: ToolResultState; - error?: string; // Error message if state is "error" + type: 'tool-result' + toolCallId: string + content: string + state: 
ToolResultState + error?: string // Error message if state is "error" } export interface ThinkingPart { - type: "thinking"; - content: string; + type: 'thinking' + content: string } export type MessagePart = | TextPart | ToolCallPart | ToolResultPart - | ThinkingPart; + | ThinkingPart /** * UIMessage - Domain-specific message format optimized for building chat UIs * Contains parts that can be text, tool calls, or tool results */ export interface UIMessage { - id: string; - role: "system" | "user" | "assistant"; - parts: MessagePart[]; - createdAt?: Date; + id: string + role: 'system' | 'user' | 'assistant' + parts: Array + createdAt?: Date } export interface ChatClientOptions { @@ -79,68 +79,68 @@ export interface ChatClientOptions { * Connection adapter for streaming * Use fetchServerSentEvents(), fetchHttpStream(), or stream() to create adapters */ - connection: ConnectionAdapter; + connection: ConnectionAdapter /** * Initial messages to populate the chat */ - initialMessages?: UIMessage[]; + initialMessages?: Array /** * Unique identifier for this chat instance * Used for managing multiple chats */ - id?: string; + id?: string /** * Additional body parameters to send */ - body?: Record; + body?: Record /** * Callback when a response is received */ - onResponse?: (response?: Response) => void | Promise; + onResponse?: (response?: Response) => void | Promise /** * Callback when a stream chunk is received */ - onChunk?: (chunk: StreamChunk) => void; + onChunk?: (chunk: StreamChunk) => void /** * Callback when the response is finished */ - onFinish?: (message: UIMessage) => void; + onFinish?: (message: UIMessage) => void /** * Callback when an error occurs */ - onError?: (error: Error) => void; + onError?: (error: Error) => void /** * Callback when messages change */ - onMessagesChange?: (messages: UIMessage[]) => void; + onMessagesChange?: (messages: Array) => void /** * Callback when loading state changes */ - onLoadingChange?: (isLoading: boolean) => void; + 
onLoadingChange?: (isLoading: boolean) => void /** * Callback when error state changes */ - onErrorChange?: (error: Error | undefined) => void; + onErrorChange?: (error: Error | undefined) => void /** * Callback when a client-side tool needs to be executed * Tool has no execute function - client must provide the result */ onToolCall?: (args: { - toolCallId: string; - toolName: string; - input: any; - }) => Promise; + toolCallId: string + toolName: string + input: any + }) => Promise /** * Stream processing options (optional) @@ -151,17 +151,17 @@ export interface ChatClientOptions { * Strategy for when to emit text updates * Defaults to ImmediateStrategy (every chunk) */ - chunkStrategy?: ChunkStrategy; + chunkStrategy?: ChunkStrategy /** * Custom stream parser * Override to handle different stream formats */ - parser?: StreamParser; - }; + parser?: StreamParser + } } export interface ChatRequestBody { - messages: ModelMessage[]; - data?: Record; + messages: Array + data?: Record } diff --git a/packages/typescript/ai-client/tests/chat-client-abort.test.ts b/packages/typescript/ai-client/tests/chat-client-abort.test.ts index 7527a4ef4..3b77bedd9 100644 --- a/packages/typescript/ai-client/tests/chat-client-abort.test.ts +++ b/packages/typescript/ai-client/tests/chat-client-abort.test.ts @@ -1,238 +1,317 @@ -import { describe, it, expect, vi, beforeEach } from "vitest"; -import { ChatClient } from "../src/chat-client"; -import type { ConnectionAdapter, StreamChunk } from "../src/connection-adapters"; -import type { UIMessage } from "../src/types"; +import { beforeEach, describe, expect, it, vi } from 'vitest' +import { ChatClient } from '../src/chat-client' +import type { ConnectionAdapter } from '../src/connection-adapters' +import type { StreamChunk } from '@tanstack/ai' -describe("ChatClient - Abort Signal Handling", () => { - let mockAdapter: ConnectionAdapter; - let receivedAbortSignal: AbortSignal | undefined; +describe('ChatClient - Abort Signal Handling', () 
=> { + let mockAdapter: ConnectionAdapter + let receivedAbortSignal: AbortSignal | undefined beforeEach(() => { - receivedAbortSignal = undefined; - + receivedAbortSignal = undefined + mockAdapter = { - async *connect(messages, data, abortSignal) { - receivedAbortSignal = abortSignal; - + // eslint-disable-next-line @typescript-eslint/require-await + async *connect(_messages, _data, abortSignal) { + receivedAbortSignal = abortSignal + // Simulate streaming chunks - yield { type: "content", id: "1", model: "test", timestamp: Date.now(), delta: "Hello", content: "Hello", role: "assistant" }; - yield { type: "content", id: "1", model: "test", timestamp: Date.now(), delta: " World", content: "Hello World", role: "assistant" }; - yield { type: "done", id: "1", model: "test", timestamp: Date.now(), finishReason: "stop" }; + yield { + type: 'content', + id: '1', + model: 'test', + timestamp: Date.now(), + delta: 'Hello', + content: 'Hello', + role: 'assistant', + } + yield { + type: 'content', + id: '1', + model: 'test', + timestamp: Date.now(), + delta: ' World', + content: 'Hello World', + role: 'assistant', + } + yield { + type: 'done', + id: '1', + model: 'test', + timestamp: Date.now(), + finishReason: 'stop', + } }, - }; - }); + } + }) - it("should create AbortController and pass signal to adapter", async () => { + it('should create AbortController and pass signal to adapter', async () => { const client = new ChatClient({ connection: mockAdapter, - }); + }) const appendPromise = client.append({ - id: "user-1", - role: "user", - parts: [{ type: "text", content: "Hello" }], + id: 'user-1', + role: 'user', + parts: [{ type: 'text', content: 'Hello' }], createdAt: new Date(), - }); + }) // Wait a bit to ensure connect is called - await new Promise((resolve) => setTimeout(resolve, 10)); + await new Promise((resolve) => setTimeout(resolve, 10)) - expect(receivedAbortSignal).toBeDefined(); - expect(receivedAbortSignal).toBeInstanceOf(AbortSignal); - 
expect(receivedAbortSignal?.aborted).toBe(false); + expect(receivedAbortSignal).toBeDefined() + expect(receivedAbortSignal).toBeInstanceOf(AbortSignal) + expect(receivedAbortSignal?.aborted).toBe(false) - await appendPromise; - }); + await appendPromise + }) + + it('should abort request when stop() is called', async () => { + let abortControllerRef: AbortController | null = null - it("should abort request when stop() is called", async () => { - let abortControllerRef: AbortController | null = null; - const adapterWithAbort: ConnectionAdapter = { - async *connect(messages, data, abortSignal) { - abortControllerRef = new AbortController(); + async *connect(_messages, _data, abortSignal) { + abortControllerRef = new AbortController() if (abortSignal) { - abortSignal.addEventListener("abort", () => { - abortControllerRef?.abort(); - }); + abortSignal.addEventListener('abort', () => { + abortControllerRef?.abort() + }) } try { - yield { type: "content", id: "1", model: "test", timestamp: Date.now(), delta: "Hello", content: "Hello", role: "assistant" }; + yield { + type: 'content', + id: '1', + model: 'test', + timestamp: Date.now(), + delta: 'Hello', + content: 'Hello', + role: 'assistant', + } // Simulate long-running stream - await new Promise((resolve) => setTimeout(resolve, 100)); - yield { type: "content", id: "1", model: "test", timestamp: Date.now(), delta: " World", content: "Hello World", role: "assistant" }; + await new Promise((resolve) => setTimeout(resolve, 100)) + yield { + type: 'content', + id: '1', + model: 'test', + timestamp: Date.now(), + delta: ' World', + content: 'Hello World', + role: 'assistant', + } } catch (err) { // Abort errors are expected - if (err instanceof Error && err.name === "AbortError") { - return; + if (err instanceof Error && err.name === 'AbortError') { + return } - throw err; + throw err } }, - }; + } const client = new ChatClient({ connection: adapterWithAbort, - }); + }) const appendPromise = client.append({ - id: "user-1", 
- role: "user", - parts: [{ type: "text", content: "Hello" }], + id: 'user-1', + role: 'user', + parts: [{ type: 'text', content: 'Hello' }], createdAt: new Date(), - }); + }) // Wait a bit then stop - await new Promise((resolve) => setTimeout(resolve, 10)); - client.stop(); + await new Promise((resolve) => setTimeout(resolve, 10)) + client.stop() - await appendPromise; + await appendPromise - expect(client.getIsLoading()).toBe(false); - }); + expect(client.getIsLoading()).toBe(false) + }) - it("should preserve partial content when aborted", async () => { - const chunks: StreamChunk[] = []; - let yieldedChunks = 0; + it('should preserve partial content when aborted', async () => { + const chunks: Array = [] + let yieldedChunks = 0 const adapterWithPartial: ConnectionAdapter = { - async *connect(messages, data, abortSignal) { - yield { type: "content", id: "1", model: "test", timestamp: Date.now(), delta: "Hello", content: "Hello", role: "assistant" }; - yieldedChunks++; - + // eslint-disable-next-line @typescript-eslint/require-await + async *connect(_messages, _data, abortSignal) { + yield { + type: 'content', + id: '1', + model: 'test', + timestamp: Date.now(), + delta: 'Hello', + content: 'Hello', + role: 'assistant', + } + yieldedChunks++ + if (abortSignal?.aborted) { - return; + return } - yield { type: "content", id: "1", model: "test", timestamp: Date.now(), delta: " World", content: "Hello World", role: "assistant" }; - yieldedChunks++; + yield { + type: 'content', + id: '1', + model: 'test', + timestamp: Date.now(), + delta: ' World', + content: 'Hello World', + role: 'assistant', + } + yieldedChunks++ }, - }; + } const client = new ChatClient({ connection: adapterWithPartial, onChunk: (chunk) => { - chunks.push(chunk); + chunks.push(chunk) }, - }); + }) const appendPromise = client.append({ - id: "user-1", - role: "user", - parts: [{ type: "text", content: "Hello" }], + id: 'user-1', + role: 'user', + parts: [{ type: 'text', content: 'Hello' }], 
createdAt: new Date(), - }); + }) // Wait for first chunk then abort - await new Promise((resolve) => setTimeout(resolve, 10)); - client.stop(); + await new Promise((resolve) => setTimeout(resolve, 10)) + client.stop() - await appendPromise; + await appendPromise // Should have received at least one chunk before abort - expect(chunks.length).toBeGreaterThan(0); - expect(client.getMessages().length).toBeGreaterThan(0); - }); + expect(chunks.length).toBeGreaterThan(0) + expect(client.getMessages().length).toBeGreaterThan(0) + }) + + it('should handle abort gracefully without throwing error', async () => { + const errorSpy = vi.fn() - it("should handle abort gracefully without throwing error", async () => { - const errorSpy = vi.fn(); - const adapterWithAbort: ConnectionAdapter = { - async *connect(messages, data, abortSignal) { - yield { type: "content", id: "1", model: "test", timestamp: Date.now(), delta: "Hello", content: "Hello", role: "assistant" }; - + // eslint-disable-next-line @typescript-eslint/require-await + async *connect(_messages, _data, abortSignal) { + yield { + type: 'content', + id: '1', + model: 'test', + timestamp: Date.now(), + delta: 'Hello', + content: 'Hello', + role: 'assistant', + } + if (abortSignal?.aborted) { - return; + return } }, - }; + } const client = new ChatClient({ connection: adapterWithAbort, onError: errorSpy, - }); + }) const appendPromise = client.append({ - id: "user-1", - role: "user", - parts: [{ type: "text", content: "Hello" }], + id: 'user-1', + role: 'user', + parts: [{ type: 'text', content: 'Hello' }], createdAt: new Date(), - }); + }) - await new Promise((resolve) => setTimeout(resolve, 10)); - client.stop(); + await new Promise((resolve) => setTimeout(resolve, 10)) + client.stop() - await appendPromise; + await appendPromise // Should not have called onError for abort - expect(errorSpy).not.toHaveBeenCalled(); - expect(client.getError()).toBeUndefined(); - }); + expect(errorSpy).not.toHaveBeenCalled() + 
expect(client.getError()).toBeUndefined() + }) - it("should set isLoading to false after abort", async () => { + it('should set isLoading to false after abort', async () => { const adapterWithAbort: ConnectionAdapter = { - async *connect(messages, data, abortSignal) { - yield { type: "content", id: "1", model: "test", timestamp: Date.now(), delta: "Hello", content: "Hello", role: "assistant" }; - await new Promise((resolve) => setTimeout(resolve, 50)); + async *connect(_messages, _data, _abortSignal) { + yield { + type: 'content', + id: '1', + model: 'test', + timestamp: Date.now(), + delta: 'Hello', + content: 'Hello', + role: 'assistant', + } + await new Promise((resolve) => setTimeout(resolve, 50)) }, - }; + } const client = new ChatClient({ connection: adapterWithAbort, - }); + }) const appendPromise = client.append({ - id: "user-1", - role: "user", - parts: [{ type: "text", content: "Hello" }], + id: 'user-1', + role: 'user', + parts: [{ type: 'text', content: 'Hello' }], createdAt: new Date(), - }); + }) - expect(client.getIsLoading()).toBe(true); + expect(client.getIsLoading()).toBe(true) - await new Promise((resolve) => setTimeout(resolve, 10)); - client.stop(); + await new Promise((resolve) => setTimeout(resolve, 10)) + client.stop() - await appendPromise; + await appendPromise - expect(client.getIsLoading()).toBe(false); - }); + expect(client.getIsLoading()).toBe(false) + }) - it("should create new AbortController for each request", async () => { - const abortSignals: AbortSignal[] = []; + it('should create new AbortController for each request', async () => { + const abortSignals: Array = [] const adapter: ConnectionAdapter = { - async *connect(messages, data, abortSignal) { + // eslint-disable-next-line @typescript-eslint/require-await + async *connect(_messages, _data, abortSignal) { if (abortSignal) { - abortSignals.push(abortSignal); + abortSignals.push(abortSignal) + } + yield { + type: 'done', + id: '1', + model: 'test', + timestamp: Date.now(), + 
finishReason: 'stop', } - yield { type: "done", id: "1", model: "test", timestamp: Date.now(), finishReason: "stop" }; }, - }; + } const client = new ChatClient({ connection: adapter, - }); + }) // First request await client.append({ - id: "user-1", - role: "user", - parts: [{ type: "text", content: "Hello 1" }], + id: 'user-1', + role: 'user', + parts: [{ type: 'text', content: 'Hello 1' }], createdAt: new Date(), - }); + }) // Second request await client.append({ - id: "user-2", - role: "user", - parts: [{ type: "text", content: "Hello 2" }], + id: 'user-2', + role: 'user', + parts: [{ type: 'text', content: 'Hello 2' }], createdAt: new Date(), - }); + }) - expect(abortSignals.length).toBe(2); + expect(abortSignals.length).toBe(2) // Each should be a different signal instance - expect(abortSignals[0]).not.toBe(abortSignals[1]); - }); -}); - + expect(abortSignals[0]).not.toBe(abortSignals[1]) + }) +}) diff --git a/packages/typescript/ai-client/tests/chat-client.test.ts b/packages/typescript/ai-client/tests/chat-client.test.ts index 3c6081a67..db8afba75 100644 --- a/packages/typescript/ai-client/tests/chat-client.test.ts +++ b/packages/typescript/ai-client/tests/chat-client.test.ts @@ -1,582 +1,580 @@ -import { describe, it, expect, vi } from "vitest"; -import { ChatClient } from "../src/chat-client"; +import { describe, expect, it, vi } from 'vitest' +import { ChatClient } from '../src/chat-client' import { createMockConnectionAdapter, createTextChunks, createToolCallChunks, -} from "./test-utils"; -import type { UIMessage } from "../src/types"; - -describe("ChatClient", () => { - describe("constructor", () => { - it("should create a client with default options", () => { - const adapter = createMockConnectionAdapter(); - const client = new ChatClient({ connection: adapter }); - - expect(client.getMessages()).toEqual([]); - expect(client.getIsLoading()).toBe(false); - expect(client.getError()).toBeUndefined(); - }); - - it("should initialize with provided 
messages", () => { - const adapter = createMockConnectionAdapter(); - const initialMessages: UIMessage[] = [ +} from './test-utils' +import type { UIMessage } from '../src/types' + +describe('ChatClient', () => { + describe('constructor', () => { + it('should create a client with default options', () => { + const adapter = createMockConnectionAdapter() + const client = new ChatClient({ connection: adapter }) + + expect(client.getMessages()).toEqual([]) + expect(client.getIsLoading()).toBe(false) + expect(client.getError()).toBeUndefined() + }) + + it('should initialize with provided messages', () => { + const adapter = createMockConnectionAdapter() + const initialMessages: Array<UIMessage> = [ { - id: "msg-1", - role: "user", - parts: [{ type: "text", content: "Hello" }], + id: 'msg-1', + role: 'user', + parts: [{ type: 'text', content: 'Hello' }], createdAt: new Date(), }, - ]; + ] const client = new ChatClient({ connection: adapter, initialMessages, - }); + }) - expect(client.getMessages()).toEqual(initialMessages); - }); + expect(client.getMessages()).toEqual(initialMessages) + }) - it("should use provided id or generate one", async () => { + it('should use provided id or generate one', async () => { const adapter = createMockConnectionAdapter({ - chunks: createTextChunks("Response"), - }); + chunks: createTextChunks('Response'), + }) const client1 = new ChatClient({ connection: adapter, - id: "custom-id", - }); + id: 'custom-id', + }) const client2 = new ChatClient({ connection: adapter, - }); + }) // Message IDs are generated using the client's uniqueId as prefix // Format: `${this.uniqueId}-${Date.now()}-${random}` // So we can verify the custom ID is used by checking message ID format - await client1.sendMessage("Test"); - await client2.sendMessage("Test"); + await client1.sendMessage('Test') + await client2.sendMessage('Test') - const messages1 = client1.getMessages(); - const messages2 = client2.getMessages(); + const messages1 = client1.getMessages() + const 
messages2 = client2.getMessages() // Both should have messages - expect(messages1.length).toBeGreaterThan(0); - expect(messages2.length).toBeGreaterThan(0); + expect(messages1.length).toBeGreaterThan(0) + expect(messages2.length).toBeGreaterThan(0) // Message IDs from client1 should start with "custom-id-" - const client1MessageId = messages1[0].id; - expect(client1MessageId).toMatch(/^custom-id-/); + const client1MessageId = messages1[0]?.id + expect(client1MessageId).toMatch(/^custom-id-/) // Message IDs from client2 should NOT start with "custom-id-" // (they'll have a generated ID like "chat-...") - const client2MessageId = messages2[0].id; - expect(client2MessageId).not.toMatch(/^custom-id-/); - expect(client2MessageId).toMatch(/^chat-/); - }); - }); + const client2MessageId = messages2[0]?.id + expect(client2MessageId).not.toMatch(/^custom-id-/) + expect(client2MessageId).toMatch(/^chat-/) + }) + }) - describe("sendMessage", () => { - it("should send a message and append it", async () => { - const chunks = createTextChunks("Hello, world!"); - const adapter = createMockConnectionAdapter({ chunks }); + describe('sendMessage', () => { + it('should send a message and append it', async () => { + const chunks = createTextChunks('Hello, world!') + const adapter = createMockConnectionAdapter({ chunks }) - const client = new ChatClient({ connection: adapter }); + const client = new ChatClient({ connection: adapter }) - await client.sendMessage("Hello"); + await client.sendMessage('Hello') - const messages = client.getMessages(); - expect(messages.length).toBeGreaterThan(0); - expect(messages[0].role).toBe("user"); - expect(messages[0].parts[0]).toEqual({ - type: "text", - content: "Hello", - }); - }); + const messages = client.getMessages() + expect(messages.length).toBeGreaterThan(0) + expect(messages[0]?.role).toBe('user') + expect(messages[0]?.parts[0]).toEqual({ + type: 'text', + content: 'Hello', + }) + }) - it("should create and return assistant message from 
stream chunks", async () => { - const chunks = createTextChunks("Hello, world!"); - const adapter = createMockConnectionAdapter({ chunks }); + it('should create and return assistant message from stream chunks', async () => { + const chunks = createTextChunks('Hello, world!') + const adapter = createMockConnectionAdapter({ chunks }) - const client = new ChatClient({ connection: adapter }); + const client = new ChatClient({ connection: adapter }) - await client.sendMessage("Hello"); + await client.sendMessage('Hello') - const messages = client.getMessages(); + const messages = client.getMessages() // Should have both user and assistant messages - expect(messages.length).toBeGreaterThanOrEqual(2); + expect(messages.length).toBeGreaterThanOrEqual(2) // Find the assistant message created from chunks - const assistantMessage = messages.find((m) => m.role === "assistant"); - expect(assistantMessage).toBeDefined(); + const assistantMessage = messages.find((m) => m.role === 'assistant') + expect(assistantMessage).toBeDefined() if (assistantMessage) { // Verify the assistant message is readable and has content - expect(assistantMessage.id).toBeTruthy(); - expect(assistantMessage.createdAt).toBeInstanceOf(Date); - expect(assistantMessage.parts.length).toBeGreaterThan(0); + expect(assistantMessage.id).toBeTruthy() + expect(assistantMessage.createdAt).toBeInstanceOf(Date) + expect(assistantMessage.parts.length).toBeGreaterThan(0) // Verify it has text content from the chunks - const textPart = assistantMessage.parts.find((p) => p.type === "text"); - expect(textPart).toBeDefined(); - if (textPart && textPart.type === "text") { - expect(textPart.content).toBe("Hello, world!"); + const textPart = assistantMessage.parts.find((p) => p.type === 'text') + expect(textPart).toBeDefined() + if (textPart) { + expect(textPart.content).toBe('Hello, world!') } } - }); + }) - it("should not send empty messages", async () => { - const adapter = createMockConnectionAdapter(); - const client = 
new ChatClient({ connection: adapter }); + it('should not send empty messages', async () => { + const adapter = createMockConnectionAdapter() + const client = new ChatClient({ connection: adapter }) - await client.sendMessage(""); - await client.sendMessage(" "); + await client.sendMessage('') + await client.sendMessage(' ') - expect(client.getMessages().length).toBe(0); - }); + expect(client.getMessages().length).toBe(0) + }) - it("should not send message while loading", async () => { + it('should not send message while loading', async () => { const adapter = createMockConnectionAdapter({ - chunks: createTextChunks("Response"), + chunks: createTextChunks('Response'), chunkDelay: 100, - }); - const client = new ChatClient({ connection: adapter }); + }) + const client = new ChatClient({ connection: adapter }) - const promise1 = client.sendMessage("First"); - const promise2 = client.sendMessage("Second"); + const promise1 = client.sendMessage('First') + const promise2 = client.sendMessage('Second') - await Promise.all([promise1, promise2]); + await Promise.all([promise1, promise2]) // Should only have one user message since second was blocked - const userMessages = client - .getMessages() - .filter((m) => m.role === "user"); - expect(userMessages.length).toBe(1); - }); - }); - - describe("append", () => { - it("should append a UIMessage", async () => { - const chunks = createTextChunks("Response"); - const adapter = createMockConnectionAdapter({ chunks }); - const client = new ChatClient({ connection: adapter }); + const userMessages = client.getMessages().filter((m) => m.role === 'user') + expect(userMessages.length).toBe(1) + }) + }) + + describe('append', () => { + it('should append a UIMessage', async () => { + const chunks = createTextChunks('Response') + const adapter = createMockConnectionAdapter({ chunks }) + const client = new ChatClient({ connection: adapter }) const message: UIMessage = { - id: "user-1", - role: "user", - parts: [{ type: "text", content: 
"Hello" }], + id: 'user-1', + role: 'user', + parts: [{ type: 'text', content: 'Hello' }], createdAt: new Date(), - }; + } - await client.append(message); + await client.append(message) - const messages = client.getMessages(); - expect(messages.length).toBeGreaterThan(0); - expect(messages[0].id).toBe("user-1"); - }); + const messages = client.getMessages() + expect(messages.length).toBeGreaterThan(0) + expect(messages[0]?.id).toBe('user-1') + }) - it("should convert and append a ModelMessage", async () => { - const chunks = createTextChunks("Response"); - const adapter = createMockConnectionAdapter({ chunks }); - const client = new ChatClient({ connection: adapter }); + it('should convert and append a ModelMessage', async () => { + const chunks = createTextChunks('Response') + const adapter = createMockConnectionAdapter({ chunks }) + const client = new ChatClient({ connection: adapter }) await client.append({ - role: "user", - content: "Hello from model", - }); - - const messages = client.getMessages(); - expect(messages.length).toBeGreaterThan(0); - expect(messages[0].role).toBe("user"); - expect(messages[0].parts[0]).toEqual({ - type: "text", - content: "Hello from model", - }); - }); - - it("should generate id and createdAt if missing", async () => { - const chunks = createTextChunks("Response"); - const adapter = createMockConnectionAdapter({ chunks }); - const client = new ChatClient({ connection: adapter }); + role: 'user', + content: 'Hello from model', + }) + + const messages = client.getMessages() + expect(messages.length).toBeGreaterThan(0) + expect(messages[0]?.role).toBe('user') + expect(messages[0]?.parts[0]).toEqual({ + type: 'text', + content: 'Hello from model', + }) + }) + + it('should generate id and createdAt if missing', async () => { + const chunks = createTextChunks('Response') + const adapter = createMockConnectionAdapter({ chunks }) + const client = new ChatClient({ connection: adapter }) const message: UIMessage = { - id: "", - role: 
"user", - parts: [{ type: "text", content: "Hello" }], - }; + id: '', + role: 'user', + parts: [{ type: 'text', content: 'Hello' }], + } - await client.append(message); + await client.append(message) - const messages = client.getMessages(); - expect(messages[0].id).toBeTruthy(); - expect(messages[0].createdAt).toBeInstanceOf(Date); - }); - }); + const messages = client.getMessages() + expect(messages[0]?.id).toBeTruthy() + expect(messages[0]?.createdAt).toBeInstanceOf(Date) + }) + }) - describe("reload", () => { - it("should reload from last user message", async () => { - const chunks = createTextChunks("Response"); - const adapter = createMockConnectionAdapter({ chunks }); - const client = new ChatClient({ connection: adapter }); + describe('reload', () => { + it('should reload from last user message', async () => { + const chunks = createTextChunks('Response') + const adapter = createMockConnectionAdapter({ chunks }) + const client = new ChatClient({ connection: adapter }) - await client.sendMessage("First"); - await client.sendMessage("Second"); + await client.sendMessage('First') + await client.sendMessage('Second') - await client.reload(); + await client.reload() // After reload, messages after the last user message are removed // Then the last user message is resent, which triggers a new assistant response - const messagesAfter = client.getMessages(); + const messagesAfter = client.getMessages() // Should have the same user messages, plus a new assistant response - const userMessagesAfter = messagesAfter.filter((m) => m.role === "user"); - expect(userMessagesAfter.length).toBeGreaterThanOrEqual(2); + const userMessagesAfter = messagesAfter.filter((m) => m.role === 'user') + expect(userMessagesAfter.length).toBeGreaterThanOrEqual(2) // The last user message should match what was resent const lastUserMessageAfter = - userMessagesAfter[userMessagesAfter.length - 1]; - expect(lastUserMessageAfter.parts[0]).toEqual({ - type: "text", - content: "Second", - }); - 
}); + userMessagesAfter[userMessagesAfter.length - 1] + expect(lastUserMessageAfter?.parts[0]).toEqual({ + type: 'text', + content: 'Second', + }) + }) - it("should do nothing if no user messages", async () => { - const adapter = createMockConnectionAdapter(); - const client = new ChatClient({ connection: adapter }); + it('should do nothing if no user messages', async () => { + const adapter = createMockConnectionAdapter() + const client = new ChatClient({ connection: adapter }) - await client.reload(); + await client.reload() - expect(client.getMessages().length).toBe(0); - }); + expect(client.getMessages().length).toBe(0) + }) - it("should do nothing if messages array is empty", async () => { - const adapter = createMockConnectionAdapter(); - const client = new ChatClient({ connection: adapter }); + it('should do nothing if messages array is empty', async () => { + const adapter = createMockConnectionAdapter() + const client = new ChatClient({ connection: adapter }) - await client.reload(); + await client.reload() - expect(client.getMessages().length).toBe(0); - }); - }); + expect(client.getMessages().length).toBe(0) + }) + }) - describe("stop", () => { - it("should stop loading and abort request", async () => { - const chunks = createTextChunks("Long response that takes time"); + describe('stop', () => { + it('should stop loading and abort request', async () => { + const chunks = createTextChunks('Long response that takes time') const adapter = createMockConnectionAdapter({ chunks, chunkDelay: 50, - }); - const client = new ChatClient({ connection: adapter }); + }) + const client = new ChatClient({ connection: adapter }) const appendPromise = client.append({ - role: "user", - content: "Hello", - }); + role: 'user', + content: 'Hello', + }) // Wait a bit then stop - await new Promise((resolve) => setTimeout(resolve, 10)); - client.stop(); + await new Promise((resolve) => setTimeout(resolve, 10)) + client.stop() - await appendPromise; + await appendPromise - 
expect(client.getIsLoading()).toBe(false); - }); - }); + expect(client.getIsLoading()).toBe(false) + }) + }) - describe("clear", () => { - it("should clear all messages", async () => { - const chunks = createTextChunks("Response"); - const adapter = createMockConnectionAdapter({ chunks }); - const client = new ChatClient({ connection: adapter }); + describe('clear', () => { + it('should clear all messages', async () => { + const chunks = createTextChunks('Response') + const adapter = createMockConnectionAdapter({ chunks }) + const client = new ChatClient({ connection: adapter }) - await client.sendMessage("Hello"); + await client.sendMessage('Hello') - expect(client.getMessages().length).toBeGreaterThan(0); + expect(client.getMessages().length).toBeGreaterThan(0) - client.clear(); + client.clear() - expect(client.getMessages().length).toBe(0); - expect(client.getError()).toBeUndefined(); - }); - }); + expect(client.getMessages().length).toBe(0) + expect(client.getError()).toBeUndefined() + }) + }) - describe("callbacks", () => { - it("should call onMessagesChange when messages update", async () => { - const chunks = createTextChunks("Response"); - const adapter = createMockConnectionAdapter({ chunks }); - const onMessagesChange = vi.fn(); + describe('callbacks', () => { + it('should call onMessagesChange when messages update', async () => { + const chunks = createTextChunks('Response') + const adapter = createMockConnectionAdapter({ chunks }) + const onMessagesChange = vi.fn() const client = new ChatClient({ connection: adapter, onMessagesChange, - }); + }) - await client.sendMessage("Hello"); + await client.sendMessage('Hello') - expect(onMessagesChange).toHaveBeenCalled(); - expect(onMessagesChange.mock.calls.length).toBeGreaterThan(0); - }); + expect(onMessagesChange).toHaveBeenCalled() + expect(onMessagesChange.mock.calls.length).toBeGreaterThan(0) + }) - it("should call onLoadingChange when loading state changes", async () => { - const chunks = 
createTextChunks("Response"); - const adapter = createMockConnectionAdapter({ chunks }); - const onLoadingChange = vi.fn(); + it('should call onLoadingChange when loading state changes', async () => { + const chunks = createTextChunks('Response') + const adapter = createMockConnectionAdapter({ chunks }) + const onLoadingChange = vi.fn() const client = new ChatClient({ connection: adapter, onLoadingChange, - }); + }) - const promise = client.sendMessage("Hello"); + const promise = client.sendMessage('Hello') // Should be called with true - expect(onLoadingChange).toHaveBeenCalledWith(true); + expect(onLoadingChange).toHaveBeenCalledWith(true) - await promise; + await promise // Should be called with false - expect(onLoadingChange).toHaveBeenCalledWith(false); - }); + expect(onLoadingChange).toHaveBeenCalledWith(false) + }) - it("should call onChunk for each chunk", async () => { - const chunks = createTextChunks("Hello"); - const adapter = createMockConnectionAdapter({ chunks }); - const onChunk = vi.fn(); + it('should call onChunk for each chunk', async () => { + const chunks = createTextChunks('Hello') + const adapter = createMockConnectionAdapter({ chunks }) + const onChunk = vi.fn() const client = new ChatClient({ connection: adapter, onChunk, - }); + }) - await client.sendMessage("Hello"); + await client.sendMessage('Hello') - expect(onChunk).toHaveBeenCalled(); - expect(onChunk.mock.calls.length).toBeGreaterThan(0); - }); + expect(onChunk).toHaveBeenCalled() + expect(onChunk.mock.calls.length).toBeGreaterThan(0) + }) - it("should call onFinish when stream completes", async () => { - const chunks = createTextChunks("Response"); - const adapter = createMockConnectionAdapter({ chunks }); - const onFinish = vi.fn(); + it('should call onFinish when stream completes', async () => { + const chunks = createTextChunks('Response') + const adapter = createMockConnectionAdapter({ chunks }) + const onFinish = vi.fn() const client = new ChatClient({ connection: adapter, 
onFinish, - }); + }) - await client.sendMessage("Hello"); + await client.sendMessage('Hello') - expect(onFinish).toHaveBeenCalled(); - const finishCall = onFinish.mock.calls[0][0]; - expect(finishCall.role).toBe("assistant"); - }); + expect(onFinish).toHaveBeenCalled() + const finishCall = onFinish.mock.calls[0]?.[0] + expect(finishCall?.role).toBe('assistant') + }) - it("should call onError when error occurs", async () => { - const error = new Error("Connection failed"); + it('should call onError when error occurs', async () => { + const error = new Error('Connection failed') const adapter = createMockConnectionAdapter({ shouldError: true, error, - }); - const onError = vi.fn(); + }) + const onError = vi.fn() const client = new ChatClient({ connection: adapter, onError, - }); + }) - await client.sendMessage("Hello"); + await client.sendMessage('Hello') - expect(onError).toHaveBeenCalledWith(error); - expect(client.getError()).toBe(error); - }); - }); + expect(onError).toHaveBeenCalledWith(error) + expect(client.getError()).toBe(error) + }) + }) - describe("tool calls", () => { - it("should handle tool calls from stream", async () => { + describe('tool calls', () => { + it('should handle tool calls from stream', async () => { const chunks = createToolCallChunks([ - { id: "tool-1", name: "get_weather", arguments: '{"city": "NYC"}' }, - ]); - const adapter = createMockConnectionAdapter({ chunks }); - const client = new ChatClient({ connection: adapter }); + { id: 'tool-1', name: 'get_weather', arguments: '{"city": "NYC"}' }, + ]) + const adapter = createMockConnectionAdapter({ chunks }) + const client = new ChatClient({ connection: adapter }) - await client.sendMessage("What's the weather?"); + await client.sendMessage("What's the weather?") - const messages = client.getMessages(); - const assistantMessage = messages.find((m) => m.role === "assistant"); + const messages = client.getMessages() + const assistantMessage = messages.find((m) => m.role === 'assistant') - 
expect(assistantMessage).toBeDefined(); + expect(assistantMessage).toBeDefined() if (assistantMessage) { const toolCallPart = assistantMessage.parts.find( - (p) => p.type === "tool-call" - ); - expect(toolCallPart).toBeDefined(); - if (toolCallPart && toolCallPart.type === "tool-call") { - expect(toolCallPart.name).toBe("get_weather"); + (p) => p.type === 'tool-call', + ) + expect(toolCallPart).toBeDefined() + if (toolCallPart) { + expect(toolCallPart.name).toBe('get_weather') } } - }); + }) - it("should execute tool call when onToolCall callback provided", async () => { + it('should execute tool call when onToolCall callback provided', async () => { const chunks = createToolCallChunks([ - { id: "tool-1", name: "test_tool", arguments: '{"x": 1}' }, - ]); - const adapter = createMockConnectionAdapter({ chunks }); - const onToolCall = vi.fn().mockResolvedValue({ result: "success" }); + { id: 'tool-1', name: 'test_tool', arguments: '{"x": 1}' }, + ]) + const adapter = createMockConnectionAdapter({ chunks }) + const onToolCall = vi.fn().mockResolvedValue({ result: 'success' }) const client = new ChatClient({ connection: adapter, onToolCall, - }); + }) - await client.sendMessage("Test"); + await client.sendMessage('Test') - expect(onToolCall).toHaveBeenCalled(); - const call = onToolCall.mock.calls[0][0]; - expect(call.toolName).toBe("test_tool"); - expect(call.input).toEqual({ x: 1 }); - }); + expect(onToolCall).toHaveBeenCalled() + const call = onToolCall.mock.calls[0]?.[0] + expect(call.toolName).toBe('test_tool') + expect(call.input).toEqual({ x: 1 }) + }) - it("should handle tool call errors", async () => { - const toolCallId = "tool-1"; + it('should handle tool call errors', async () => { + const toolCallId = 'tool-1' const chunks = createToolCallChunks([ - { id: toolCallId, name: "test_tool", arguments: '{"x": 1}' }, - ]); - const adapter = createMockConnectionAdapter({ chunks }); + { id: toolCallId, name: 'test_tool', arguments: '{"x": 1}' }, + ]) + const 
adapter = createMockConnectionAdapter({ chunks }) // Capture the tool call ID from the callback - let capturedToolCallId: string | undefined; - const onToolCall = vi.fn().mockImplementation(async (args) => { - capturedToolCallId = args.toolCallId; - throw new Error("Tool execution failed"); - }); + let capturedToolCallId: string | undefined + const onToolCall = vi.fn().mockImplementation((args) => { + capturedToolCallId = args.toolCallId + throw new Error('Tool execution failed') + }) const client = new ChatClient({ connection: adapter, onToolCall, - }); + }) - await client.sendMessage("Test"); + await client.sendMessage('Test') - expect(onToolCall).toHaveBeenCalled(); - expect(capturedToolCallId).toBe(toolCallId); + expect(onToolCall).toHaveBeenCalled() + expect(capturedToolCallId).toBe(toolCallId) // Wait for async operations to complete (addToolResult is async) // Need to wait for the stream to finish and addToolResult to complete - await new Promise((resolve) => setTimeout(resolve, 200)); + await new Promise((resolve) => setTimeout(resolve, 200)) // Should have tool call with error output - const messages = client.getMessages(); - const assistantMessage = messages.find((m) => m.role === "assistant"); - expect(assistantMessage).toBeDefined(); + const messages = client.getMessages() + const assistantMessage = messages.find((m) => m.role === 'assistant') + expect(assistantMessage).toBeDefined() if (assistantMessage) { // Find any tool call part const allToolCalls = assistantMessage.parts.filter( - (p) => p.type === "tool-call" - ); - expect(allToolCalls.length).toBeGreaterThan(0); + (p) => p.type === 'tool-call', + ) + expect(allToolCalls.length).toBeGreaterThan(0) // Find the tool call part by the captured ID const toolCallPart = allToolCalls.find( - (p) => p.type === "tool-call" && p.id === capturedToolCallId - ); + (p) => p.id === capturedToolCallId, + ) // The tool call part should exist - expect(toolCallPart).toBeDefined(); + 
expect(toolCallPart).toBeDefined() - if (toolCallPart && toolCallPart.type === "tool-call") { + if (toolCallPart) { // After error, output should be set with error object // Note: The output might be set asynchronously, so we check if it exists // If it doesn't exist yet, the error handling still worked (onToolCall was called) if (toolCallPart.output !== undefined) { expect(toolCallPart.output).toEqual({ - error: "Tool execution failed", - }); + error: 'Tool execution failed', + }) } else { // Output not set yet, but error was handled (onToolCall was called with error) // This is acceptable - the error was caught and handled - expect(onToolCall).toHaveBeenCalled(); + expect(onToolCall).toHaveBeenCalled() } } } - }); - }); + }) + }) - describe("addToolResult", () => { - it("should add tool result and update message", async () => { + describe('addToolResult', () => { + it('should add tool result and update message', async () => { const chunks = createToolCallChunks([ - { id: "tool-1", name: "test_tool", arguments: "{}" }, - ]); - const adapter = createMockConnectionAdapter({ chunks }); - const client = new ChatClient({ connection: adapter }); + { id: 'tool-1', name: 'test_tool', arguments: '{}' }, + ]) + const adapter = createMockConnectionAdapter({ chunks }) + const client = new ChatClient({ connection: adapter }) - await client.sendMessage("Test"); + await client.sendMessage('Test') // Find the tool call - const messages = client.getMessages(); - const assistantMessage = messages.find((m) => m.role === "assistant"); + const messages = client.getMessages() + const assistantMessage = messages.find((m) => m.role === 'assistant') const toolCallPart = assistantMessage?.parts.find( - (p) => p.type === "tool-call" - ); + (p) => p.type === 'tool-call', + ) - if (toolCallPart && toolCallPart.type === "tool-call") { + if (toolCallPart) { await client.addToolResult({ toolCallId: toolCallPart.id, tool: toolCallPart.name, - output: { result: "success" }, - }); + output: { 
result: 'success' }, + }) // Tool call should have output - const updatedMessages = client.getMessages(); + const updatedMessages = client.getMessages() const updatedAssistant = updatedMessages.find( - (m) => m.role === "assistant" - ); + (m) => m.role === 'assistant', + ) const updatedToolCall = updatedAssistant?.parts.find( - (p) => p.type === "tool-call" && p.id === toolCallPart.id - ); + (p) => p.type === 'tool-call' && p.id === toolCallPart.id, + ) - if (updatedToolCall && updatedToolCall.type === "tool-call") { - expect(updatedToolCall.output).toEqual({ result: "success" }); + if (updatedToolCall && updatedToolCall.type === 'tool-call') { + expect(updatedToolCall.output).toEqual({ result: 'success' }) } } - }); - }); + }) + }) - describe("error handling", () => { - it("should set error state on connection failure", async () => { - const error = new Error("Network error"); + describe('error handling', () => { + it('should set error state on connection failure', async () => { + const error = new Error('Network error') const adapter = createMockConnectionAdapter({ shouldError: true, error, - }); - const client = new ChatClient({ connection: adapter }); + }) + const client = new ChatClient({ connection: adapter }) - await client.sendMessage("Hello"); + await client.sendMessage('Hello') - expect(client.getError()).toBe(error); - }); + expect(client.getError()).toBe(error) + }) - it("should clear error on successful request", async () => { + it('should clear error on successful request', async () => { const errorAdapter = createMockConnectionAdapter({ shouldError: true, - error: new Error("First error"), - }); + error: new Error('First error'), + }) const successAdapter = createMockConnectionAdapter({ - chunks: createTextChunks("Success"), - }); + chunks: createTextChunks('Success'), + }) - const client = new ChatClient({ connection: errorAdapter }); + const client = new ChatClient({ connection: errorAdapter }) - await client.sendMessage("Fail"); - 
expect(client.getError()).toBeDefined(); + await client.sendMessage('Fail') + expect(client.getError()).toBeDefined() // @ts-ignore - Replace adapter for second request - client.connection = successAdapter; + client.connection = successAdapter - await client.sendMessage("Success"); - expect(client.getError()).toBeUndefined(); - }); - }); -}); + await client.sendMessage('Success') + expect(client.getError()).toBeUndefined() + }) + }) +}) diff --git a/packages/typescript/ai-client/tests/connection-adapters-abort.test.ts b/packages/typescript/ai-client/tests/connection-adapters-abort.test.ts index cc130778b..6ff18355a 100644 --- a/packages/typescript/ai-client/tests/connection-adapters-abort.test.ts +++ b/packages/typescript/ai-client/tests/connection-adapters-abort.test.ts @@ -1,262 +1,271 @@ -import { describe, it, expect, vi, beforeEach, afterEach } from "vitest"; -import { fetchServerSentEvents, fetchHttpStream } from "../src/connection-adapters"; -import type { StreamChunk } from "@tanstack/ai"; +import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest' +import { + fetchHttpStream, + fetchServerSentEvents, +} from '../src/connection-adapters' +import type { StreamChunk } from '@tanstack/ai' -describe("Connection Adapters - Abort Signal Handling", () => { - let originalFetch: typeof fetch; - let fetchMock: ReturnType<typeof vi.fn>; +describe('Connection Adapters - Abort Signal Handling', () => { + let originalFetch: typeof fetch + let fetchMock: ReturnType<typeof vi.fn> beforeEach(() => { - originalFetch = global.fetch; - fetchMock = vi.fn(); - global.fetch = fetchMock; - }); + originalFetch = global.fetch + fetchMock = vi.fn() + // @ts-ignore - we're mocking fetch here + global.fetch = fetchMock + }) afterEach(() => { - global.fetch = originalFetch; - vi.clearAllMocks(); - }); + global.fetch = originalFetch + vi.clearAllMocks() + }) - describe("fetchServerSentEvents", () => { - it("should pass abortSignal to fetch", async () => { - const abortController = new 
AbortController(); - const abortSignal = abortController.signal; + describe('fetchServerSentEvents', () => { + it('should pass abortSignal to fetch', async () => { + const abortController = new AbortController() + const abortSignal = abortController.signal const mockResponse = { ok: true, body: { getReader: () => ({ - read: async () => ({ done: true, value: undefined }), + read: () => ({ done: true, value: undefined }), releaseLock: vi.fn(), }), }, - }; + } - fetchMock.mockResolvedValue(mockResponse as any); + fetchMock.mockResolvedValue(mockResponse as any) - const adapter = fetchServerSentEvents("/api/chat"); + const adapter = fetchServerSentEvents('/api/chat') const generator = adapter.connect( - [{ role: "user", content: "Hello" }], + [{ role: 'user', content: 'Hello' }], undefined, - abortSignal - ); + abortSignal, + ) // Consume generator to trigger fetch for await (const _ of generator) { // Consume all chunks } - expect(fetchMock).toHaveBeenCalled(); - const fetchCall = fetchMock.mock.calls[0]; - expect(fetchCall[1]?.signal).toBe(abortSignal); - }); + expect(fetchMock).toHaveBeenCalled() + const fetchCall = fetchMock.mock.calls[0] + expect(fetchCall?.[1]?.signal).toBe(abortSignal) + }) - it("should use provided abortSignal over options.signal", async () => { - const providedSignal = new AbortController().signal; - const optionsSignal = new AbortController().signal; + it('should use provided abortSignal over options.signal', async () => { + const providedSignal = new AbortController().signal + const optionsSignal = new AbortController().signal const mockResponse = { ok: true, body: { getReader: () => ({ - read: async () => ({ done: true, value: undefined }), + read: () => ({ done: true, value: undefined }), releaseLock: vi.fn(), }), }, - }; + } - fetchMock.mockResolvedValue(mockResponse as any); + fetchMock.mockResolvedValue(mockResponse as any) - const adapter = fetchServerSentEvents("/api/chat", { + const adapter = fetchServerSentEvents('/api/chat', { 
signal: optionsSignal, - }); + }) const generator = adapter.connect( - [{ role: "user", content: "Hello" }], + [{ role: 'user', content: 'Hello' }], undefined, - providedSignal - ); + providedSignal, + ) for await (const _ of generator) { // Consume all chunks } - const fetchCall = fetchMock.mock.calls[0]; - expect(fetchCall[1]?.signal).toBe(providedSignal); - }); + const fetchCall = fetchMock.mock.calls[0] + expect(fetchCall?.[1]?.signal).toBe(providedSignal) + }) - it("should stop reading stream when aborted", async () => { - const abortController = new AbortController(); - const abortSignal = abortController.signal; + it('should stop reading stream when aborted', async () => { + const abortController = new AbortController() + const abortSignal = abortController.signal - let readCount = 0; + let readCount = 0 const mockReader = { - read: async () => { - readCount++; + read: () => { + readCount++ if (readCount === 1) { // Abort after first read - abortController.abort(); + abortController.abort() return { done: false, - value: new TextEncoder().encode("data: {\"type\":\"content\",\"id\":\"1\",\"model\":\"test\",\"timestamp\":123,\"delta\":\"Hello\",\"content\":\"Hello\",\"role\":\"assistant\"}\n\n"), - }; + value: new TextEncoder().encode( + 'data: {"type":"content","id":"1","model":"test","timestamp":123,"delta":"Hello","content":"Hello","role":"assistant"}\n\n', + ), + } } - return { done: true, value: undefined }; + return { done: true, value: undefined } }, releaseLock: vi.fn(), - }; + } const mockResponse = { ok: true, body: { getReader: () => mockReader, }, - }; + } - fetchMock.mockResolvedValue(mockResponse as any); + fetchMock.mockResolvedValue(mockResponse as any) - const adapter = fetchServerSentEvents("/api/chat"); + const adapter = fetchServerSentEvents('/api/chat') const generator = adapter.connect( - [{ role: "user", content: "Hello" }], + [{ role: 'user', content: 'Hello' }], undefined, - abortSignal - ); + abortSignal, + ) - const chunks: 
StreamChunk[] = []; + const chunks: Array<StreamChunk> = [] for await (const chunk of generator) { - chunks.push(chunk); + chunks.push(chunk) } // Should have read at least once but stopped after abort - expect(readCount).toBeGreaterThan(0); - expect(mockReader.releaseLock).toHaveBeenCalled(); - }); + expect(readCount).toBeGreaterThan(0) + expect(mockReader.releaseLock).toHaveBeenCalled() + }) - it("should check abortSignal before each read", async () => { - const abortController = new AbortController(); - const abortSignal = abortController.signal; + it('should check abortSignal before each read', async () => { + const abortController = new AbortController() + const abortSignal = abortController.signal - let readCount = 0; + let readCount = 0 const mockReader = { - read: async () => { - readCount++; + read: () => { + readCount++ if (readCount === 1) { - abortController.abort(); + abortController.abort() } return { done: false, - value: new TextEncoder().encode("data: {\"type\":\"content\",\"id\":\"1\",\"model\":\"test\",\"timestamp\":123,\"delta\":\"Hello\",\"content\":\"Hello\",\"role\":\"assistant\"}\n\n"), - }; + value: new TextEncoder().encode( + 'data: {"type":"content","id":"1","model":"test","timestamp":123,"delta":"Hello","content":"Hello","role":"assistant"}\n\n', + ), + } }, releaseLock: vi.fn(), - }; + } const mockResponse = { ok: true, body: { getReader: () => mockReader, }, - }; + } - fetchMock.mockResolvedValue(mockResponse as any); + fetchMock.mockResolvedValue(mockResponse as any) - const adapter = fetchServerSentEvents("/api/chat"); + const adapter = fetchServerSentEvents('/api/chat') const generator = adapter.connect( - [{ role: "user", content: "Hello" }], + [{ role: 'user', content: 'Hello' }], undefined, - abortSignal - ); + abortSignal, + ) - const chunks: StreamChunk[] = []; + const chunks: Array<StreamChunk> = [] try { for await (const chunk of generator) { - chunks.push(chunk); + chunks.push(chunk) } } catch (err) { // Ignore abort errors } // Should stop reading 
after abort - expect(readCount).toBeLessThanOrEqual(2); // At most 2 reads (one before check, one after) - }); - }); + expect(readCount).toBeLessThanOrEqual(2) // At most 2 reads (one before check, one after) + }) + }) - describe("fetchHttpStream", () => { - it("should pass abortSignal to fetch", async () => { - const abortController = new AbortController(); - const abortSignal = abortController.signal; + describe('fetchHttpStream', () => { + it('should pass abortSignal to fetch', async () => { + const abortController = new AbortController() + const abortSignal = abortController.signal const mockResponse = { ok: true, body: { getReader: () => ({ - read: async () => ({ done: true, value: undefined }), + read: () => ({ done: true, value: undefined }), releaseLock: vi.fn(), }), }, - }; + } - fetchMock.mockResolvedValue(mockResponse as any); + fetchMock.mockResolvedValue(mockResponse as any) - const adapter = fetchHttpStream("/api/chat"); + const adapter = fetchHttpStream('/api/chat') const generator = adapter.connect( - [{ role: "user", content: "Hello" }], + [{ role: 'user', content: 'Hello' }], undefined, - abortSignal - ); + abortSignal, + ) for await (const _ of generator) { // Consume all chunks } - expect(fetchMock).toHaveBeenCalled(); - const fetchCall = fetchMock.mock.calls[0]; - expect(fetchCall[1]?.signal).toBe(abortSignal); - }); + expect(fetchMock).toHaveBeenCalled() + const fetchCall = fetchMock.mock.calls[0] + expect(fetchCall?.[1]?.signal).toBe(abortSignal) + }) - it("should stop reading stream when aborted", async () => { - const abortController = new AbortController(); - const abortSignal = abortController.signal; + it('should stop reading stream when aborted', async () => { + const abortController = new AbortController() + const abortSignal = abortController.signal - let readCount = 0; + let readCount = 0 const mockReader = { - read: async () => { - readCount++; + read: () => { + readCount++ if (readCount === 1) { - abortController.abort(); + 
abortController.abort() return { done: false, - value: new TextEncoder().encode("{\"type\":\"content\",\"id\":\"1\",\"model\":\"test\",\"timestamp\":123,\"delta\":\"Hello\",\"content\":\"Hello\",\"role\":\"assistant\"}\n"), - }; + value: new TextEncoder().encode( + '{"type":"content","id":"1","model":"test","timestamp":123,"delta":"Hello","content":"Hello","role":"assistant"}\n', + ), + } } - return { done: true, value: undefined }; + return { done: true, value: undefined } }, releaseLock: vi.fn(), - }; + } const mockResponse = { ok: true, body: { getReader: () => mockReader, }, - }; + } - fetchMock.mockResolvedValue(mockResponse as any); + fetchMock.mockResolvedValue(mockResponse as any) - const adapter = fetchHttpStream("/api/chat"); + const adapter = fetchHttpStream('/api/chat') const generator = adapter.connect( - [{ role: "user", content: "Hello" }], + [{ role: 'user', content: 'Hello' }], undefined, - abortSignal - ); + abortSignal, + ) - const chunks: StreamChunk[] = []; + const chunks: Array<StreamChunk> = [] for await (const chunk of generator) { - chunks.push(chunk); + chunks.push(chunk) } - expect(readCount).toBeGreaterThan(0); - expect(mockReader.releaseLock).toHaveBeenCalled(); - }); - }); -}); - + expect(readCount).toBeGreaterThan(0) + expect(mockReader.releaseLock).toHaveBeenCalled() + }) + }) +}) diff --git a/packages/typescript/ai-client/tests/connection-adapters.test.ts b/packages/typescript/ai-client/tests/connection-adapters.test.ts index 6d2ec3310..8db8907e2 100644 --- a/packages/typescript/ai-client/tests/connection-adapters.test.ts +++ b/packages/typescript/ai-client/tests/connection-adapters.test.ts @@ -1,470 +1,474 @@ -import { describe, it, expect, vi, beforeEach, afterEach } from "vitest"; +import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest' import { - fetchServerSentEvents, fetchHttpStream, + fetchServerSentEvents, stream, -} from "../src/connection-adapters"; -import type { StreamChunk } from "@tanstack/ai"; +} from 
'../src/connection-adapters' +import type { StreamChunk } from '@tanstack/ai' -describe("connection-adapters", () => { - let originalFetch: typeof fetch; - let fetchMock: ReturnType<typeof vi.fn>; +describe('connection-adapters', () => { + let originalFetch: typeof fetch + let fetchMock: ReturnType<typeof vi.fn> beforeEach(() => { - originalFetch = global.fetch; - fetchMock = vi.fn(); - global.fetch = fetchMock; - }); + originalFetch = global.fetch + fetchMock = vi.fn() + // @ts-ignore - we mock global fetch + global.fetch = fetchMock + }) afterEach(() => { - global.fetch = originalFetch; - vi.clearAllMocks(); - }); + global.fetch = originalFetch + vi.clearAllMocks() + }) - describe("fetchServerSentEvents", () => { - it("should handle SSE format with data: prefix", async () => { + describe('fetchServerSentEvents', () => { + it('should handle SSE format with data: prefix', async () => { const mockReader = { read: vi .fn() .mockResolvedValueOnce({ done: false, value: new TextEncoder().encode( - 'data: {"type":"content","id":"1","model":"test","timestamp":123,"delta":"Hello","content":"Hello","role":"assistant"}\n\n' + 'data: {"type":"content","id":"1","model":"test","timestamp":123,"delta":"Hello","content":"Hello","role":"assistant"}\n\n', ), }) .mockResolvedValueOnce({ done: true, value: undefined }), releaseLock: vi.fn(), - }; + } const mockResponse = { ok: true, body: { getReader: () => mockReader, }, - }; + } - fetchMock.mockResolvedValue(mockResponse as any); + fetchMock.mockResolvedValue(mockResponse as any) - const adapter = fetchServerSentEvents("/api/chat"); - const chunks: StreamChunk[] = []; + const adapter = fetchServerSentEvents('/api/chat') + const chunks: Array<StreamChunk> = [] - for await (const chunk of adapter.connect( - [{ role: "user", content: "Hello" }] - )) { - chunks.push(chunk); + for await (const chunk of adapter.connect([ + { role: 'user', content: 'Hello' }, + ])) { + chunks.push(chunk) } - expect(chunks).toHaveLength(1); + expect(chunks).toHaveLength(1) 
expect(chunks[0]).toMatchObject({ - type: "content", - delta: "Hello", - }); - }); + type: 'content', + delta: 'Hello', + }) + }) - it("should handle SSE format without data: prefix", async () => { + it('should handle SSE format without data: prefix', async () => { const mockReader = { read: vi .fn() .mockResolvedValueOnce({ done: false, value: new TextEncoder().encode( - '{"type":"content","id":"1","model":"test","timestamp":123,"delta":"Hello","content":"Hello","role":"assistant"}\n' + '{"type":"content","id":"1","model":"test","timestamp":123,"delta":"Hello","content":"Hello","role":"assistant"}\n', ), }) .mockResolvedValueOnce({ done: true, value: undefined }), releaseLock: vi.fn(), - }; + } const mockResponse = { ok: true, body: { getReader: () => mockReader, }, - }; + } - fetchMock.mockResolvedValue(mockResponse as any); + fetchMock.mockResolvedValue(mockResponse as any) - const adapter = fetchServerSentEvents("/api/chat"); - const chunks: StreamChunk[] = []; + const adapter = fetchServerSentEvents('/api/chat') + const chunks: Array<StreamChunk> = [] - for await (const chunk of adapter.connect( - [{ role: "user", content: "Hello" }] - )) { - chunks.push(chunk); + for await (const chunk of adapter.connect([ + { role: 'user', content: 'Hello' }, + ])) { + chunks.push(chunk) } - expect(chunks).toHaveLength(1); - }); + expect(chunks).toHaveLength(1) + }) - it("should skip [DONE] markers", async () => { + it('should skip [DONE] markers', async () => { const mockReader = { read: vi .fn() .mockResolvedValueOnce({ done: false, - value: new TextEncoder().encode("data: [DONE]\n\n"), + value: new TextEncoder().encode('data: [DONE]\n\n'), }) .mockResolvedValueOnce({ done: true, value: undefined }), releaseLock: vi.fn(), - }; + } const mockResponse = { ok: true, body: { getReader: () => mockReader, }, - }; + } - fetchMock.mockResolvedValue(mockResponse as any); + fetchMock.mockResolvedValue(mockResponse as any) - const adapter = fetchServerSentEvents("/api/chat"); - const chunks: 
StreamChunk[] = []; + const adapter = fetchServerSentEvents('/api/chat') + const chunks: Array<StreamChunk> = [] - for await (const chunk of adapter.connect( - [{ role: "user", content: "Hello" }] - )) { - chunks.push(chunk); + for await (const chunk of adapter.connect([ + { role: 'user', content: 'Hello' }, + ])) { + chunks.push(chunk) } - expect(chunks).toHaveLength(0); - }); + expect(chunks).toHaveLength(0) + }) - it("should handle malformed JSON gracefully", async () => { - const consoleWarnSpy = vi.spyOn(console, "warn").mockImplementation(() => {}); + it('should handle malformed JSON gracefully', async () => { + const consoleWarnSpy = vi + .spyOn(console, 'warn') + .mockImplementation(() => {}) const mockReader = { read: vi .fn() .mockResolvedValueOnce({ done: false, - value: new TextEncoder().encode("data: invalid json\n\n"), + value: new TextEncoder().encode('data: invalid json\n\n'), }) .mockResolvedValueOnce({ done: true, value: undefined }), releaseLock: vi.fn(), - }; + } const mockResponse = { ok: true, body: { getReader: () => mockReader, }, - }; + } - fetchMock.mockResolvedValue(mockResponse as any); + fetchMock.mockResolvedValue(mockResponse as any) - const adapter = fetchServerSentEvents("/api/chat"); - const chunks: StreamChunk[] = []; + const adapter = fetchServerSentEvents('/api/chat') + const chunks: Array<StreamChunk> = [] - for await (const chunk of adapter.connect( - [{ role: "user", content: "Hello" }] - )) { - chunks.push(chunk); + for await (const chunk of adapter.connect([ + { role: 'user', content: 'Hello' }, + ])) { + chunks.push(chunk) } - expect(chunks).toHaveLength(0); - expect(consoleWarnSpy).toHaveBeenCalled(); - consoleWarnSpy.mockRestore(); - }); + expect(chunks).toHaveLength(0) + expect(consoleWarnSpy).toHaveBeenCalled() + consoleWarnSpy.mockRestore() + }) - it("should handle HTTP errors", async () => { + it('should handle HTTP errors', async () => { const mockResponse = { ok: false, status: 500, - statusText: "Internal Server Error", - }; + statusText: 
'Internal Server Error', + } - fetchMock.mockResolvedValue(mockResponse as any); + fetchMock.mockResolvedValue(mockResponse as any) - const adapter = fetchServerSentEvents("/api/chat"); + const adapter = fetchServerSentEvents('/api/chat') await expect( (async () => { for await (const _ of adapter.connect([ - { role: "user", content: "Hello" }, + { role: 'user', content: 'Hello' }, ])) { // Consume } - })() - ).rejects.toThrow("HTTP error! status: 500 Internal Server Error"); - }); + })(), + ).rejects.toThrow('HTTP error! status: 500 Internal Server Error') + }) - it("should handle missing response body", async () => { + it('should handle missing response body', async () => { const mockResponse = { ok: true, body: null, - }; + } - fetchMock.mockResolvedValue(mockResponse as any); + fetchMock.mockResolvedValue(mockResponse as any) - const adapter = fetchServerSentEvents("/api/chat"); + const adapter = fetchServerSentEvents('/api/chat') await expect( (async () => { for await (const _ of adapter.connect([ - { role: "user", content: "Hello" }, + { role: 'user', content: 'Hello' }, ])) { // Consume } - })() - ).rejects.toThrow("Response body is not readable"); - }); + })(), + ).rejects.toThrow('Response body is not readable') + }) - it("should merge custom headers", async () => { + it('should merge custom headers', async () => { const mockReader = { read: vi.fn().mockResolvedValue({ done: true, value: undefined }), releaseLock: vi.fn(), - }; + } const mockResponse = { ok: true, body: { getReader: () => mockReader, }, - }; + } - fetchMock.mockResolvedValue(mockResponse as any); + fetchMock.mockResolvedValue(mockResponse as any) - const adapter = fetchServerSentEvents("/api/chat", { - headers: { Authorization: "Bearer token" }, - }); + const adapter = fetchServerSentEvents('/api/chat', { + headers: { Authorization: 'Bearer token' }, + }) for await (const _ of adapter.connect([ - { role: "user", content: "Hello" }, + { role: 'user', content: 'Hello' }, ])) { // Consume } - 
expect(fetchMock).toHaveBeenCalled(); - const call = fetchMock.mock.calls[0]; - expect(call[1]?.headers).toMatchObject({ - "Content-Type": "application/json", - Authorization: "Bearer token", - }); - }); + expect(fetchMock).toHaveBeenCalled() + const call = fetchMock.mock.calls[0] + expect(call?.[1]?.headers).toMatchObject({ + 'Content-Type': 'application/json', + Authorization: 'Bearer token', + }) + }) - it("should handle Headers object", async () => { + it('should handle Headers object', async () => { const mockReader = { read: vi.fn().mockResolvedValue({ done: true, value: undefined }), releaseLock: vi.fn(), - }; + } const mockResponse = { ok: true, body: { getReader: () => mockReader, }, - }; + } - fetchMock.mockResolvedValue(mockResponse as any); + fetchMock.mockResolvedValue(mockResponse as any) - const headers = new Headers(); - headers.set("Authorization", "Bearer token"); + const headers = new Headers() + headers.set('Authorization', 'Bearer token') - const adapter = fetchServerSentEvents("/api/chat", { headers }); + const adapter = fetchServerSentEvents('/api/chat', { headers }) for await (const _ of adapter.connect([ - { role: "user", content: "Hello" }, + { role: 'user', content: 'Hello' }, ])) { // Consume } - expect(fetchMock).toHaveBeenCalled(); - const call = fetchMock.mock.calls[0]; - const requestHeaders = call[1]?.headers; - + expect(fetchMock).toHaveBeenCalled() + const call = fetchMock.mock.calls[0] + const requestHeaders = call?.[1]?.headers + // mergeHeaders converts Headers to plain object, then spread into new object // The headers should be a plain object with both Content-Type and Authorization - const headersObj = requestHeaders as Record<string, string>; - expect(headersObj).toBeDefined(); - expect(headersObj["Content-Type"]).toBe("application/json"); + const headersObj = requestHeaders as Record<string, string> + expect(headersObj).toBeDefined() + expect(headersObj['Content-Type']).toBe('application/json') // Check if Authorization exists (it should from the Headers 
object) // The mergeHeaders function should convert Headers.forEach to object keys const authValue = Object.entries(headersObj).find( - ([key]) => key.toLowerCase() === "authorization" - )?.[1]; - expect(authValue).toBe("Bearer token"); - }); + ([key]) => key.toLowerCase() === 'authorization', + )?.[1] + expect(authValue).toBe('Bearer token') + }) - it("should pass data to request body", async () => { + it('should pass data to request body', async () => { const mockReader = { read: vi.fn().mockResolvedValue({ done: true, value: undefined }), releaseLock: vi.fn(), - }; + } const mockResponse = { ok: true, body: { getReader: () => mockReader, }, - }; + } - fetchMock.mockResolvedValue(mockResponse as any); + fetchMock.mockResolvedValue(mockResponse as any) - const adapter = fetchServerSentEvents("/api/chat"); + const adapter = fetchServerSentEvents('/api/chat') for await (const _ of adapter.connect( - [{ role: "user", content: "Hello" }], - { key: "value" } + [{ role: 'user', content: 'Hello' }], + { key: 'value' }, )) { // Consume } - expect(fetchMock).toHaveBeenCalled(); - const call = fetchMock.mock.calls[0]; - const body = JSON.parse(call[1]?.body as string); - expect(body.data).toEqual({ key: "value" }); - }); - }); + expect(fetchMock).toHaveBeenCalled() + const call = fetchMock.mock.calls[0] + const body = JSON.parse(call?.[1]?.body as string) + expect(body.data).toEqual({ key: 'value' }) + }) + }) - describe("fetchHttpStream", () => { - it("should parse newline-delimited JSON", async () => { + describe('fetchHttpStream', () => { + it('should parse newline-delimited JSON', async () => { const mockReader = { read: vi .fn() .mockResolvedValueOnce({ done: false, value: new TextEncoder().encode( - '{"type":"content","id":"1","model":"test","timestamp":123,"delta":"Hello","content":"Hello","role":"assistant"}\n' + '{"type":"content","id":"1","model":"test","timestamp":123,"delta":"Hello","content":"Hello","role":"assistant"}\n', ), }) .mockResolvedValueOnce({ done: 
true, value: undefined }), releaseLock: vi.fn(), - }; + } const mockResponse = { ok: true, body: { getReader: () => mockReader, }, - }; + } - fetchMock.mockResolvedValue(mockResponse as any); + fetchMock.mockResolvedValue(mockResponse as any) - const adapter = fetchHttpStream("/api/chat"); - const chunks: StreamChunk[] = []; + const adapter = fetchHttpStream('/api/chat') + const chunks: Array<StreamChunk> = [] for await (const chunk of adapter.connect([ - { role: "user", content: "Hello" }, + { role: 'user', content: 'Hello' }, ])) { - chunks.push(chunk); + chunks.push(chunk) } - expect(chunks).toHaveLength(1); - }); + expect(chunks).toHaveLength(1) + }) - it("should handle malformed JSON gracefully", async () => { - const consoleWarnSpy = vi.spyOn(console, "warn").mockImplementation(() => {}); + it('should handle malformed JSON gracefully', async () => { + const consoleWarnSpy = vi + .spyOn(console, 'warn') + .mockImplementation(() => {}) const mockReader = { read: vi .fn() .mockResolvedValueOnce({ done: false, - value: new TextEncoder().encode("invalid json\n"), + value: new TextEncoder().encode('invalid json\n'), }) .mockResolvedValueOnce({ done: true, value: undefined }), releaseLock: vi.fn(), - }; + } const mockResponse = { ok: true, body: { getReader: () => mockReader, }, - }; + } - fetchMock.mockResolvedValue(mockResponse as any); + fetchMock.mockResolvedValue(mockResponse as any) - const adapter = fetchHttpStream("/api/chat"); - const chunks: StreamChunk[] = []; + const adapter = fetchHttpStream('/api/chat') + const chunks: Array<StreamChunk> = [] for await (const chunk of adapter.connect([ - { role: "user", content: "Hello" }, + { role: 'user', content: 'Hello' }, ])) { - chunks.push(chunk); + chunks.push(chunk) } - expect(chunks).toHaveLength(0); - expect(consoleWarnSpy).toHaveBeenCalled(); - consoleWarnSpy.mockRestore(); - }); + expect(chunks).toHaveLength(0) + expect(consoleWarnSpy).toHaveBeenCalled() + consoleWarnSpy.mockRestore() + }) - it("should handle HTTP errors", async () 
=> { + it('should handle HTTP errors', async () => { const mockResponse = { ok: false, status: 404, - statusText: "Not Found", - }; + statusText: 'Not Found', + } - fetchMock.mockResolvedValue(mockResponse as any); + fetchMock.mockResolvedValue(mockResponse as any) - const adapter = fetchHttpStream("/api/chat"); + const adapter = fetchHttpStream('/api/chat') await expect( (async () => { for await (const _ of adapter.connect([ - { role: "user", content: "Hello" }, + { role: 'user', content: 'Hello' }, ])) { // Consume } - })() - ).rejects.toThrow("HTTP error! status: 404 Not Found"); - }); - }); - - describe("stream", () => { - it("should delegate to stream factory", async () => { - const streamFactory = vi.fn().mockImplementation(async function* () { + })(), + ).rejects.toThrow('HTTP error! status: 404 Not Found') + }) + }) + + describe('stream', () => { + it('should delegate to stream factory', async () => { + const streamFactory = vi.fn().mockImplementation(function* () { yield { - type: "content", - id: "1", - model: "test", + type: 'content', + id: '1', + model: 'test', timestamp: Date.now(), - delta: "Hello", - content: "Hello", - role: "assistant", - }; - }); + delta: 'Hello', + content: 'Hello', + role: 'assistant', + } + }) - const adapter = stream(streamFactory); - const chunks: StreamChunk[] = []; + const adapter = stream(streamFactory) + const chunks: Array<StreamChunk> = [] for await (const chunk of adapter.connect([ - { role: "user", content: "Hello" }, + { role: 'user', content: 'Hello' }, ])) { - chunks.push(chunk); + chunks.push(chunk) } - expect(streamFactory).toHaveBeenCalled(); - expect(chunks).toHaveLength(1); - }); + expect(streamFactory).toHaveBeenCalled() + expect(chunks).toHaveLength(1) + }) - it("should pass data to stream factory", async () => { - const streamFactory = vi.fn().mockImplementation(async function* () { + it('should pass data to stream factory', async () => { + const streamFactory = vi.fn().mockImplementation(function* () { yield { - type: 
"done", - id: "1", - model: "test", + type: 'done', + id: '1', + model: 'test', timestamp: Date.now(), - finishReason: "stop", - }; - }); + finishReason: 'stop', + } + }) - const adapter = stream(streamFactory); - const data = { key: "value" }; + const adapter = stream(streamFactory) + const data = { key: 'value' } for await (const _ of adapter.connect( - [{ role: "user", content: "Hello" }], - data + [{ role: 'user', content: 'Hello' }], + data, )) { // Consume } expect(streamFactory).toHaveBeenCalledWith( - expect.arrayContaining([expect.objectContaining({ role: "user" })]), - data - ); - }); - }); -}); - + expect.arrayContaining([expect.objectContaining({ role: 'user' })]), + data, + ) + }) + }) +}) diff --git a/packages/typescript/ai-client/tests/events.test.ts b/packages/typescript/ai-client/tests/events.test.ts index 0702cea38..12df82d5f 100644 --- a/packages/typescript/ai-client/tests/events.test.ts +++ b/packages/typescript/ai-client/tests/events.test.ts @@ -1,334 +1,341 @@ -import { describe, it, expect, vi, beforeEach, afterEach } from "vitest"; -import { - ChatClientEventEmitter, - DefaultChatClientEventEmitter, -} from "../src/events"; -import { aiEventClient } from "@tanstack/ai/event-client"; -import type { UIMessage } from "../src/types"; +import { beforeEach, describe, expect, it, vi } from 'vitest' +import { aiEventClient } from '@tanstack/ai/event-client' +import { DefaultChatClientEventEmitter } from '../src/events' +import type { UIMessage } from '../src/types' // Mock the event client -vi.mock("@tanstack/ai/event-client", () => ({ +vi.mock('@tanstack/ai/event-client', () => ({ aiEventClient: { emit: vi.fn(), }, -})); +})) -describe("events", () => { +describe('events', () => { beforeEach(() => { - vi.clearAllMocks(); - }); + vi.clearAllMocks() + }) - describe("DefaultChatClientEventEmitter", () => { - let emitter: DefaultChatClientEventEmitter; + describe('DefaultChatClientEventEmitter', () => { + let emitter: DefaultChatClientEventEmitter 
beforeEach(() => { - emitter = new DefaultChatClientEventEmitter("test-client-id"); - }); + emitter = new DefaultChatClientEventEmitter('test-client-id') + }) - it("should emit client:created event with clientId and timestamp", () => { - emitter.clientCreated(5); + it('should emit client:created event with clientId and timestamp', () => { + emitter.clientCreated(5) - expect(aiEventClient.emit).toHaveBeenCalledWith("client:created", { + expect(aiEventClient.emit).toHaveBeenCalledWith('client:created', { initialMessageCount: 5, - clientId: "test-client-id", + clientId: 'test-client-id', timestamp: expect.any(Number), - }); - }); + }) + }) - it("should emit client:loading-changed event", () => { - emitter.loadingChanged(true); + it('should emit client:loading-changed event', () => { + emitter.loadingChanged(true) expect(aiEventClient.emit).toHaveBeenCalledWith( - "client:loading-changed", + 'client:loading-changed', { isLoading: true, - clientId: "test-client-id", + clientId: 'test-client-id', timestamp: expect.any(Number), - } - ); - }); + }, + ) + }) - it("should emit client:error-changed event with null", () => { - emitter.errorChanged(null); + it('should emit client:error-changed event with null', () => { + emitter.errorChanged(null) - expect(aiEventClient.emit).toHaveBeenCalledWith("client:error-changed", { + expect(aiEventClient.emit).toHaveBeenCalledWith('client:error-changed', { error: null, - clientId: "test-client-id", + clientId: 'test-client-id', timestamp: expect.any(Number), - }); - }); + }) + }) - it("should emit client:error-changed event with error string", () => { - emitter.errorChanged("Something went wrong"); + it('should emit client:error-changed event with error string', () => { + emitter.errorChanged('Something went wrong') - expect(aiEventClient.emit).toHaveBeenCalledWith("client:error-changed", { - error: "Something went wrong", - clientId: "test-client-id", + expect(aiEventClient.emit).toHaveBeenCalledWith('client:error-changed', { + error: 
'Something went wrong', + clientId: 'test-client-id', timestamp: expect.any(Number), - }); - }); + }) + }) - it("should emit processor:text-updated and client:assistant-message-updated", () => { - emitter.textUpdated("stream-1", "msg-1", "Hello world"); + it('should emit processor:text-updated and client:assistant-message-updated', () => { + emitter.textUpdated('stream-1', 'msg-1', 'Hello world') - expect(aiEventClient.emit).toHaveBeenCalledTimes(2); + expect(aiEventClient.emit).toHaveBeenCalledTimes(2) expect(aiEventClient.emit).toHaveBeenNthCalledWith( 1, - "processor:text-updated", + 'processor:text-updated', { - streamId: "stream-1", - content: "Hello world", + streamId: 'stream-1', + content: 'Hello world', timestamp: expect.any(Number), - } - ); + }, + ) expect(aiEventClient.emit).toHaveBeenNthCalledWith( 2, - "client:assistant-message-updated", + 'client:assistant-message-updated', { - messageId: "msg-1", - content: "Hello world", - clientId: "test-client-id", + messageId: 'msg-1', + content: 'Hello world', + clientId: 'test-client-id', timestamp: expect.any(Number), - } - ); - }); + }, + ) + }) - it("should emit processor:tool-call-state-changed and client:tool-call-updated", () => { + it('should emit processor:tool-call-state-changed and client:tool-call-updated', () => { emitter.toolCallStateChanged( - "stream-1", - "msg-1", - "call-1", - "get_weather", - "input-complete", - '{"city": "NYC"}' - ); - - expect(aiEventClient.emit).toHaveBeenCalledTimes(2); + 'stream-1', + 'msg-1', + 'call-1', + 'get_weather', + 'input-complete', + '{"city": "NYC"}', + ) + + expect(aiEventClient.emit).toHaveBeenCalledTimes(2) expect(aiEventClient.emit).toHaveBeenNthCalledWith( 1, - "processor:tool-call-state-changed", + 'processor:tool-call-state-changed', { - streamId: "stream-1", - toolCallId: "call-1", - toolName: "get_weather", - state: "input-complete", + streamId: 'stream-1', + toolCallId: 'call-1', + toolName: 'get_weather', + state: 'input-complete', arguments: 
'{"city": "NYC"}', timestamp: expect.any(Number), - } - ); + }, + ) expect(aiEventClient.emit).toHaveBeenNthCalledWith( 2, - "client:tool-call-updated", + 'client:tool-call-updated', { - messageId: "msg-1", - toolCallId: "call-1", - toolName: "get_weather", - state: "input-complete", + messageId: 'msg-1', + toolCallId: 'call-1', + toolName: 'get_weather', + state: 'input-complete', arguments: '{"city": "NYC"}', - clientId: "test-client-id", + clientId: 'test-client-id', timestamp: expect.any(Number), - } - ); - }); + }, + ) + }) - it("should emit processor:tool-result-state-changed event", () => { + it('should emit processor:tool-result-state-changed event', () => { emitter.toolResultStateChanged( - "stream-1", - "call-1", - "Result content", - "complete" - ); + 'stream-1', + 'call-1', + 'Result content', + 'complete', + ) expect(aiEventClient.emit).toHaveBeenCalledWith( - "processor:tool-result-state-changed", + 'processor:tool-result-state-changed', { - streamId: "stream-1", - toolCallId: "call-1", - content: "Result content", - state: "complete", + streamId: 'stream-1', + toolCallId: 'call-1', + content: 'Result content', + state: 'complete', timestamp: expect.any(Number), - } - ); - }); + }, + ) + }) - it("should emit processor:tool-result-state-changed with error", () => { + it('should emit processor:tool-result-state-changed with error', () => { emitter.toolResultStateChanged( - "stream-1", - "call-1", - "Error occurred", - "error", - "Something failed" - ); + 'stream-1', + 'call-1', + 'Error occurred', + 'error', + 'Something failed', + ) expect(aiEventClient.emit).toHaveBeenCalledWith( - "processor:tool-result-state-changed", + 'processor:tool-result-state-changed', { - streamId: "stream-1", - toolCallId: "call-1", - content: "Error occurred", - state: "error", - error: "Something failed", + streamId: 'stream-1', + toolCallId: 'call-1', + content: 'Error occurred', + state: 'error', + error: 'Something failed', timestamp: expect.any(Number), - } - ); - }); 
+ }, + ) + }) - it("should emit client:approval-requested event", () => { + it('should emit client:approval-requested event', () => { emitter.approvalRequested( - "msg-1", - "call-1", - "get_weather", - { city: "NYC" }, - "approval-1" - ); + 'msg-1', + 'call-1', + 'get_weather', + { city: 'NYC' }, + 'approval-1', + ) expect(aiEventClient.emit).toHaveBeenCalledWith( - "client:approval-requested", + 'client:approval-requested', { - messageId: "msg-1", - toolCallId: "call-1", - toolName: "get_weather", - input: { city: "NYC" }, - approvalId: "approval-1", - clientId: "test-client-id", + messageId: 'msg-1', + toolCallId: 'call-1', + toolName: 'get_weather', + input: { city: 'NYC' }, + approvalId: 'approval-1', + clientId: 'test-client-id', timestamp: expect.any(Number), - } - ); - }); + }, + ) + }) - it("should emit client:message-appended with content preview", () => { + it('should emit client:message-appended with content preview', () => { const uiMessage: UIMessage = { - id: "msg-1", - role: "user", + id: 'msg-1', + role: 'user', parts: [ - { type: "text", content: "Hello" }, - { type: "text", content: "World" }, + { type: 'text', content: 'Hello' }, + { type: 'text', content: 'World' }, ], createdAt: new Date(), - }; + } - emitter.messageAppended(uiMessage); + emitter.messageAppended(uiMessage) expect(aiEventClient.emit).toHaveBeenCalledWith( - "client:message-appended", + 'client:message-appended', { - messageId: "msg-1", - role: "user", - contentPreview: "Hello World", - clientId: "test-client-id", + messageId: 'msg-1', + role: 'user', + contentPreview: 'Hello World', + clientId: 'test-client-id', timestamp: expect.any(Number), - } - ); - }); + }, + ) + }) - it("should truncate content preview to 100 characters", () => { - const longContent = "a".repeat(150); + it('should truncate content preview to 100 characters', () => { + const longContent = 'a'.repeat(150) const uiMessage: UIMessage = { - id: "msg-1", - role: "user", - parts: [{ type: "text", content: 
longContent }], + id: 'msg-1', + role: 'user', + parts: [{ type: 'text', content: longContent }], createdAt: new Date(), - }; + } - emitter.messageAppended(uiMessage); + emitter.messageAppended(uiMessage) - const call = (aiEventClient.emit as any).mock.calls[0]; - expect(call[1].contentPreview).toHaveLength(100); - }); + const call = (aiEventClient.emit as any).mock.calls[0] + expect(call[1].contentPreview).toHaveLength(100) + }) - it("should handle message with no text parts", () => { + it('should handle message with no text parts', () => { const uiMessage: UIMessage = { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-call", - id: "call-1", - name: "tool1", - arguments: "{}", - state: "input-complete", + type: 'tool-call', + id: 'call-1', + name: 'tool1', + arguments: '{}', + state: 'input-complete', }, ], createdAt: new Date(), - }; + } - emitter.messageAppended(uiMessage); + emitter.messageAppended(uiMessage) expect(aiEventClient.emit).toHaveBeenCalledWith( - "client:message-appended", + 'client:message-appended', { - messageId: "msg-1", - role: "assistant", - contentPreview: "", - clientId: "test-client-id", + messageId: 'msg-1', + role: 'assistant', + contentPreview: '', + clientId: 'test-client-id', timestamp: expect.any(Number), - } - ); - }); + }, + ) + }) - it("should emit client:message-sent event", () => { - emitter.messageSent("msg-1", "Hello world"); + it('should emit client:message-sent event', () => { + emitter.messageSent('msg-1', 'Hello world') - expect(aiEventClient.emit).toHaveBeenCalledWith("client:message-sent", { - messageId: "msg-1", - content: "Hello world", - clientId: "test-client-id", + expect(aiEventClient.emit).toHaveBeenCalledWith('client:message-sent', { + messageId: 'msg-1', + content: 'Hello world', + clientId: 'test-client-id', timestamp: expect.any(Number), - }); - }); + }) + }) - it("should emit client:reloaded event", () => { - emitter.reloaded(3); + it('should emit 
client:reloaded event', () => { + emitter.reloaded(3) - expect(aiEventClient.emit).toHaveBeenCalledWith("client:reloaded", { + expect(aiEventClient.emit).toHaveBeenCalledWith('client:reloaded', { fromMessageIndex: 3, - clientId: "test-client-id", + clientId: 'test-client-id', timestamp: expect.any(Number), - }); - }); + }) + }) - it("should emit client:stopped event", () => { - emitter.stopped(); + it('should emit client:stopped event', () => { + emitter.stopped() - expect(aiEventClient.emit).toHaveBeenCalledWith("client:stopped", { - clientId: "test-client-id", + expect(aiEventClient.emit).toHaveBeenCalledWith('client:stopped', { + clientId: 'test-client-id', timestamp: expect.any(Number), - }); - }); + }) + }) - it("should emit client:messages-cleared event", () => { - emitter.messagesCleared(); + it('should emit client:messages-cleared event', () => { + emitter.messagesCleared() - expect(aiEventClient.emit).toHaveBeenCalledWith("client:messages-cleared", { - clientId: "test-client-id", - timestamp: expect.any(Number), - }); - }); - - it("should emit tool:result-added event", () => { - emitter.toolResultAdded("call-1", "get_weather", { temp: 72 }, "output-available"); - - expect(aiEventClient.emit).toHaveBeenCalledWith("tool:result-added", { - toolCallId: "call-1", - toolName: "get_weather", + expect(aiEventClient.emit).toHaveBeenCalledWith( + 'client:messages-cleared', + { + clientId: 'test-client-id', + timestamp: expect.any(Number), + }, + ) + }) + + it('should emit tool:result-added event', () => { + emitter.toolResultAdded( + 'call-1', + 'get_weather', + { temp: 72 }, + 'output-available', + ) + + expect(aiEventClient.emit).toHaveBeenCalledWith('tool:result-added', { + toolCallId: 'call-1', + toolName: 'get_weather', output: { temp: 72 }, - state: "output-available", - clientId: "test-client-id", + state: 'output-available', + clientId: 'test-client-id', timestamp: expect.any(Number), - }); - }); + }) + }) - it("should emit tool:approval-responded event", () 
=> { - emitter.toolApprovalResponded("approval-1", "call-1", true); - - expect(aiEventClient.emit).toHaveBeenCalledWith("tool:approval-responded", { - approvalId: "approval-1", - toolCallId: "call-1", - approved: true, - clientId: "test-client-id", - timestamp: expect.any(Number), - }); - }); - }); -}); + it('should emit tool:approval-responded event', () => { + emitter.toolApprovalResponded('approval-1', 'call-1', true) + expect(aiEventClient.emit).toHaveBeenCalledWith( + 'tool:approval-responded', + { + approvalId: 'approval-1', + toolCallId: 'call-1', + approved: true, + clientId: 'test-client-id', + timestamp: expect.any(Number), + }, + ) + }) + }) +}) diff --git a/packages/typescript/ai-client/tests/message-converters.test.ts b/packages/typescript/ai-client/tests/message-converters.test.ts index 63dfccc18..a41e1bfad 100644 --- a/packages/typescript/ai-client/tests/message-converters.test.ts +++ b/packages/typescript/ai-client/tests/message-converters.test.ts @@ -1,626 +1,637 @@ -import { describe, it, expect } from "vitest"; +import { describe, expect, it } from 'vitest' import { convertMessagesToModelMessages, - uiMessageToModelMessages, modelMessageToUIMessage, modelMessagesToUIMessages, normalizeToUIMessage, -} from "../src/message-converters"; -import type { UIMessage, ModelMessage } from "../src/types"; - -describe("message-converters", () => { - describe("convertMessagesToModelMessages", () => { - it("should convert UIMessages to ModelMessages", () => { - const uiMessages: UIMessage[] = [ + uiMessageToModelMessages, +} from '../src/message-converters' +import type { UIMessage } from '../src/types' +import type { ModelMessage } from '@tanstack/ai' + +describe('message-converters', () => { + describe('convertMessagesToModelMessages', () => { + it('should convert UIMessages to ModelMessages', () => { + const uiMessages: Array = [ { - id: "msg-1", - role: "user", - parts: [{ type: "text", content: "Hello" }], + id: 'msg-1', + role: 'user', + parts: [{ type: 
'text', content: 'Hello' }], createdAt: new Date(), }, - ]; + ] - const result = convertMessagesToModelMessages(uiMessages); - expect(result).toHaveLength(1); + const result = convertMessagesToModelMessages(uiMessages) + expect(result).toHaveLength(1) expect(result[0]).toEqual({ - role: "user", - content: "Hello", - }); - }); + role: 'user', + content: 'Hello', + }) + }) - it("should pass through ModelMessages", () => { - const modelMessages: ModelMessage[] = [ + it('should pass through ModelMessages', () => { + const modelMessages: Array = [ { - role: "user", - content: "Hello", + role: 'user', + content: 'Hello', }, - ]; + ] - const result = convertMessagesToModelMessages(modelMessages); - expect(result).toEqual(modelMessages); - }); + const result = convertMessagesToModelMessages(modelMessages) + expect(result).toEqual(modelMessages) + }) - it("should handle mixed UIMessages and ModelMessages", () => { - const messages: (UIMessage | ModelMessage)[] = [ + it('should handle mixed UIMessages and ModelMessages', () => { + const messages: Array = [ { - id: "msg-1", - role: "user", - parts: [{ type: "text", content: "Hello" }], + id: 'msg-1', + role: 'user', + parts: [{ type: 'text', content: 'Hello' }], createdAt: new Date(), }, { - role: "assistant", - content: "Hi there", + role: 'assistant', + content: 'Hi there', }, - ]; - - const result = convertMessagesToModelMessages(messages); - expect(result).toHaveLength(2); - expect(result[0]).toEqual({ role: "user", content: "Hello" }); - expect(result[1]).toEqual({ role: "assistant", content: "Hi there" }); - }); - - it("should handle empty array", () => { - const result = convertMessagesToModelMessages([]); - expect(result).toEqual([]); - }); - }); - - describe("uiMessageToModelMessages", () => { - it("should convert text-only message", () => { + ] + + const result = convertMessagesToModelMessages(messages) + expect(result).toHaveLength(2) + expect(result[0]).toEqual({ role: 'user', content: 'Hello' }) + 
expect(result[1]).toEqual({ role: 'assistant', content: 'Hi there' }) + }) + + it('should handle empty array', () => { + const result = convertMessagesToModelMessages([]) + expect(result).toEqual([]) + }) + }) + + describe('uiMessageToModelMessages', () => { + it('should convert text-only message', () => { const uiMessage: UIMessage = { - id: "msg-1", - role: "user", - parts: [{ type: "text", content: "Hello" }], + id: 'msg-1', + role: 'user', + parts: [{ type: 'text', content: 'Hello' }], createdAt: new Date(), - }; + } - const result = uiMessageToModelMessages(uiMessage); - expect(result).toHaveLength(1); + const result = uiMessageToModelMessages(uiMessage) + expect(result).toHaveLength(1) expect(result[0]).toEqual({ - role: "user", - content: "Hello", - }); - }); + role: 'user', + content: 'Hello', + }) + }) - it("should convert message with multiple text parts", () => { + it('should convert message with multiple text parts', () => { const uiMessage: UIMessage = { - id: "msg-1", - role: "user", + id: 'msg-1', + role: 'user', parts: [ - { type: "text", content: "Hello " }, - { type: "text", content: "World" }, + { type: 'text', content: 'Hello ' }, + { type: 'text', content: 'World' }, ], createdAt: new Date(), - }; + } - const result = uiMessageToModelMessages(uiMessage); - expect(result[0].content).toBe("Hello World"); - }); + const result = uiMessageToModelMessages(uiMessage) + expect(result[0]?.content).toBe('Hello World') + }) - it("should convert message with tool calls", () => { + it('should convert message with tool calls', () => { const uiMessage: UIMessage = { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-call", - id: "call-1", - name: "get_weather", + type: 'tool-call', + id: 'call-1', + name: 'get_weather', arguments: '{"city": "NYC"}', - state: "input-complete", + state: 'input-complete', }, ], createdAt: new Date(), - }; - - const result = uiMessageToModelMessages(uiMessage); - 
expect(result).toHaveLength(1); - expect(result[0].toolCalls).toBeDefined(); - expect(result[0].toolCalls?.[0]).toEqual({ - id: "call-1", - type: "function", + } + + const result = uiMessageToModelMessages(uiMessage) + expect(result).toHaveLength(1) + expect(result[0]?.toolCalls).toBeDefined() + expect(result[0]?.toolCalls?.[0]).toEqual({ + id: 'call-1', + type: 'function', function: { - name: "get_weather", + name: 'get_weather', arguments: '{"city": "NYC"}', }, - }); - }); + }) + }) - it("should filter tool calls by state", () => { + it('should filter tool calls by state', () => { const uiMessage: UIMessage = { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-call", - id: "call-1", - name: "tool1", - arguments: "{}", - state: "input-complete", + type: 'tool-call', + id: 'call-1', + name: 'tool1', + arguments: '{}', + state: 'input-complete', }, { - type: "tool-call", - id: "call-2", - name: "tool2", - arguments: "{}", - state: "input-streaming", // Not complete + type: 'tool-call', + id: 'call-2', + name: 'tool2', + arguments: '{}', + state: 'input-streaming', // Not complete }, { - type: "tool-call", - id: "call-3", - name: "tool3", - arguments: "{}", - state: "approval-responded", + type: 'tool-call', + id: 'call-3', + name: 'tool3', + arguments: '{}', + state: 'approval-responded', }, ], createdAt: new Date(), - }; + } - const result = uiMessageToModelMessages(uiMessage); - expect(result[0].toolCalls).toHaveLength(2); // call-1 and call-3 - expect(result[0].toolCalls?.map((tc) => tc.id)).toEqual([ - "call-1", - "call-3", - ]); - }); + const result = uiMessageToModelMessages(uiMessage) + expect(result[0]?.toolCalls).toHaveLength(2) // call-1 and call-3 + expect(result[0]?.toolCalls?.map((tc) => tc.id)).toEqual([ + 'call-1', + 'call-3', + ]) + }) - it("should include tool calls with output", () => { + it('should include tool calls with output', () => { const uiMessage: UIMessage = { - id: "msg-1", - role: 
"assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-call", - id: "call-1", - name: "tool1", - arguments: "{}", - state: "awaiting-input", - output: { result: "success" }, + type: 'tool-call', + id: 'call-1', + name: 'tool1', + arguments: '{}', + state: 'awaiting-input', + output: { result: 'success' }, }, ], createdAt: new Date(), - }; + } - const result = uiMessageToModelMessages(uiMessage); - expect(result[0].toolCalls).toHaveLength(1); - }); + const result = uiMessageToModelMessages(uiMessage) + expect(result[0]?.toolCalls).toHaveLength(1) + }) - it("should convert tool result parts to separate messages", () => { + it('should convert tool result parts to separate messages', () => { const uiMessage: UIMessage = { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-result", - toolCallId: "call-1", - content: "Result content", - state: "complete", + type: 'tool-result', + toolCallId: 'call-1', + content: 'Result content', + state: 'complete', }, ], createdAt: new Date(), - }; + } - const result = uiMessageToModelMessages(uiMessage); - expect(result).toHaveLength(2); // Main message + tool result + const result = uiMessageToModelMessages(uiMessage) + expect(result).toHaveLength(2) // Main message + tool result expect(result[1]).toEqual({ - role: "tool", - content: "Result content", - toolCallId: "call-1", - }); - }); + role: 'tool', + content: 'Result content', + toolCallId: 'call-1', + }) + }) - it("should filter tool results by state", () => { + it('should filter tool results by state', () => { const uiMessage: UIMessage = { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-result", - toolCallId: "call-1", - content: "Complete", - state: "complete", + type: 'tool-result', + toolCallId: 'call-1', + content: 'Complete', + state: 'complete', }, { - type: "tool-result", - toolCallId: "call-2", - content: "Error", - state: "error", + type: 'tool-result', + 
toolCallId: 'call-2', + content: 'Error', + state: 'error', }, { - type: "tool-result", - toolCallId: "call-3", - content: "Streaming", - state: "streaming", // Not included + type: 'tool-result', + toolCallId: 'call-3', + content: 'Streaming', + state: 'streaming', // Not included }, ], createdAt: new Date(), - }; + } - const result = uiMessageToModelMessages(uiMessage); - expect(result).toHaveLength(3); // Main message + 2 tool results - expect(result.filter((m) => m.role === "tool")).toHaveLength(2); - }); + const result = uiMessageToModelMessages(uiMessage) + expect(result).toHaveLength(3) // Main message + 2 tool results + expect(result.filter((m) => m.role === 'tool')).toHaveLength(2) + }) - it("should handle assistant message with only tool calls", () => { + it('should handle assistant message with only tool calls', () => { const uiMessage: UIMessage = { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-call", - id: "call-1", - name: "tool1", - arguments: "{}", - state: "input-complete", + type: 'tool-call', + id: 'call-1', + name: 'tool1', + arguments: '{}', + state: 'input-complete', }, ], createdAt: new Date(), - }; + } - const result = uiMessageToModelMessages(uiMessage); - expect(result).toHaveLength(1); - expect(result[0].toolCalls).toBeDefined(); - expect(result[0].content).toBeNull(); - }); + const result = uiMessageToModelMessages(uiMessage) + expect(result).toHaveLength(1) + expect(result[0]?.toolCalls).toBeDefined() + expect(result[0]?.content).toBeNull() + }) - it("should handle message with text and tool calls", () => { + it('should handle message with text and tool calls', () => { const uiMessage: UIMessage = { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ - { type: "text", content: "Let me check" }, + { type: 'text', content: 'Let me check' }, { - type: "tool-call", - id: "call-1", - name: "tool1", - arguments: "{}", - state: "input-complete", + type: 
'tool-call', + id: 'call-1', + name: 'tool1', + arguments: '{}', + state: 'input-complete', }, ], createdAt: new Date(), - }; + } - const result = uiMessageToModelMessages(uiMessage); - expect(result[0].content).toBe("Let me check"); - expect(result[0].toolCalls).toBeDefined(); - }); + const result = uiMessageToModelMessages(uiMessage) + expect(result[0]?.content).toBe('Let me check') + expect(result[0]?.toolCalls).toBeDefined() + }) - it("should handle empty content", () => { + it('should handle empty content', () => { const uiMessage: UIMessage = { - id: "msg-1", - role: "user", + id: 'msg-1', + role: 'user', parts: [], createdAt: new Date(), - }; + } - const result = uiMessageToModelMessages(uiMessage); - expect(result[0].content).toBeNull(); - }); - }); + const result = uiMessageToModelMessages(uiMessage) + expect(result[0]?.content).toBeNull() + }) + }) - describe("modelMessageToUIMessage", () => { - it("should convert text message", () => { + describe('modelMessageToUIMessage', () => { + it('should convert text message', () => { const modelMessage: ModelMessage = { - role: "user", - content: "Hello", - }; - - const result = modelMessageToUIMessage(modelMessage, "msg-1"); - expect(result.id).toBe("msg-1"); - expect(result.role).toBe("user"); - expect(result.parts).toHaveLength(1); + role: 'user', + content: 'Hello', + } + + const result = modelMessageToUIMessage(modelMessage, 'msg-1') + expect(result.id).toBe('msg-1') + expect(result.role).toBe('user') + expect(result.parts).toHaveLength(1) expect(result.parts[0]).toEqual({ - type: "text", - content: "Hello", - }); - }); + type: 'text', + content: 'Hello', + }) + }) - it("should generate ID if not provided", () => { + it('should generate ID if not provided', () => { const modelMessage: ModelMessage = { - role: "user", - content: "Hello", - }; + role: 'user', + content: 'Hello', + } - const result = modelMessageToUIMessage(modelMessage); - expect(result.id).toBeTruthy(); - expect(result.id).toMatch(/^msg-/); - 
}); + const result = modelMessageToUIMessage(modelMessage) + expect(result.id).toBeTruthy() + expect(result.id).toMatch(/^msg-/) + }) - it("should convert message with tool calls", () => { + it('should convert message with tool calls', () => { const modelMessage: ModelMessage = { - role: "assistant", + role: 'assistant', + content: 'Here is the info', toolCalls: [ { - id: "call-1", - type: "function", + id: 'call-1', + type: 'function', function: { - name: "get_weather", + name: 'get_weather', arguments: '{"city": "NYC"}', }, }, ], - }; + } - const result = modelMessageToUIMessage(modelMessage); - expect(result.parts).toHaveLength(1); + const result = modelMessageToUIMessage(modelMessage) + // Should have both text and tool-call parts + expect(result.parts).toHaveLength(2) expect(result.parts[0]).toEqual({ - type: "tool-call", - id: "call-1", - name: "get_weather", + type: 'text', + content: 'Here is the info', + }) + expect(result.parts[1]).toEqual({ + type: 'tool-call', + id: 'call-1', + name: 'get_weather', arguments: '{"city": "NYC"}', - state: "input-complete", - }); - }); + state: 'input-complete', + }) + }) - it("should convert tool role message", () => { + it('should convert tool role message', () => { const modelMessage: ModelMessage = { - role: "tool", - content: "Tool result", - toolCallId: "call-1", - }; + role: 'tool', + content: 'Tool result', + toolCallId: 'call-1', + } - const result = modelMessageToUIMessage(modelMessage); - expect(result.role).toBe("assistant"); // Tool messages converted to assistant + const result = modelMessageToUIMessage(modelMessage) + expect(result.role).toBe('assistant') // Tool messages converted to assistant // Tool messages with content create both text and tool-result parts - expect(result.parts.length).toBeGreaterThanOrEqual(1); - const toolResultPart = result.parts.find((p) => p.type === "tool-result"); + expect(result.parts.length).toBeGreaterThanOrEqual(1) + const toolResultPart = result.parts.find((p) => p.type === 
'tool-result') expect(toolResultPart).toEqual({ - type: "tool-result", - toolCallId: "call-1", - content: "Tool result", - state: "complete", - }); - }); - - it("should handle message without content", () => { + type: 'tool-result', + toolCallId: 'call-1', + content: 'Tool result', + state: 'complete', + }) + }) + + it('should handle message without content', () => { const modelMessage: ModelMessage = { - role: "assistant", + role: 'assistant', + content: null, toolCalls: [ { - id: "call-1", - type: "function", + id: 'call-1', + type: 'function', function: { - name: "tool1", - arguments: "{}", + name: 'tool1', + arguments: '{}', }, }, ], - }; + } - const result = modelMessageToUIMessage(modelMessage); - expect(result.parts).toHaveLength(1); - expect(result.parts[0].type).toBe("tool-call"); - }); + const result = modelMessageToUIMessage(modelMessage) + expect(result.parts).toHaveLength(1) + expect(result.parts[0]?.type).toBe('tool-call') + }) - it("should handle empty tool result content", () => { + it('should handle empty tool result content', () => { const modelMessage: ModelMessage = { - role: "tool", + role: 'tool', content: null, - toolCallId: "call-1", - }; + toolCallId: 'call-1', + } - const result = modelMessageToUIMessage(modelMessage); + const result = modelMessageToUIMessage(modelMessage) expect(result.parts[0]).toEqual({ - type: "tool-result", - toolCallId: "call-1", - content: "", - state: "complete", - }); - }); - }); - - describe("modelMessagesToUIMessages", () => { - it("should convert simple messages", () => { - const modelMessages: ModelMessage[] = [ - { role: "user", content: "Hello" }, - { role: "assistant", content: "Hi" }, - ]; - - const result = modelMessagesToUIMessages(modelMessages); - expect(result).toHaveLength(2); - expect(result[0].role).toBe("user"); - expect(result[1].role).toBe("assistant"); - }); - - it("should merge tool results into assistant messages", () => { - const modelMessages: ModelMessage[] = [ + type: 'tool-result', + 
toolCallId: 'call-1', + content: '', + state: 'complete', + }) + }) + }) + + describe('modelMessagesToUIMessages', () => { + it('should convert simple messages', () => { + const modelMessages: Array = [ + { role: 'user', content: 'Hello' }, + { role: 'assistant', content: 'Hi' }, + ] + + const result = modelMessagesToUIMessages(modelMessages) + expect(result).toHaveLength(2) + expect(result[0]?.role).toBe('user') + expect(result[1]?.role).toBe('assistant') + }) + + it('should merge tool results into assistant messages', () => { + const modelMessages: Array = [ { - role: "assistant", + role: 'assistant', + content: null, toolCalls: [ { - id: "call-1", - type: "function", - function: { name: "tool1", arguments: "{}" }, + id: 'call-1', + type: 'function', + function: { name: 'tool1', arguments: '{}' }, }, ], }, { - role: "tool", - content: "Result", - toolCallId: "call-1", + role: 'tool', + content: 'Result', + toolCallId: 'call-1', }, - ]; - - const result = modelMessagesToUIMessages(modelMessages); - expect(result).toHaveLength(1); - expect(result[0].parts).toHaveLength(2); // tool-call + tool-result - expect(result[0].parts[1]).toEqual({ - type: "tool-result", - toolCallId: "call-1", - content: "Result", - state: "complete", - }); - }); - - it("should create standalone tool result if no assistant message", () => { - const modelMessages: ModelMessage[] = [ - { role: "user", content: "Hello" }, + ] + + const result = modelMessagesToUIMessages(modelMessages) + expect(result).toHaveLength(1) + expect(result[0]?.parts).toHaveLength(2) // tool-call + tool-result + expect(result[0]?.parts[1]).toEqual({ + type: 'tool-result', + toolCallId: 'call-1', + content: 'Result', + state: 'complete', + }) + }) + + it('should create standalone tool result if no assistant message', () => { + const modelMessages: Array = [ + { role: 'user', content: 'Hello' }, { - role: "tool", - content: "Result", - toolCallId: "call-1", + role: 'tool', + content: 'Result', + toolCallId: 'call-1', }, 
- ]; + ] - const result = modelMessagesToUIMessages(modelMessages); - expect(result).toHaveLength(2); - expect(result[1].role).toBe("assistant"); + const result = modelMessagesToUIMessages(modelMessages) + expect(result).toHaveLength(2) + expect(result[1]?.role).toBe('assistant') // Tool messages with content create both text and tool-result parts - const toolResultPart = result[1].parts.find( - (p) => p.type === "tool-result" - ); - expect(toolResultPart).toBeDefined(); + const toolResultPart = result[1]?.parts.find( + (p) => p.type === 'tool-result', + ) + expect(toolResultPart).toBeDefined() expect(toolResultPart).toEqual({ - type: "tool-result", - toolCallId: "call-1", - content: "Result", - state: "complete", - }); - }); - - it("should reset assistant tracking on non-assistant message", () => { - const modelMessages: ModelMessage[] = [ + type: 'tool-result', + toolCallId: 'call-1', + content: 'Result', + state: 'complete', + }) + }) + + it('should reset assistant tracking on non-assistant message', () => { + const modelMessages: Array = [ { - role: "assistant", - content: "First", + role: 'assistant', + content: 'First', }, - { role: "user", content: "Second" }, + { role: 'user', content: 'Second' }, { - role: "tool", - content: "Result", - toolCallId: "call-1", + role: 'tool', + content: 'Result', + toolCallId: 'call-1', }, - ]; + ] - const result = modelMessagesToUIMessages(modelMessages); - expect(result).toHaveLength(3); + const result = modelMessagesToUIMessages(modelMessages) + expect(result).toHaveLength(3) // Tool result should be standalone since user message reset tracking // Tool messages with content create both text and tool-result parts - const toolResultPart = result[2].parts.find( - (p) => p.type === "tool-result" - ); - expect(toolResultPart).toBeDefined(); + const toolResultPart = result[2]?.parts.find( + (p) => p.type === 'tool-result', + ) + expect(toolResultPart).toBeDefined() expect(toolResultPart).toEqual({ - type: "tool-result", - 
toolCallId: "call-1", - content: "Result", - state: "complete", - }); - }); - - it("should handle multiple tool results for same assistant", () => { - const modelMessages: ModelMessage[] = [ + type: 'tool-result', + toolCallId: 'call-1', + content: 'Result', + state: 'complete', + }) + }) + + it('should handle multiple tool results for same assistant', () => { + const modelMessages: Array = [ { - role: "assistant", + role: 'assistant', + content: null, toolCalls: [ { - id: "call-1", - type: "function", - function: { name: "tool1", arguments: "{}" }, + id: 'call-1', + type: 'function', + function: { name: 'tool1', arguments: '{}' }, }, { - id: "call-2", - type: "function", - function: { name: "tool2", arguments: "{}" }, + id: 'call-2', + type: 'function', + function: { name: 'tool2', arguments: '{}' }, }, ], }, { - role: "tool", - content: "Result 1", - toolCallId: "call-1", + role: 'tool', + content: 'Result 1', + toolCallId: 'call-1', }, { - role: "tool", - content: "Result 2", - toolCallId: "call-2", + role: 'tool', + content: 'Result 2', + toolCallId: 'call-2', }, - ]; + ] - const result = modelMessagesToUIMessages(modelMessages); - expect(result).toHaveLength(1); - expect(result[0].parts).toHaveLength(4); // 2 tool-calls + 2 tool-results - }); - }); + const result = modelMessagesToUIMessages(modelMessages) + expect(result).toHaveLength(1) + expect(result[0]?.parts).toHaveLength(4) // 2 tool-calls + 2 tool-results + }) + }) - describe("normalizeToUIMessage", () => { - it("should normalize UIMessage with missing id", () => { + describe('normalizeToUIMessage', () => { + it('should normalize UIMessage with missing id', () => { const uiMessage: UIMessage = { - id: "", - role: "user", - parts: [{ type: "text", content: "Hello" }], - }; + id: '', + role: 'user', + parts: [{ type: 'text', content: 'Hello' }], + } - const generateId = () => "generated-id"; - const result = normalizeToUIMessage(uiMessage, generateId); + const generateId = () => 'generated-id' + const 
result = normalizeToUIMessage(uiMessage, generateId) - expect(result.id).toBe("generated-id"); - expect(result.createdAt).toBeInstanceOf(Date); - }); + expect(result.id).toBe('generated-id') + expect(result.createdAt).toBeInstanceOf(Date) + }) - it("should normalize UIMessage with missing createdAt", () => { + it('should normalize UIMessage with missing createdAt', () => { const uiMessage: UIMessage = { - id: "msg-1", - role: "user", - parts: [{ type: "text", content: "Hello" }], - }; + id: 'msg-1', + role: 'user', + parts: [{ type: 'text', content: 'Hello' }], + } - const generateId = () => "id"; - const result = normalizeToUIMessage(uiMessage, generateId); + const generateId = () => 'id' + const result = normalizeToUIMessage(uiMessage, generateId) - expect(result.id).toBe("msg-1"); - expect(result.createdAt).toBeInstanceOf(Date); - }); + expect(result.id).toBe('msg-1') + expect(result.createdAt).toBeInstanceOf(Date) + }) - it("should preserve existing id and createdAt", () => { - const createdAt = new Date("2024-01-01"); + it('should preserve existing id and createdAt', () => { + const createdAt = new Date('2024-01-01') const uiMessage: UIMessage = { - id: "msg-1", - role: "user", - parts: [{ type: "text", content: "Hello" }], + id: 'msg-1', + role: 'user', + parts: [{ type: 'text', content: 'Hello' }], createdAt, - }; + } - const generateId = () => "new-id"; - const result = normalizeToUIMessage(uiMessage, generateId); + const generateId = () => 'new-id' + const result = normalizeToUIMessage(uiMessage, generateId) - expect(result.id).toBe("msg-1"); - expect(result.createdAt).toBe(createdAt); - }); + expect(result.id).toBe('msg-1') + expect(result.createdAt).toBe(createdAt) + }) - it("should convert ModelMessage to UIMessage", () => { + it('should convert ModelMessage to UIMessage', () => { const modelMessage: ModelMessage = { - role: "user", - content: "Hello", - }; + role: 'user', + content: 'Hello', + } - const generateId = () => "msg-1"; - const result = 
normalizeToUIMessage(modelMessage, generateId); + const generateId = () => 'msg-1' + const result = normalizeToUIMessage(modelMessage, generateId) - expect(result.id).toBe("msg-1"); - expect(result.role).toBe("user"); - expect(result.parts).toHaveLength(1); - expect(result.createdAt).toBeInstanceOf(Date); - }); + expect(result.id).toBe('msg-1') + expect(result.role).toBe('user') + expect(result.parts).toHaveLength(1) + expect(result.createdAt).toBeInstanceOf(Date) + }) - it("should convert ModelMessage with tool calls", () => { + it('should convert ModelMessage with tool calls', () => { const modelMessage: ModelMessage = { - role: "assistant", + role: 'assistant', + content: null, toolCalls: [ { - id: "call-1", - type: "function", - function: { name: "tool1", arguments: "{}" }, + id: 'call-1', + type: 'function', + function: { name: 'tool1', arguments: '{}' }, }, ], - }; + } - const generateId = () => "msg-1"; - const result = normalizeToUIMessage(modelMessage, generateId); + const generateId = () => 'msg-1' + const result = normalizeToUIMessage(modelMessage, generateId) - expect(result.parts).toHaveLength(1); - expect(result.parts[0].type).toBe("tool-call"); - }); - }); -}); + expect(result.parts).toHaveLength(1) + expect(result.parts[0]?.type).toBe('tool-call') + }) + }) +}) diff --git a/packages/typescript/ai-client/tests/message-updaters.test.ts b/packages/typescript/ai-client/tests/message-updaters.test.ts index eebe287fb..99b4d27cc 100644 --- a/packages/typescript/ai-client/tests/message-updaters.test.ts +++ b/packages/typescript/ai-client/tests/message-updaters.test.ts @@ -1,740 +1,735 @@ -import { describe, it, expect } from "vitest"; +import { describe, expect, it } from 'vitest' import { updateTextPart, - updateToolCallPart, - updateToolResultPart, updateToolCallApproval, + updateToolCallApprovalResponse, + updateToolCallPart, updateToolCallState, updateToolCallWithOutput, - updateToolCallApprovalResponse, -} from "../src/message-updaters"; -import type { 
UIMessage } from "../src/types"; + updateToolResultPart, +} from '../src/message-updaters' +import type { UIMessage } from '../src/types' -describe("message-updaters", () => { - describe("updateTextPart", () => { - it("should add text part to empty message", () => { - const messages: UIMessage[] = [ +describe('message-updaters', () => { + describe('updateTextPart', () => { + it('should add text part to empty message', () => { + const messages: Array = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [], }, - ]; + ] - const result = updateTextPart(messages, "msg-1", "Hello"); + const result = updateTextPart(messages, 'msg-1', 'Hello') - expect(result[0].parts).toHaveLength(1); - expect(result[0].parts[0]).toEqual({ type: "text", content: "Hello" }); - }); + expect(result[0]?.parts).toHaveLength(1) + expect(result[0]?.parts[0]).toEqual({ type: 'text', content: 'Hello' }) + }) - it("should update existing text part", () => { - const messages: UIMessage[] = [ + it('should update existing text part', () => { + const messages: Array = [ { - id: "msg-1", - role: "assistant", - parts: [{ type: "text", content: "Hello" }], + id: 'msg-1', + role: 'assistant', + parts: [{ type: 'text', content: 'Hello' }], }, - ]; + ] - const result = updateTextPart(messages, "msg-1", "Hello world"); + const result = updateTextPart(messages, 'msg-1', 'Hello world') - expect(result[0].parts).toHaveLength(1); - expect(result[0].parts[0]).toEqual({ - type: "text", - content: "Hello world", - }); - }); + expect(result[0]?.parts).toHaveLength(1) + expect(result[0]?.parts[0]).toEqual({ + type: 'text', + content: 'Hello world', + }) + }) - it("should place text part after tool calls", () => { - const messages: UIMessage[] = [ + it('should place text part after tool calls', () => { + const messages: Array = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-call", - id: "tool-1", - name: "test", - arguments: "{}", - 
state: "input-complete", + type: 'tool-call', + id: 'tool-1', + name: 'test', + arguments: '{}', + state: 'input-complete', }, ], }, - ]; + ] - const result = updateTextPart(messages, "msg-1", "Hello"); + const result = updateTextPart(messages, 'msg-1', 'Hello') - expect(result[0].parts).toHaveLength(2); - expect(result[0].parts[0].type).toBe("tool-call"); - expect(result[0].parts[1]).toEqual({ type: "text", content: "Hello" }); - }); + expect(result[0]?.parts).toHaveLength(2) + expect(result[0]?.parts[0]?.type).toBe('tool-call') + expect(result[0]?.parts[1]).toEqual({ type: 'text', content: 'Hello' }) + }) - it("should maintain order: tool calls, other parts, text", () => { - const messages: UIMessage[] = [ + it('should maintain order: tool calls, other parts, text', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-call", - id: "tool-1", - name: "test", - arguments: "{}", - state: "input-complete", + type: 'tool-call', + id: 'tool-1', + name: 'test', + arguments: '{}', + state: 'input-complete', }, { - type: "tool-result", - toolCallId: "tool-1", - content: "result", - state: "complete", + type: 'tool-result', + toolCallId: 'tool-1', + content: 'result', + state: 'complete', }, ], }, - ]; + ] - const result = updateTextPart(messages, "msg-1", "Hello"); + const result = updateTextPart(messages, 'msg-1', 'Hello') - expect(result[0].parts).toHaveLength(3); - expect(result[0].parts[0].type).toBe("tool-call"); - expect(result[0].parts[1].type).toBe("tool-result"); - expect(result[0].parts[2]).toEqual({ type: "text", content: "Hello" }); - }); + expect(result[0]?.parts).toHaveLength(3) + expect(result[0]?.parts[0]?.type).toBe('tool-call') + expect(result[0]?.parts[1]?.type).toBe('tool-result') + expect(result[0]?.parts[2]).toEqual({ type: 'text', content: 'Hello' }) + }) - it("should not modify other messages", () => { - const messages: UIMessage[] = [ + it('should not modify other 
messages', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [], }, { - id: "msg-2", - role: "user", - parts: [{ type: "text", content: "User message" }], + id: 'msg-2', + role: 'user', + parts: [{ type: 'text', content: 'User message' }], }, - ]; + ] - const result = updateTextPart(messages, "msg-1", "Hello"); + const result = updateTextPart(messages, 'msg-1', 'Hello') - expect(result[0].parts).toHaveLength(1); - expect(result[1].parts).toHaveLength(1); - expect(result[1].parts[0]).toEqual({ - type: "text", - content: "User message", - }); - }); + expect(result[0]?.parts).toHaveLength(1) + expect(result[1]?.parts).toHaveLength(1) + expect(result[1]?.parts[0]).toEqual({ + type: 'text', + content: 'User message', + }) + }) - it("should return new array (immutability)", () => { - const messages: UIMessage[] = [ + it('should return new array (immutability)', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [], }, - ]; + ] - const result = updateTextPart(messages, "msg-1", "Hello"); + const result = updateTextPart(messages, 'msg-1', 'Hello') - expect(result).not.toBe(messages); - expect(messages[0].parts).toHaveLength(0); - }); - }); + expect(result).not.toBe(messages) + expect(messages[0]?.parts).toHaveLength(0) + }) + }) - describe("updateToolCallPart", () => { - it("should add tool call part to empty message", () => { - const messages: UIMessage[] = [ + describe('updateToolCallPart', () => { + it('should add tool call part to empty message', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [], }, - ]; + ] - const result = updateToolCallPart(messages, "msg-1", { - id: "tool-1", - name: "test", + const result = updateToolCallPart(messages, 'msg-1', { + id: 'tool-1', + name: 'test', arguments: '{"x": 1}', - state: "input-complete", - }); - - 
expect(result[0].parts).toHaveLength(1); - expect(result[0].parts[0]).toEqual({ - type: "tool-call", - id: "tool-1", - name: "test", + state: 'input-complete', + }) + + expect(result[0]?.parts).toHaveLength(1) + expect(result[0]?.parts[0]).toEqual({ + type: 'tool-call', + id: 'tool-1', + name: 'test', arguments: '{"x": 1}', - state: "input-complete", - }); - }); + state: 'input-complete', + }) + }) - it("should update existing tool call part", () => { - const messages: UIMessage[] = [ + it('should update existing tool call part', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-call", - id: "tool-1", - name: "test", + type: 'tool-call', + id: 'tool-1', + name: 'test', arguments: '{"x": 1}', - state: "input-streaming", + state: 'input-streaming', }, ], }, - ]; + ] - const result = updateToolCallPart(messages, "msg-1", { - id: "tool-1", - name: "test", + const result = updateToolCallPart(messages, 'msg-1', { + id: 'tool-1', + name: 'test', arguments: '{"x": 1, "y": 2}', - state: "input-complete", - }); - - expect(result[0].parts).toHaveLength(1); - expect(result[0].parts[0]).toEqual({ - type: "tool-call", - id: "tool-1", - name: "test", + state: 'input-complete', + }) + + expect(result[0]?.parts).toHaveLength(1) + expect(result[0]?.parts[0]).toEqual({ + type: 'tool-call', + id: 'tool-1', + name: 'test', arguments: '{"x": 1, "y": 2}', - state: "input-complete", - }); - }); + state: 'input-complete', + }) + }) - it("should insert tool call before text parts", () => { - const messages: UIMessage[] = [ + it('should insert tool call before text parts', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", - parts: [{ type: "text", content: "Hello" }], + id: 'msg-1', + role: 'assistant', + parts: [{ type: 'text', content: 'Hello' }], }, - ]; - - const result = updateToolCallPart(messages, "msg-1", { - id: "tool-1", - name: "test", - arguments: "{}", - state: 
"input-complete", - }); - - expect(result[0].parts).toHaveLength(2); - expect(result[0].parts[0].type).toBe("tool-call"); - expect(result[0].parts[1].type).toBe("text"); - }); - - it("should not modify other messages", () => { - const messages: UIMessage[] = [ + ] + + const result = updateToolCallPart(messages, 'msg-1', { + id: 'tool-1', + name: 'test', + arguments: '{}', + state: 'input-complete', + }) + + expect(result[0]?.parts).toHaveLength(2) + expect(result[0]?.parts[0]?.type).toBe('tool-call') + expect(result[0]?.parts[1]?.type).toBe('text') + }) + + it('should not modify other messages', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [], }, { - id: "msg-2", - role: "user", - parts: [{ type: "text", content: "User message" }], + id: 'msg-2', + role: 'user', + parts: [{ type: 'text', content: 'User message' }], }, - ]; - - const result = updateToolCallPart(messages, "msg-1", { - id: "tool-1", - name: "test", - arguments: "{}", - state: "input-complete", - }); - - expect(result[0].parts).toHaveLength(1); - expect(result[1].parts).toHaveLength(1); - }); - }); - - describe("updateToolResultPart", () => { - it("should add tool result part to message", () => { - const messages: UIMessage[] = [ + ] + + const result = updateToolCallPart(messages, 'msg-1', { + id: 'tool-1', + name: 'test', + arguments: '{}', + state: 'input-complete', + }) + + expect(result[0]?.parts).toHaveLength(1) + expect(result[1]?.parts).toHaveLength(1) + }) + }) + + describe('updateToolResultPart', () => { + it('should add tool result part to message', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [], }, - ]; + ] const result = updateToolResultPart( messages, - "msg-1", - "tool-1", - "result content", - "complete" - ); - - expect(result[0].parts).toHaveLength(1); - expect(result[0].parts[0]).toEqual({ - type: "tool-result", - toolCallId: "tool-1", - 
content: "result content", - state: "complete", - }); - }); - - it("should update existing tool result part", () => { - const messages: UIMessage[] = [ + 'msg-1', + 'tool-1', + 'result content', + 'complete', + ) + + expect(result[0]?.parts).toHaveLength(1) + expect(result[0]?.parts[0]).toEqual({ + type: 'tool-result', + toolCallId: 'tool-1', + content: 'result content', + state: 'complete', + }) + }) + + it('should update existing tool result part', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-result", - toolCallId: "tool-1", - content: "old content", - state: "streaming", + type: 'tool-result', + toolCallId: 'tool-1', + content: 'old content', + state: 'streaming', }, ], }, - ]; + ] const result = updateToolResultPart( messages, - "msg-1", - "tool-1", - "new content", - "complete" - ); - - expect(result[0].parts).toHaveLength(1); - expect(result[0].parts[0]).toEqual({ - type: "tool-result", - toolCallId: "tool-1", - content: "new content", - state: "complete", - }); - }); - - it("should include error when provided", () => { - const messages: UIMessage[] = [ + 'msg-1', + 'tool-1', + 'new content', + 'complete', + ) + + expect(result[0]?.parts).toHaveLength(1) + expect(result[0]?.parts[0]).toEqual({ + type: 'tool-result', + toolCallId: 'tool-1', + content: 'new content', + state: 'complete', + }) + }) + + it('should include error when provided', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [], }, - ]; + ] const result = updateToolResultPart( messages, - "msg-1", - "tool-1", - "error content", - "error", - "Something went wrong" - ); - - expect(result[0].parts[0]).toEqual({ - type: "tool-result", - toolCallId: "tool-1", - content: "error content", - state: "error", - error: "Something went wrong", - }); - }); - }); - - describe("updateToolCallApproval", () => { - it("should add approval metadata to tool 
call", () => { - const messages: UIMessage[] = [ + 'msg-1', + 'tool-1', + 'error content', + 'error', + 'Something went wrong', + ) + + expect(result[0]?.parts[0]).toEqual({ + type: 'tool-result', + toolCallId: 'tool-1', + content: 'error content', + state: 'error', + error: 'Something went wrong', + }) + }) + }) + + describe('updateToolCallApproval', () => { + it('should add approval metadata to tool call', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-call", - id: "tool-1", - name: "test", - arguments: "{}", - state: "input-complete", + type: 'tool-call', + id: 'tool-1', + name: 'test', + arguments: '{}', + state: 'input-complete', }, ], }, - ]; + ] const result = updateToolCallApproval( messages, - "msg-1", - "tool-1", - "approval-123" - ); - - const toolCall = result[0].parts[0]; - expect(toolCall.type).toBe("tool-call"); - if (toolCall.type === "tool-call") { - expect(toolCall.state).toBe("approval-requested"); + 'msg-1', + 'tool-1', + 'approval-123', + ) + + const toolCall = result[0]?.parts[0] + expect(toolCall?.type).toBe('tool-call') + if (toolCall?.type === 'tool-call') { + expect(toolCall.state).toBe('approval-requested') expect(toolCall.approval).toEqual({ - id: "approval-123", + id: 'approval-123', needsApproval: true, - }); + }) } - }); + }) - it("should not modify tool call if not found", () => { - const messages: UIMessage[] = [ + it('should not modify tool call if not found', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-call", - id: "tool-1", - name: "test", - arguments: "{}", - state: "input-complete", + type: 'tool-call', + id: 'tool-1', + name: 'test', + arguments: '{}', + state: 'input-complete', }, ], }, - ]; + ] const result = updateToolCallApproval( messages, - "msg-1", - "tool-2", - "approval-123" - ); - - const toolCall = result[0].parts[0]; - if (toolCall.type 
=== "tool-call") { - expect(toolCall.state).toBe("input-complete"); - expect(toolCall.approval).toBeUndefined(); + 'msg-1', + 'tool-2', + 'approval-123', + ) + + const toolCall = result[0]?.parts[0] + if (toolCall?.type === 'tool-call') { + expect(toolCall.state).toBe('input-complete') + expect(toolCall.approval).toBeUndefined() } - }); - }); + }) + }) - describe("updateToolCallState", () => { - it("should update tool call state", () => { - const messages: UIMessage[] = [ + describe('updateToolCallState', () => { + it('should update tool call state', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-call", - id: "tool-1", - name: "test", - arguments: "{}", - state: "input-streaming", + type: 'tool-call', + id: 'tool-1', + name: 'test', + arguments: '{}', + state: 'input-streaming', }, ], }, - ]; + ] const result = updateToolCallState( messages, - "msg-1", - "tool-1", - "input-complete" - ); - - const toolCall = result[0].parts[0]; - if (toolCall.type === "tool-call") { - expect(toolCall.state).toBe("input-complete"); + 'msg-1', + 'tool-1', + 'input-complete', + ) + + const toolCall = result[0]?.parts[0] + if (toolCall?.type === 'tool-call') { + expect(toolCall.state).toBe('input-complete') } - }); + }) - it("should not modify tool call if not found", () => { - const messages: UIMessage[] = [ + it('should not modify tool call if not found', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-call", - id: "tool-1", - name: "test", - arguments: "{}", - state: "input-streaming", + type: 'tool-call', + id: 'tool-1', + name: 'test', + arguments: '{}', + state: 'input-streaming', }, ], }, - ]; + ] const result = updateToolCallState( messages, - "msg-1", - "tool-2", - "input-complete" - ); - - const toolCall = result[0].parts[0]; - if (toolCall.type === "tool-call") { - 
expect(toolCall.state).toBe("input-streaming"); + 'msg-1', + 'tool-2', + 'input-complete', + ) + + const toolCall = result[0]?.parts[0] + if (toolCall?.type === 'tool-call') { + expect(toolCall.state).toBe('input-streaming') } - }); - }); + }) + }) - describe("updateToolCallWithOutput", () => { - it("should update tool call with output", () => { - const messages: UIMessage[] = [ + describe('updateToolCallWithOutput', () => { + it('should update tool call with output', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-call", - id: "tool-1", - name: "test", - arguments: "{}", - state: "input-complete", + type: 'tool-call', + id: 'tool-1', + name: 'test', + arguments: '{}', + state: 'input-complete', }, ], }, - ]; + ] - const result = updateToolCallWithOutput( - messages, - "tool-1", - { result: "success" } - ); - - const toolCall = result[0].parts[0]; - if (toolCall.type === "tool-call") { - expect(toolCall.output).toEqual({ result: "success" }); - expect(toolCall.state).toBe("input-complete"); + const result = updateToolCallWithOutput(messages, 'tool-1', { + result: 'success', + }) + + const toolCall = result[0]?.parts[0] + if (toolCall?.type === 'tool-call') { + expect(toolCall.output).toEqual({ result: 'success' }) + expect(toolCall.state).toBe('input-complete') } - }); + }) - it("should update state when provided", () => { - const messages: UIMessage[] = [ + it('should update state when provided', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-call", - id: "tool-1", - name: "test", - arguments: "{}", - state: "input-complete", + type: 'tool-call', + id: 'tool-1', + name: 'test', + arguments: '{}', + state: 'input-complete', }, ], }, - ]; + ] const result = updateToolCallWithOutput( messages, - "tool-1", - { result: "success" }, - "approval-requested" - ); - - const toolCall = 
result[0].parts[0]; - if (toolCall.type === "tool-call") { - expect(toolCall.state).toBe("approval-requested"); + 'tool-1', + { result: 'success' }, + 'approval-requested', + ) + + const toolCall = result[0]?.parts[0] + if (toolCall?.type === 'tool-call') { + expect(toolCall.state).toBe('approval-requested') } - }); + }) - it("should handle error text", () => { - const messages: UIMessage[] = [ + it('should handle error text', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-call", - id: "tool-1", - name: "test", - arguments: "{}", - state: "input-complete", + type: 'tool-call', + id: 'tool-1', + name: 'test', + arguments: '{}', + state: 'input-complete', }, ], }, - ]; + ] const result = updateToolCallWithOutput( messages, - "tool-1", + 'tool-1', null, undefined, - "Error occurred" - ); + 'Error occurred', + ) - const toolCall = result[0].parts[0]; - if (toolCall.type === "tool-call") { - expect(toolCall.output).toEqual({ error: "Error occurred" }); + const toolCall = result[0]?.parts[0] + if (toolCall?.type === 'tool-call') { + expect(toolCall.output).toEqual({ error: 'Error occurred' }) } - }); + }) - it("should search across all messages", () => { - const messages: UIMessage[] = [ + it('should search across all messages', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [], }, { - id: "msg-2", - role: "assistant", + id: 'msg-2', + role: 'assistant', parts: [ { - type: "tool-call", - id: "tool-1", - name: "test", - arguments: "{}", - state: "input-complete", + type: 'tool-call', + id: 'tool-1', + name: 'test', + arguments: '{}', + state: 'input-complete', }, ], }, - ]; + ] - const result = updateToolCallWithOutput( - messages, - "tool-1", - { result: "success" } - ); - - expect(result[0].parts).toHaveLength(0); - const toolCall = result[1].parts[0]; - if (toolCall.type === "tool-call") { - 
expect(toolCall.output).toEqual({ result: "success" }); + const result = updateToolCallWithOutput(messages, 'tool-1', { + result: 'success', + }) + + expect(result[0]?.parts).toHaveLength(0) + const toolCall = result[1]?.parts[0] + if (toolCall?.type === 'tool-call') { + expect(toolCall.output).toEqual({ result: 'success' }) } - }); - }); + }) + }) - describe("updateToolCallApprovalResponse", () => { - it("should update approval response", () => { - const messages: UIMessage[] = [ + describe('updateToolCallApprovalResponse', () => { + it('should update approval response', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-call", - id: "tool-1", - name: "test", - arguments: "{}", - state: "approval-requested", + type: 'tool-call', + id: 'tool-1', + name: 'test', + arguments: '{}', + state: 'approval-requested', approval: { - id: "approval-123", + id: 'approval-123', needsApproval: true, }, }, ], }, - ]; + ] const result = updateToolCallApprovalResponse( messages, - "approval-123", - true - ); - - const toolCall = result[0].parts[0]; - if (toolCall.type === "tool-call" && toolCall.approval) { - expect(toolCall.approval.approved).toBe(true); - expect(toolCall.state).toBe("approval-responded"); + 'approval-123', + true, + ) + + const toolCall = result[0]?.parts[0] + if (toolCall?.type === 'tool-call' && toolCall.approval) { + expect(toolCall.approval.approved).toBe(true) + expect(toolCall.state).toBe('approval-responded') } - }); + }) - it("should handle denied approval", () => { - const messages: UIMessage[] = [ + it('should handle denied approval', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [ { - type: "tool-call", - id: "tool-1", - name: "test", - arguments: "{}", - state: "approval-requested", + type: 'tool-call', + id: 'tool-1', + name: 'test', + arguments: '{}', + state: 'approval-requested', approval: { - 
id: "approval-123", + id: 'approval-123', needsApproval: true, }, }, ], }, - ]; + ] const result = updateToolCallApprovalResponse( messages, - "approval-123", - false - ); - - const toolCall = result[0].parts[0]; - if (toolCall.type === "tool-call" && toolCall.approval) { - expect(toolCall.approval.approved).toBe(false); - expect(toolCall.state).toBe("approval-responded"); + 'approval-123', + false, + ) + + const toolCall = result[0]?.parts[0] + if (toolCall?.type === 'tool-call' && toolCall.approval) { + expect(toolCall.approval.approved).toBe(false) + expect(toolCall.state).toBe('approval-responded') } - }); + }) - it("should search across all messages", () => { - const messages: UIMessage[] = [ + it('should search across all messages', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 'msg-1', + role: 'assistant', parts: [], }, { - id: "msg-2", - role: "assistant", + id: 'msg-2', + role: 'assistant', parts: [ { - type: "tool-call", - id: "tool-1", - name: "test", - arguments: "{}", - state: "approval-requested", + type: 'tool-call', + id: 'tool-1', + name: 'test', + arguments: '{}', + state: 'approval-requested', approval: { - id: "approval-123", + id: 'approval-123', needsApproval: true, }, }, ], }, - ]; + ] const result = updateToolCallApprovalResponse( messages, - "approval-123", - true - ); - - expect(result[0].parts).toHaveLength(0); - const toolCall = result[1].parts[0]; - if (toolCall.type === "tool-call" && toolCall.approval) { - expect(toolCall.approval.approved).toBe(true); + 'approval-123', + true, + ) + + expect(result[0]?.parts).toHaveLength(0) + const toolCall = result[1]?.parts[0] + if (toolCall?.type === 'tool-call' && toolCall.approval) { + expect(toolCall.approval.approved).toBe(true) } - }); + }) - it("should not modify if approval not found", () => { - const messages: UIMessage[] = [ + it('should not modify if approval not found', () => { + const messages: Array<UIMessage> = [ { - id: "msg-1", - role: "assistant", + id: 
'msg-1', + role: 'assistant', parts: [ { - type: "tool-call", - id: "tool-1", - name: "test", - arguments: "{}", - state: "approval-requested", + type: 'tool-call', + id: 'tool-1', + name: 'test', + arguments: '{}', + state: 'approval-requested', approval: { - id: "approval-123", + id: 'approval-123', needsApproval: true, }, }, ], }, - ]; + ] const result = updateToolCallApprovalResponse( messages, - "approval-999", - true - ); - - const toolCall = result[0].parts[0]; - if (toolCall.type === "tool-call" && toolCall.approval) { - expect(toolCall.approval.approved).toBeUndefined(); - expect(toolCall.state).toBe("approval-requested"); + 'approval-999', + true, + ) + + const toolCall = result[0]?.parts[0] + if (toolCall?.type === 'tool-call' && toolCall.approval) { + expect(toolCall.approval.approved).toBeUndefined() + expect(toolCall.state).toBe('approval-requested') } - }); - }); -}); - + }) + }) +}) diff --git a/packages/typescript/ai-client/tests/stream-processor.test.ts b/packages/typescript/ai-client/tests/stream-processor.test.ts index d954a11b6..1a152da5a 100644 --- a/packages/typescript/ai-client/tests/stream-processor.test.ts +++ b/packages/typescript/ai-client/tests/stream-processor.test.ts @@ -1,333 +1,334 @@ -import { describe, it, expect, vi } from "vitest"; -import { StreamProcessor } from "../src/stream/processor"; +import { describe, expect, it } from 'vitest' +import { StreamProcessor } from '../src/stream/processor' -describe("StreamProcessor - Tool Call Handling", () => { - it("should handle multiple tool calls with same index correctly", async () => { +describe('StreamProcessor - Tool Call Handling', () => { + it('should handle multiple tool calls with same index correctly', async () => { // REAL chunks captured from actual OpenAI stream const rawChunks = [ // First response: getGuitars { - type: "tool_call", - id: "chatcmpl-CXZrKuhSRu4G2qbT1mNYCEvNd8DMJ", - model: "gpt-4o-2024-08-06", + type: 'tool_call', + id: 
'chatcmpl-CXZrKuhSRu4G2qbT1mNYCEvNd8DMJ', + model: 'gpt-4o-2024-08-06', timestamp: 1762118703060, toolCall: { - id: "call_RhSbfkt2O34Wozns6KFxSvL7", - type: "function", + id: 'call_RhSbfkt2O34Wozns6KFxSvL7', + type: 'function', function: { - name: "getGuitars", - arguments: "", + name: 'getGuitars', + arguments: '', }, }, index: 0, }, { - type: "tool_call", - id: "chatcmpl-CXZrKuhSRu4G2qbT1mNYCEvNd8DMJ", - model: "gpt-4o-2024-08-06", + type: 'tool_call', + id: 'chatcmpl-CXZrKuhSRu4G2qbT1mNYCEvNd8DMJ', + model: 'gpt-4o-2024-08-06', timestamp: 1762118703060, toolCall: { - id: "call_RhSbfkt2O34Wozns6KFxSvL7", - type: "function", + id: 'call_RhSbfkt2O34Wozns6KFxSvL7', + type: 'function', function: { - name: "getGuitars", - arguments: "{}", + name: 'getGuitars', + arguments: '{}', }, }, index: 0, }, { - type: "done", - id: "chatcmpl-CXZrKuhSRu4G2qbT1mNYCEvNd8DMJ", - model: "gpt-4o-2024-08-06", + type: 'done', + id: 'chatcmpl-CXZrKuhSRu4G2qbT1mNYCEvNd8DMJ', + model: 'gpt-4o-2024-08-06', timestamp: 1762118703060, - finishReason: "tool_calls", + finishReason: 'tool_calls', }, // Tool result { - type: "tool_result", - id: "chatcmpl-CXZrKuhSRu4G2qbT1mNYCEvNd8DMJ", - model: "gpt-4o-2024-08-06", + type: 'tool_result', + id: 'chatcmpl-CXZrKuhSRu4G2qbT1mNYCEvNd8DMJ', + model: 'gpt-4o-2024-08-06', timestamp: 1762118703087, - toolCallId: "call_RhSbfkt2O34Wozns6KFxSvL7", + toolCallId: 'call_RhSbfkt2O34Wozns6KFxSvL7', content: '[{"id":6,"name":"Travelin\' Man Guitar"}]', }, // Second response: recommendGuitar (ALSO index 0!) 
{ - type: "tool_call", - id: "chatcmpl-CXZrLsKvaT7GXnWB6MY7v0uxylC4I", - model: "gpt-4o-2024-08-06", + type: 'tool_call', + id: 'chatcmpl-CXZrLsKvaT7GXnWB6MY7v0uxylC4I', + model: 'gpt-4o-2024-08-06', timestamp: 1762118703598, toolCall: { - id: "call_SP6fjKyNURSf4EebfrnsdpTM", - type: "function", + id: 'call_SP6fjKyNURSf4EebfrnsdpTM', + type: 'function', function: { - name: "recommendGuitar", - arguments: "", + name: 'recommendGuitar', + arguments: '', }, }, index: 0, }, { - type: "tool_call", - id: "chatcmpl-CXZrLsKvaT7GXnWB6MY7v0uxylC4I", - model: "gpt-4o-2024-08-06", + type: 'tool_call', + id: 'chatcmpl-CXZrLsKvaT7GXnWB6MY7v0uxylC4I', + model: 'gpt-4o-2024-08-06', timestamp: 1762118703598, toolCall: { - id: "call_SP6fjKyNURSf4EebfrnsdpTM", - type: "function", + id: 'call_SP6fjKyNURSf4EebfrnsdpTM', + type: 'function', function: { - name: "recommendGuitar", + name: 'recommendGuitar', arguments: '{"', }, }, index: 0, }, { - type: "tool_call", - id: "chatcmpl-CXZrLsKvaT7GXnWB6MY7v0uxylC4I", - model: "gpt-4o-2024-08-06", + type: 'tool_call', + id: 'chatcmpl-CXZrLsKvaT7GXnWB6MY7v0uxylC4I', + model: 'gpt-4o-2024-08-06', timestamp: 1762118703598, toolCall: { - id: "call_SP6fjKyNURSf4EebfrnsdpTM", - type: "function", + id: 'call_SP6fjKyNURSf4EebfrnsdpTM', + type: 'function', function: { - name: "recommendGuitar", - arguments: "id", + name: 'recommendGuitar', + arguments: 'id', }, }, index: 0, }, { - type: "tool_call", - id: "chatcmpl-CXZrLsKvaT7GXnWB6MY7v0uxylC4I", - model: "gpt-4o-2024-08-06", + type: 'tool_call', + id: 'chatcmpl-CXZrLsKvaT7GXnWB6MY7v0uxylC4I', + model: 'gpt-4o-2024-08-06', timestamp: 1762118703598, toolCall: { - id: "call_SP6fjKyNURSf4EebfrnsdpTM", - type: "function", + id: 'call_SP6fjKyNURSf4EebfrnsdpTM', + type: 'function', function: { - name: "recommendGuitar", + name: 'recommendGuitar', arguments: '":"', }, }, index: 0, }, { - type: "tool_call", - id: "chatcmpl-CXZrLsKvaT7GXnWB6MY7v0uxylC4I", - model: "gpt-4o-2024-08-06", + type: 'tool_call', + id: 
'chatcmpl-CXZrLsKvaT7GXnWB6MY7v0uxylC4I', + model: 'gpt-4o-2024-08-06', timestamp: 1762118703598, toolCall: { - id: "call_SP6fjKyNURSf4EebfrnsdpTM", - type: "function", + id: 'call_SP6fjKyNURSf4EebfrnsdpTM', + type: 'function', function: { - name: "recommendGuitar", - arguments: "6", + name: 'recommendGuitar', + arguments: '6', }, }, index: 0, }, { - type: "tool_call", - id: "chatcmpl-CXZrLsKvaT7GXnWB6MY7v0uxylC4I", - model: "gpt-4o-2024-08-06", + type: 'tool_call', + id: 'chatcmpl-CXZrLsKvaT7GXnWB6MY7v0uxylC4I', + model: 'gpt-4o-2024-08-06', timestamp: 1762118703598, toolCall: { - id: "call_SP6fjKyNURSf4EebfrnsdpTM", - type: "function", + id: 'call_SP6fjKyNURSf4EebfrnsdpTM', + type: 'function', function: { - name: "recommendGuitar", + name: 'recommendGuitar', arguments: '"}', }, }, index: 0, }, { - type: "done", - id: "chatcmpl-CXZrLsKvaT7GXnWB6MY7v0uxylC4I", - model: "gpt-4o-2024-08-06", + type: 'done', + id: 'chatcmpl-CXZrLsKvaT7GXnWB6MY7v0uxylC4I', + model: 'gpt-4o-2024-08-06', timestamp: 1762118703598, - finishReason: "tool_calls", + finishReason: 'tool_calls', }, // Tool result for recommendGuitar { - type: "tool_result", - id: "chatcmpl-CXZrLsKvaT7GXnWB6MY7v0uxylC4I", - model: "gpt-4o-2024-08-06", + type: 'tool_result', + id: 'chatcmpl-CXZrLsKvaT7GXnWB6MY7v0uxylC4I', + model: 'gpt-4o-2024-08-06', timestamp: 1762118703715, - toolCallId: "call_SP6fjKyNURSf4EebfrnsdpTM", + toolCallId: 'call_SP6fjKyNURSf4EebfrnsdpTM', content: '{"id":"6"}', }, // Final response with text content { - type: "content", - id: "chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81", - model: "gpt-4o-2024-08-06", + type: 'content', + id: 'chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81', + model: 'gpt-4o-2024-08-06', timestamp: 1762118704048, - delta: "Complete", - content: "Complete", - role: "assistant", + delta: 'Complete', + content: 'Complete', + role: 'assistant', }, { - type: "content", - id: "chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81", - model: "gpt-4o-2024-08-06", + type: 'content', + id: 
'chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81', + model: 'gpt-4o-2024-08-06', timestamp: 1762118704048, - delta: "!", - content: "Complete!", - role: "assistant", + delta: '!', + content: 'Complete!', + role: 'assistant', }, { - type: "content", - id: "chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81", - model: "gpt-4o-2024-08-06", + type: 'content', + id: 'chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81', + model: 'gpt-4o-2024-08-06', timestamp: 1762118704048, - delta: " If", - content: "Complete! If", - role: "assistant", + delta: ' If', + content: 'Complete! If', + role: 'assistant', }, { - type: "content", - id: "chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81", - model: "gpt-4o-2024-08-06", + type: 'content', + id: 'chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81', + model: 'gpt-4o-2024-08-06', timestamp: 1762118704048, - delta: " you", - content: "Complete! If you", - role: "assistant", + delta: ' you', + content: 'Complete! If you', + role: 'assistant', }, { - type: "content", - id: "chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81", - model: "gpt-4o-2024-08-06", + type: 'content', + id: 'chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81', + model: 'gpt-4o-2024-08-06', timestamp: 1762118704048, - delta: " need", - content: "Complete! If you need", - role: "assistant", + delta: ' need', + content: 'Complete! If you need', + role: 'assistant', }, { - type: "content", - id: "chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81", - model: "gpt-4o-2024-08-06", + type: 'content', + id: 'chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81', + model: 'gpt-4o-2024-08-06', timestamp: 1762118704048, - delta: " anything", - content: "Complete! If you need anything", - role: "assistant", + delta: ' anything', + content: 'Complete! If you need anything', + role: 'assistant', }, { - type: "content", - id: "chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81", - model: "gpt-4o-2024-08-06", + type: 'content', + id: 'chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81', + model: 'gpt-4o-2024-08-06', timestamp: 1762118704048, - delta: " else", - content: "Complete! 
If you need anything else", - role: "assistant", + delta: ' else', + content: 'Complete! If you need anything else', + role: 'assistant', }, { - type: "content", - id: "chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81", - model: "gpt-4o-2024-08-06", + type: 'content', + id: 'chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81', + model: 'gpt-4o-2024-08-06', timestamp: 1762118704048, - delta: ",", - content: "Complete! If you need anything else,", - role: "assistant", + delta: ',', + content: 'Complete! If you need anything else,', + role: 'assistant', }, { - type: "content", - id: "chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81", - model: "gpt-4o-2024-08-06", + type: 'content', + id: 'chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81', + model: 'gpt-4o-2024-08-06', timestamp: 1762118704048, - delta: " feel", - content: "Complete! If you need anything else, feel", - role: "assistant", + delta: ' feel', + content: 'Complete! If you need anything else, feel', + role: 'assistant', }, { - type: "content", - id: "chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81", - model: "gpt-4o-2024-08-06", + type: 'content', + id: 'chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81', + model: 'gpt-4o-2024-08-06', timestamp: 1762118704048, - delta: " free", - content: "Complete! If you need anything else, feel free", - role: "assistant", + delta: ' free', + content: 'Complete! If you need anything else, feel free', + role: 'assistant', }, { - type: "content", - id: "chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81", - model: "gpt-4o-2024-08-06", + type: 'content', + id: 'chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81', + model: 'gpt-4o-2024-08-06', timestamp: 1762118704048, - delta: " to", - content: "Complete! If you need anything else, feel free to", - role: "assistant", + delta: ' to', + content: 'Complete! 
If you need anything else, feel free to', + role: 'assistant', }, { - type: "content", - id: "chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81", - model: "gpt-4o-2024-08-06", + type: 'content', + id: 'chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81', + model: 'gpt-4o-2024-08-06', timestamp: 1762118704048, - delta: " ask", - content: "Complete! If you need anything else, feel free to ask", - role: "assistant", + delta: ' ask', + content: 'Complete! If you need anything else, feel free to ask', + role: 'assistant', }, { - type: "content", - id: "chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81", - model: "gpt-4o-2024-08-06", + type: 'content', + id: 'chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81', + model: 'gpt-4o-2024-08-06', timestamp: 1762118704048, - delta: ".", - content: "Complete! If you need anything else, feel free to ask.", - role: "assistant", + delta: '.', + content: 'Complete! If you need anything else, feel free to ask.', + role: 'assistant', }, { - type: "done", - id: "chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81", - model: "gpt-4o-2024-08-06", + type: 'done', + id: 'chatcmpl-CXZrLlrA0MSqH2JAUVEz6OWwc7z81', + model: 'gpt-4o-2024-08-06', timestamp: 1762118704048, - finishReason: "stop", + finishReason: 'stop', }, - ]; + ] // Track what handlers are called - const events: any[] = []; + const events: Array<any> = [] const processor = new StreamProcessor({ handlers: { onTextUpdate: (content) => { - events.push({ type: "text", content }); + events.push({ type: 'text', content }) }, onToolCallStateChange: (index, id, name, state, args) => { - events.push({ type: "tool-call", index, id, name, state, args }); + events.push({ type: 'tool-call', index, id, name, state, args }) }, }, - }); + }) // Convert chunks to async iterable + // eslint-disable-next-line @typescript-eslint/require-await async function* createStream() { for (const chunk of rawChunks) { - yield chunk; + yield chunk } } - const result = await processor.process(createStream()); + const result = await processor.process(createStream()) //
Expected: TWO tool calls with different IDs - expect(result.toolCalls).toBeDefined(); - expect(result.toolCalls!.length).toBe(2); + expect(result.toolCalls).toBeDefined() + expect(result.toolCalls!.length).toBe(2) // First tool call: getGuitars - const getGuitarsCall = result.toolCalls![0]; - expect(getGuitarsCall.function.name).toBe("getGuitars"); - expect(getGuitarsCall.function.arguments).toBe("{}"); - expect(getGuitarsCall.id).toBe("call_RhSbfkt2O34Wozns6KFxSvL7"); + const getGuitarsCall = result.toolCalls![0] + expect(getGuitarsCall.function.name).toBe('getGuitars') + expect(getGuitarsCall.function.arguments).toBe('{}') + expect(getGuitarsCall.id).toBe('call_RhSbfkt2O34Wozns6KFxSvL7') // Second tool call: recommendGuitar - const recommendCall = result.toolCalls![1]; - expect(recommendCall.function.name).toBe("recommendGuitar"); - expect(recommendCall.function.arguments).toBe('{"id":"6"}'); - expect(recommendCall.id).toBe("call_SP6fjKyNURSf4EebfrnsdpTM"); + const recommendCall = result.toolCalls![1] + expect(recommendCall.function.name).toBe('recommendGuitar') + expect(recommendCall.function.arguments).toBe('{"id":"6"}') + expect(recommendCall.id).toBe('call_SP6fjKyNURSf4EebfrnsdpTM') // Text content should be present expect(result.content).toBe( - "Complete! If you need anything else, feel free to ask." - ); - }); -}); + 'Complete! 
If you need anything else, feel free to ask.', + ) + }) +}) diff --git a/packages/typescript/ai-client/tests/stream/chunk-strategies.test.ts b/packages/typescript/ai-client/tests/stream/chunk-strategies.test.ts index 5c594bd29..25bfd071d 100644 --- a/packages/typescript/ai-client/tests/stream/chunk-strategies.test.ts +++ b/packages/typescript/ai-client/tests/stream/chunk-strategies.test.ts @@ -1,4 +1,4 @@ -import { describe, it, expect, beforeEach, afterEach, vi } from "vitest"; +import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest' import { ImmediateStrategy, PunctuationStrategy, @@ -6,406 +6,407 @@ import { WordBoundaryStrategy, CompositeStrategy, DebounceStrategy, -} from "../../src/stream/chunk-strategies"; +} from '../../src/stream/chunk-strategies' -describe("ImmediateStrategy", () => { - let strategy: ImmediateStrategy; +describe('ImmediateStrategy', () => { + let strategy: ImmediateStrategy beforeEach(() => { - strategy = new ImmediateStrategy(); - }); - - it("should emit on every chunk", () => { - expect(strategy.shouldEmit("Hello", "Hello")).toBe(true); - expect(strategy.shouldEmit(" world", "Hello world")).toBe(true); - expect(strategy.shouldEmit("!", "Hello world!")).toBe(true); - }); - - it("should emit regardless of chunk content", () => { - expect(strategy.shouldEmit("", "")).toBe(true); - expect(strategy.shouldEmit("abc", "abc")).toBe(true); - expect(strategy.shouldEmit("123", "123")).toBe(true); - expect(strategy.shouldEmit("!@#", "!@#")).toBe(true); - }); - - it("should emit regardless of accumulated content", () => { - expect(strategy.shouldEmit("chunk", "")).toBe(true); - expect(strategy.shouldEmit("chunk", "previous")).toBe(true); - expect(strategy.shouldEmit("chunk", "very long accumulated text")).toBe(true); - }); -}); - -describe("PunctuationStrategy", () => { - let strategy: PunctuationStrategy; + strategy = new ImmediateStrategy() + }) + + it('should emit on every chunk', () => { + expect(strategy.shouldEmit('Hello', 
'Hello')).toBe(true) + expect(strategy.shouldEmit(' world', 'Hello world')).toBe(true) + expect(strategy.shouldEmit('!', 'Hello world!')).toBe(true) + }) + + it('should emit regardless of chunk content', () => { + expect(strategy.shouldEmit('', '')).toBe(true) + expect(strategy.shouldEmit('abc', 'abc')).toBe(true) + expect(strategy.shouldEmit('123', '123')).toBe(true) + expect(strategy.shouldEmit('!@#', '!@#')).toBe(true) + }) + + it('should emit regardless of accumulated content', () => { + expect(strategy.shouldEmit('chunk', '')).toBe(true) + expect(strategy.shouldEmit('chunk', 'previous')).toBe(true) + expect(strategy.shouldEmit('chunk', 'very long accumulated text')).toBe( + true, + ) + }) +}) + +describe('PunctuationStrategy', () => { + let strategy: PunctuationStrategy beforeEach(() => { - strategy = new PunctuationStrategy(); - }); + strategy = new PunctuationStrategy() + }) - it("should emit when chunk contains period", () => { - expect(strategy.shouldEmit("Hello.", "Hello.")).toBe(true); - }); + it('should emit when chunk contains period', () => { + expect(strategy.shouldEmit('Hello.', 'Hello.')).toBe(true) + }) - it("should emit when chunk contains comma", () => { - expect(strategy.shouldEmit("Hi,", "Hi,")).toBe(true); - }); + it('should emit when chunk contains comma', () => { + expect(strategy.shouldEmit('Hi,', 'Hi,')).toBe(true) + }) - it("should emit when chunk contains exclamation", () => { - expect(strategy.shouldEmit("Wow!", "Wow!")).toBe(true); - }); + it('should emit when chunk contains exclamation', () => { + expect(strategy.shouldEmit('Wow!', 'Wow!')).toBe(true) + }) - it("should emit when chunk contains question mark", () => { - expect(strategy.shouldEmit("Why?", "Why?")).toBe(true); - }); + it('should emit when chunk contains question mark', () => { + expect(strategy.shouldEmit('Why?', 'Why?')).toBe(true) + }) - it("should emit when chunk contains semicolon", () => { - expect(strategy.shouldEmit("First;", "First;")).toBe(true); - }); + 
it('should emit when chunk contains semicolon', () => { + expect(strategy.shouldEmit('First;', 'First;')).toBe(true) + }) - it("should emit when chunk contains colon", () => { - expect(strategy.shouldEmit("Title:", "Title:")).toBe(true); - }); + it('should emit when chunk contains colon', () => { + expect(strategy.shouldEmit('Title:', 'Title:')).toBe(true) + }) - it("should emit when chunk contains newline", () => { - expect(strategy.shouldEmit("Line\n", "Line\n")).toBe(true); - }); + it('should emit when chunk contains newline', () => { + expect(strategy.shouldEmit('Line\n', 'Line\n')).toBe(true) + }) - it("should not emit when chunk has no punctuation", () => { - expect(strategy.shouldEmit("Hello", "Hello")).toBe(false); - expect(strategy.shouldEmit(" world", "Hello world")).toBe(false); - expect(strategy.shouldEmit("abc123", "abc123")).toBe(false); - }); + it('should not emit when chunk has no punctuation', () => { + expect(strategy.shouldEmit('Hello', 'Hello')).toBe(false) + expect(strategy.shouldEmit(' world', 'Hello world')).toBe(false) + expect(strategy.shouldEmit('abc123', 'abc123')).toBe(false) + }) - it("should emit when punctuation is in the middle of chunk", () => { - expect(strategy.shouldEmit("Hello.world", "Hello.world")).toBe(true); - expect(strategy.shouldEmit("test,data", "test,data")).toBe(true); - }); + it('should emit when punctuation is in the middle of chunk', () => { + expect(strategy.shouldEmit('Hello.world', 'Hello.world')).toBe(true) + expect(strategy.shouldEmit('test,data', 'test,data')).toBe(true) + }) - it("should handle multiple punctuation marks", () => { - expect(strategy.shouldEmit("Hello, world!", "Hello, world!")).toBe(true); - expect(strategy.shouldEmit("What?!", "What?!")).toBe(true); - }); + it('should handle multiple punctuation marks', () => { + expect(strategy.shouldEmit('Hello, world!', 'Hello, world!')).toBe(true) + expect(strategy.shouldEmit('What?!', 'What?!')).toBe(true) + }) - it("should handle empty chunks", () => { 
- expect(strategy.shouldEmit("", "")).toBe(false); - }); -}); + it('should handle empty chunks', () => { + expect(strategy.shouldEmit('', '')).toBe(false) + }) +}) -describe("BatchStrategy", () => { - it("should emit every N chunks", () => { - const strategy = new BatchStrategy(3); +describe('BatchStrategy', () => { + it('should emit every N chunks', () => { + const strategy = new BatchStrategy(3) - expect(strategy.shouldEmit("1", "1")).toBe(false); - expect(strategy.shouldEmit("2", "12")).toBe(false); - expect(strategy.shouldEmit("3", "123")).toBe(true); // 3rd chunk + expect(strategy.shouldEmit('1', '1')).toBe(false) + expect(strategy.shouldEmit('2', '12')).toBe(false) + expect(strategy.shouldEmit('3', '123')).toBe(true) // 3rd chunk - expect(strategy.shouldEmit("4", "1234")).toBe(false); - expect(strategy.shouldEmit("5", "12345")).toBe(false); - expect(strategy.shouldEmit("6", "123456")).toBe(true); // 6th chunk - }); + expect(strategy.shouldEmit('4', '1234')).toBe(false) + expect(strategy.shouldEmit('5', '12345')).toBe(false) + expect(strategy.shouldEmit('6', '123456')).toBe(true) // 6th chunk + }) - it("should reset counter when reset is called", () => { - const strategy = new BatchStrategy(3); + it('should reset counter when reset is called', () => { + const strategy = new BatchStrategy(3) - strategy.shouldEmit("1", "1"); - strategy.shouldEmit("2", "12"); + strategy.shouldEmit('1', '1') + strategy.shouldEmit('2', '12') - strategy.reset(); + strategy.reset() - expect(strategy.shouldEmit("1", "1")).toBe(false); - expect(strategy.shouldEmit("2", "12")).toBe(false); - expect(strategy.shouldEmit("3", "123")).toBe(true); - }); + expect(strategy.shouldEmit('1', '1')).toBe(false) + expect(strategy.shouldEmit('2', '12')).toBe(false) + expect(strategy.shouldEmit('3', '123')).toBe(true) + }) - it("should work with batch size of 1", () => { - const strategy = new BatchStrategy(1); + it('should work with batch size of 1', () => { + const strategy = new BatchStrategy(1) - 
expect(strategy.shouldEmit("1", "1")).toBe(true); - expect(strategy.shouldEmit("2", "12")).toBe(true); - expect(strategy.shouldEmit("3", "123")).toBe(true); - }); + expect(strategy.shouldEmit('1', '1')).toBe(true) + expect(strategy.shouldEmit('2', '12')).toBe(true) + expect(strategy.shouldEmit('3', '123')).toBe(true) + }) - it("should use default batch size of 5", () => { - const strategy = new BatchStrategy(); + it('should use default batch size of 5', () => { + const strategy = new BatchStrategy() - expect(strategy.shouldEmit("1", "1")).toBe(false); - expect(strategy.shouldEmit("2", "12")).toBe(false); - expect(strategy.shouldEmit("3", "123")).toBe(false); - expect(strategy.shouldEmit("4", "1234")).toBe(false); - expect(strategy.shouldEmit("5", "12345")).toBe(true); - }); + expect(strategy.shouldEmit('1', '1')).toBe(false) + expect(strategy.shouldEmit('2', '12')).toBe(false) + expect(strategy.shouldEmit('3', '123')).toBe(false) + expect(strategy.shouldEmit('4', '1234')).toBe(false) + expect(strategy.shouldEmit('5', '12345')).toBe(true) + }) - it("should handle very large batch sizes", () => { - const strategy = new BatchStrategy(10); + it('should handle very large batch sizes', () => { + const strategy = new BatchStrategy(10) for (let i = 1; i < 10; i++) { - expect(strategy.shouldEmit(`${i}`, "1".repeat(i))).toBe(false); + expect(strategy.shouldEmit(`${i}`, '1'.repeat(i))).toBe(false) } - expect(strategy.shouldEmit("10", "1".repeat(10))).toBe(true); - }); + expect(strategy.shouldEmit('10', '1'.repeat(10))).toBe(true) + }) - it("should work correctly across multiple batches", () => { - const strategy = new BatchStrategy(2); + it('should work correctly across multiple batches', () => { + const strategy = new BatchStrategy(2) // First batch - expect(strategy.shouldEmit("a", "a")).toBe(false); - expect(strategy.shouldEmit("b", "ab")).toBe(true); + expect(strategy.shouldEmit('a', 'a')).toBe(false) + expect(strategy.shouldEmit('b', 'ab')).toBe(true) // Second batch - 
expect(strategy.shouldEmit("c", "abc")).toBe(false); - expect(strategy.shouldEmit("d", "abcd")).toBe(true); + expect(strategy.shouldEmit('c', 'abc')).toBe(false) + expect(strategy.shouldEmit('d', 'abcd')).toBe(true) // Third batch - expect(strategy.shouldEmit("e", "abcde")).toBe(false); - expect(strategy.shouldEmit("f", "abcdef")).toBe(true); - }); -}); + expect(strategy.shouldEmit('e', 'abcde')).toBe(false) + expect(strategy.shouldEmit('f', 'abcdef')).toBe(true) + }) +}) -describe("WordBoundaryStrategy", () => { - let strategy: WordBoundaryStrategy; +describe('WordBoundaryStrategy', () => { + let strategy: WordBoundaryStrategy beforeEach(() => { - strategy = new WordBoundaryStrategy(); - }); - - it("should emit when chunk ends with space", () => { - expect(strategy.shouldEmit("Hello ", "Hello ")).toBe(true); - }); - - it("should emit when chunk ends with tab", () => { - expect(strategy.shouldEmit("Hello\t", "Hello\t")).toBe(true); - }); - - it("should emit when chunk ends with newline", () => { - expect(strategy.shouldEmit("Hello\n", "Hello\n")).toBe(true); - }); - - it("should not emit when chunk ends with letter", () => { - expect(strategy.shouldEmit("Hello", "Hello")).toBe(false); - expect(strategy.shouldEmit("Hel", "Hel")).toBe(false); - }); - - it("should not emit when chunk ends with punctuation (no space)", () => { - expect(strategy.shouldEmit("Hello!", "Hello!")).toBe(false); - expect(strategy.shouldEmit("Hello.", "Hello.")).toBe(false); - expect(strategy.shouldEmit("Hello?", "Hello?")).toBe(false); - }); - - it("should emit when chunk ends with multiple spaces", () => { - expect(strategy.shouldEmit("Hello ", "Hello ")).toBe(true); - expect(strategy.shouldEmit("Hello\t\t", "Hello\t\t")).toBe(true); - }); - - it("should not emit when chunk starts with whitespace but ends with character", () => { - expect(strategy.shouldEmit(" Hello", " Hello")).toBe(false); - expect(strategy.shouldEmit("\tHello", "\tHello")).toBe(false); - }); - - it("should handle empty 
chunks", () => { - expect(strategy.shouldEmit("", "")).toBe(false); - }); - - it("should handle chunks that are only whitespace", () => { - expect(strategy.shouldEmit(" ", " ")).toBe(true); - expect(strategy.shouldEmit("\t", "\t")).toBe(true); - expect(strategy.shouldEmit("\n", "\n")).toBe(true); - }); -}); - -describe("CompositeStrategy", () => { - it("should emit if ANY sub-strategy returns true", () => { + strategy = new WordBoundaryStrategy() + }) + + it('should emit when chunk ends with space', () => { + expect(strategy.shouldEmit('Hello ', 'Hello ')).toBe(true) + }) + + it('should emit when chunk ends with tab', () => { + expect(strategy.shouldEmit('Hello\t', 'Hello\t')).toBe(true) + }) + + it('should emit when chunk ends with newline', () => { + expect(strategy.shouldEmit('Hello\n', 'Hello\n')).toBe(true) + }) + + it('should not emit when chunk ends with letter', () => { + expect(strategy.shouldEmit('Hello', 'Hello')).toBe(false) + expect(strategy.shouldEmit('Hel', 'Hel')).toBe(false) + }) + + it('should not emit when chunk ends with punctuation (no space)', () => { + expect(strategy.shouldEmit('Hello!', 'Hello!')).toBe(false) + expect(strategy.shouldEmit('Hello.', 'Hello.')).toBe(false) + expect(strategy.shouldEmit('Hello?', 'Hello?')).toBe(false) + }) + + it('should emit when chunk ends with multiple spaces', () => { + expect(strategy.shouldEmit('Hello ', 'Hello ')).toBe(true) + expect(strategy.shouldEmit('Hello\t\t', 'Hello\t\t')).toBe(true) + }) + + it('should not emit when chunk starts with whitespace but ends with character', () => { + expect(strategy.shouldEmit(' Hello', ' Hello')).toBe(false) + expect(strategy.shouldEmit('\tHello', '\tHello')).toBe(false) + }) + + it('should handle empty chunks', () => { + expect(strategy.shouldEmit('', '')).toBe(false) + }) + + it('should handle chunks that are only whitespace', () => { + expect(strategy.shouldEmit(' ', ' ')).toBe(true) + expect(strategy.shouldEmit('\t', '\t')).toBe(true) + 
expect(strategy.shouldEmit('\n', '\n')).toBe(true) + }) +}) + +describe('CompositeStrategy', () => { + it('should emit if ANY sub-strategy returns true', () => { const strategy = new CompositeStrategy([ new PunctuationStrategy(), new WordBoundaryStrategy(), - ]); + ]) // Punctuation - should emit - expect(strategy.shouldEmit("Hello.", "Hello.")).toBe(true); + expect(strategy.shouldEmit('Hello.', 'Hello.')).toBe(true) // Word boundary - should emit - expect(strategy.shouldEmit("Hello ", "Hello ")).toBe(true); + expect(strategy.shouldEmit('Hello ', 'Hello ')).toBe(true) // Both - should emit - expect(strategy.shouldEmit("Hello. ", "Hello. ")).toBe(true); + expect(strategy.shouldEmit('Hello. ', 'Hello. ')).toBe(true) // Neither - should not emit - expect(strategy.shouldEmit("Hello", "Hello")).toBe(false); - }); + expect(strategy.shouldEmit('Hello', 'Hello')).toBe(false) + }) - it("should reset all sub-strategies", () => { - const batch1 = new BatchStrategy(2); - const batch2 = new BatchStrategy(3); - const strategy = new CompositeStrategy([batch1, batch2]); + it('should reset all sub-strategies', () => { + const batch1 = new BatchStrategy(2) + const batch2 = new BatchStrategy(3) + const strategy = new CompositeStrategy([batch1, batch2]) - batch1.shouldEmit("1", "1"); - batch2.shouldEmit("1", "1"); + batch1.shouldEmit('1', '1') + batch2.shouldEmit('1', '1') - strategy.reset(); + strategy.reset() // After reset, counters should be back to 0 - expect(batch1.shouldEmit("1", "1")).toBe(false); - expect(batch2.shouldEmit("1", "1")).toBe(false); - }); + expect(batch1.shouldEmit('1', '1')).toBe(false) + expect(batch2.shouldEmit('1', '1')).toBe(false) + }) - it("should work with empty strategies array", () => { - const strategy = new CompositeStrategy([]); - expect(strategy.shouldEmit("Hello", "Hello")).toBe(false); - }); + it('should work with empty strategies array', () => { + const strategy = new CompositeStrategy([]) + expect(strategy.shouldEmit('Hello', 
'Hello')).toBe(false) + }) - it("should work with single strategy", () => { - const strategy = new CompositeStrategy([new ImmediateStrategy()]); - expect(strategy.shouldEmit("Hello", "Hello")).toBe(true); - }); + it('should work with single strategy', () => { + const strategy = new CompositeStrategy([new ImmediateStrategy()]) + expect(strategy.shouldEmit('Hello', 'Hello')).toBe(true) + }) - it("should handle strategies without reset method", () => { - const strategyWithoutReset = new ImmediateStrategy(); - const strategy = new CompositeStrategy([strategyWithoutReset]); + it('should handle strategies without reset method', () => { + const strategyWithoutReset = new ImmediateStrategy() + const strategy = new CompositeStrategy([strategyWithoutReset]) // Should not throw when calling reset - expect(() => strategy.reset()).not.toThrow(); - }); + expect(() => strategy.reset()).not.toThrow() + }) - it("should work with three or more strategies", () => { + it('should work with three or more strategies', () => { const strategy = new CompositeStrategy([ new PunctuationStrategy(), new WordBoundaryStrategy(), new ImmediateStrategy(), - ]); + ]) // Should always emit because ImmediateStrategy always returns true - expect(strategy.shouldEmit("Hello", "Hello")).toBe(true); - expect(strategy.shouldEmit(" world", "Hello world")).toBe(true); - }); + expect(strategy.shouldEmit('Hello', 'Hello')).toBe(true) + expect(strategy.shouldEmit(' world', 'Hello world')).toBe(true) + }) - it("should handle mixed strategy results correctly", () => { + it('should handle mixed strategy results correctly', () => { const strategy = new CompositeStrategy([ new PunctuationStrategy(), new WordBoundaryStrategy(), - ]); + ]) // Only punctuation - should emit - expect(strategy.shouldEmit("Hello.", "Hello.")).toBe(true); + expect(strategy.shouldEmit('Hello.', 'Hello.')).toBe(true) // Only word boundary - should emit - expect(strategy.shouldEmit("Hello ", "Hello ")).toBe(true); + 
expect(strategy.shouldEmit('Hello ', 'Hello ')).toBe(true) // Both - should emit - expect(strategy.shouldEmit("Hello. ", "Hello. ")).toBe(true); + expect(strategy.shouldEmit('Hello. ', 'Hello. ')).toBe(true) // Neither - should not emit - expect(strategy.shouldEmit("Hello", "Hello")).toBe(false); - }); -}); + expect(strategy.shouldEmit('Hello', 'Hello')).toBe(false) + }) +}) -describe("DebounceStrategy", () => { +describe('DebounceStrategy', () => { beforeEach(() => { - vi.useFakeTimers(); - }); + vi.useFakeTimers() + }) afterEach(() => { - vi.restoreAllMocks(); - vi.useRealTimers(); - }); + vi.restoreAllMocks() + vi.useRealTimers() + }) - it("should not emit immediately on first chunk", () => { - const strategy = new DebounceStrategy(100); + it('should not emit immediately on first chunk', () => { + const strategy = new DebounceStrategy(100) - expect(strategy.shouldEmit("Hello", "Hello")).toBe(false); - }); + expect(strategy.shouldEmit('Hello', 'Hello')).toBe(false) + }) - it("should schedule emission after delay", () => { - const strategy = new DebounceStrategy(100); + it('should schedule emission after delay', () => { + const strategy = new DebounceStrategy(100) + + strategy.shouldEmit('Hello', 'Hello') - strategy.shouldEmit("Hello", "Hello"); - // After delay, shouldEmitNow should be true - vi.advanceTimersByTime(100); - + vi.advanceTimersByTime(100) + // Note: The current implementation has a limitation - shouldEmitNow // is set asynchronously, so we can't check it synchronously. // This test documents the current behavior. 
- expect(strategy.shouldEmit(" world", "Hello world")).toBe(false); - }); + expect(strategy.shouldEmit(' world', 'Hello world')).toBe(false) + }) + + it('should reset timeout when new chunk arrives before delay', () => { + const strategy = new DebounceStrategy(100) - it("should reset timeout when new chunk arrives before delay", () => { - const strategy = new DebounceStrategy(100); + strategy.shouldEmit('Hello', 'Hello') - strategy.shouldEmit("Hello", "Hello"); - // Advance time but not enough to trigger - vi.advanceTimersByTime(50); - + vi.advanceTimersByTime(50) + // New chunk should reset the timer - strategy.shouldEmit(" world", "Hello world"); - + strategy.shouldEmit(' world', 'Hello world') + // Advance remaining time from first chunk - should not trigger - vi.advanceTimersByTime(50); - + vi.advanceTimersByTime(50) + // Advance time for second chunk - should trigger - vi.advanceTimersByTime(50); - + vi.advanceTimersByTime(50) + // The strategy should still return false synchronously - expect(strategy.shouldEmit("!", "Hello world!")).toBe(false); - }); + expect(strategy.shouldEmit('!', 'Hello world!')).toBe(false) + }) - it("should use default delay of 100ms", () => { - const strategy = new DebounceStrategy(); + it('should use default delay of 100ms', () => { + const strategy = new DebounceStrategy() - expect(strategy.shouldEmit("Hello", "Hello")).toBe(false); - }); + expect(strategy.shouldEmit('Hello', 'Hello')).toBe(false) + }) - it("should clear timeout on reset", () => { - const strategy = new DebounceStrategy(100); + it('should clear timeout on reset', () => { + const strategy = new DebounceStrategy(100) - strategy.shouldEmit("Hello", "Hello"); - strategy.reset(); + strategy.shouldEmit('Hello', 'Hello') + strategy.reset() // After reset, timeout should be cleared - vi.advanceTimersByTime(100); + vi.advanceTimersByTime(100) // New chunk after reset - expect(strategy.shouldEmit(" world", "Hello world")).toBe(false); - }); - - it("should handle multiple 
rapid chunks", () => { - const strategy = new DebounceStrategy(100); - - strategy.shouldEmit("a", "a"); - vi.advanceTimersByTime(30); - - strategy.shouldEmit("b", "ab"); - vi.advanceTimersByTime(30); - - strategy.shouldEmit("c", "abc"); - vi.advanceTimersByTime(30); - - strategy.shouldEmit("d", "abcd"); - + expect(strategy.shouldEmit(' world', 'Hello world')).toBe(false) + }) + + it('should handle multiple rapid chunks', () => { + const strategy = new DebounceStrategy(100) + + strategy.shouldEmit('a', 'a') + vi.advanceTimersByTime(30) + + strategy.shouldEmit('b', 'ab') + vi.advanceTimersByTime(30) + + strategy.shouldEmit('c', 'abc') + vi.advanceTimersByTime(30) + + strategy.shouldEmit('d', 'abcd') + // All should return false synchronously - expect(strategy.shouldEmit("e", "abcde")).toBe(false); - }); + expect(strategy.shouldEmit('e', 'abcde')).toBe(false) + }) - it("should handle custom delay values", () => { - const strategy = new DebounceStrategy(50); + it('should handle custom delay values', () => { + const strategy = new DebounceStrategy(50) - expect(strategy.shouldEmit("Hello", "Hello")).toBe(false); - - strategy.shouldEmit(" world", "Hello world"); - vi.advanceTimersByTime(50); - - expect(strategy.shouldEmit("!", "Hello world!")).toBe(false); - }); + expect(strategy.shouldEmit('Hello', 'Hello')).toBe(false) - it("should handle reset when no timeout is active", () => { - const strategy = new DebounceStrategy(100); + strategy.shouldEmit(' world', 'Hello world') + vi.advanceTimersByTime(50) + + expect(strategy.shouldEmit('!', 'Hello world!')).toBe(false) + }) + + it('should handle reset when no timeout is active', () => { + const strategy = new DebounceStrategy(100) // Reset without any chunks should not throw - expect(() => strategy.reset()).not.toThrow(); - - // Reset after timeout has been cleared should not throw - strategy.shouldEmit("Hello", "Hello"); - strategy.reset(); - expect(() => strategy.reset()).not.toThrow(); - }); -}); + expect(() => 
strategy.reset()).not.toThrow() + // Reset after timeout has been cleared should not throw + strategy.shouldEmit('Hello', 'Hello') + strategy.reset() + expect(() => strategy.reset()).not.toThrow() + }) +}) diff --git a/packages/typescript/ai-client/tests/stream/processor.test.ts b/packages/typescript/ai-client/tests/stream/processor.test.ts index 0d02fc6e9..51382c1ce 100644 --- a/packages/typescript/ai-client/tests/stream/processor.test.ts +++ b/packages/typescript/ai-client/tests/stream/processor.test.ts @@ -1,610 +1,618 @@ -import { describe, it, expect, vi } from "vitest"; -import { StreamProcessor } from "../../src/stream/processor"; +import { describe, it, expect, vi } from 'vitest' +import { StreamProcessor } from '../../src/stream/processor' import { ImmediateStrategy, PunctuationStrategy, BatchStrategy, -} from "../../src/stream/chunk-strategies"; -import type { StreamChunk, StreamProcessorHandlers } from "../../src/stream/types"; +} from '../../src/stream/chunk-strategies' +import type { + StreamChunk, + StreamProcessorHandlers, +} from '../../src/stream/types' // Mock stream generator helper async function* createMockStream( - chunks: StreamChunk[] + chunks: StreamChunk[], ): AsyncGenerator<StreamChunk> { for (const chunk of chunks) { - yield chunk; + yield chunk } } -describe("StreamProcessor", () => { - describe("Text Streaming", () => { - it("should accumulate text content", async () => { +describe('StreamProcessor', () => { + describe('Text Streaming', () => { + it('should accumulate text content', async () => { const handlers: StreamProcessorHandlers = { onTextUpdate: vi.fn(), onStreamEnd: vi.fn(), - }; + } const processor = new StreamProcessor({ chunkStrategy: new ImmediateStrategy(), handlers, - }); + }) const stream = createMockStream([ - { type: "text", content: "Hello" }, - { type: "text", content: " world" }, - { type: "text", content: "!" }, - ]); + { type: 'text', content: 'Hello' }, + { type: 'text', content: ' world' }, + { type: 'text', content: '!'
}, + ]) - const result = await processor.process(stream); + const result = await processor.process(stream) - expect(result.content).toBe("Hello world!"); - expect(handlers.onTextUpdate).toHaveBeenCalledTimes(3); - expect(handlers.onTextUpdate).toHaveBeenNthCalledWith(1, "Hello"); - expect(handlers.onTextUpdate).toHaveBeenNthCalledWith(2, "Hello world"); - expect(handlers.onTextUpdate).toHaveBeenNthCalledWith(3, "Hello world!"); - }); + expect(result.content).toBe('Hello world!') + expect(handlers.onTextUpdate).toHaveBeenCalledTimes(3) + expect(handlers.onTextUpdate).toHaveBeenNthCalledWith(1, 'Hello') + expect(handlers.onTextUpdate).toHaveBeenNthCalledWith(2, 'Hello world') + expect(handlers.onTextUpdate).toHaveBeenNthCalledWith(3, 'Hello world!') + }) - it("should respect ImmediateStrategy", async () => { + it('should respect ImmediateStrategy', async () => { const handlers: StreamProcessorHandlers = { onTextUpdate: vi.fn(), - }; + } const processor = new StreamProcessor({ chunkStrategy: new ImmediateStrategy(), handlers, - }); + }) const stream = createMockStream([ - { type: "text", content: "Hello" }, - { type: "text", content: " world" }, - ]); + { type: 'text', content: 'Hello' }, + { type: 'text', content: ' world' }, + ]) - await processor.process(stream); + await processor.process(stream) - expect(handlers.onTextUpdate).toHaveBeenCalledTimes(2); - }); + expect(handlers.onTextUpdate).toHaveBeenCalledTimes(2) + }) - it("should respect PunctuationStrategy", async () => { + it('should respect PunctuationStrategy', async () => { const handlers: StreamProcessorHandlers = { onTextUpdate: vi.fn(), - }; + } const processor = new StreamProcessor({ chunkStrategy: new PunctuationStrategy(), handlers, - }); + }) const stream = createMockStream([ - { type: "text", content: "Hello" }, - { type: "text", content: " world" }, - { type: "text", content: "!" }, - { type: "text", content: " How" }, - { type: "text", content: " are" }, - { type: "text", content: " you?" 
}, - ]); + { type: 'text', content: 'Hello' }, + { type: 'text', content: ' world' }, + { type: 'text', content: '!' }, + { type: 'text', content: ' How' }, + { type: 'text', content: ' are' }, + { type: 'text', content: ' you?' }, + ]) - await processor.process(stream); + await processor.process(stream) // Should only emit on punctuation (! and ?) - expect(handlers.onTextUpdate).toHaveBeenCalledTimes(2); - expect(handlers.onTextUpdate).toHaveBeenNthCalledWith(1, "Hello world!"); + expect(handlers.onTextUpdate).toHaveBeenCalledTimes(2) + expect(handlers.onTextUpdate).toHaveBeenNthCalledWith(1, 'Hello world!') expect(handlers.onTextUpdate).toHaveBeenNthCalledWith( 2, - "Hello world! How are you?" - ); - }); + 'Hello world! How are you?', + ) + }) - it("should respect BatchStrategy", async () => { + it('should respect BatchStrategy', async () => { const handlers: StreamProcessorHandlers = { onTextUpdate: vi.fn(), - }; + } const processor = new StreamProcessor({ chunkStrategy: new BatchStrategy(3), handlers, - }); + }) const stream = createMockStream([ - { type: "text", content: "1" }, - { type: "text", content: "2" }, - { type: "text", content: "3" }, - { type: "text", content: "4" }, - { type: "text", content: "5" }, - { type: "text", content: "6" }, - ]); + { type: 'text', content: '1' }, + { type: 'text', content: '2' }, + { type: 'text', content: '3' }, + { type: 'text', content: '4' }, + { type: 'text', content: '5' }, + { type: 'text', content: '6' }, + ]) - await processor.process(stream); + await processor.process(stream) // Should emit on chunks 3 and 6 - expect(handlers.onTextUpdate).toHaveBeenCalledTimes(2); - expect(handlers.onTextUpdate).toHaveBeenNthCalledWith(1, "123"); - expect(handlers.onTextUpdate).toHaveBeenNthCalledWith(2, "123456"); - }); + expect(handlers.onTextUpdate).toHaveBeenCalledTimes(2) + expect(handlers.onTextUpdate).toHaveBeenNthCalledWith(1, '123') + expect(handlers.onTextUpdate).toHaveBeenNthCalledWith(2, '123456') + }) it("should 
emit final text on stream end even if strategy hasn't triggered", async () => { const handlers: StreamProcessorHandlers = { onTextUpdate: vi.fn(), onStreamEnd: vi.fn(), - }; + } const processor = new StreamProcessor({ chunkStrategy: new BatchStrategy(10), // High batch size handlers, - }); + }) const stream = createMockStream([ - { type: "text", content: "Hello" }, - { type: "text", content: " world" }, - ]); + { type: 'text', content: 'Hello' }, + { type: 'text', content: ' world' }, + ]) - const result = await processor.process(stream); + const result = await processor.process(stream) - expect(result.content).toBe("Hello world"); + expect(result.content).toBe('Hello world') expect(handlers.onStreamEnd).toHaveBeenCalledWith( - "Hello world", - undefined - ); - }); - }); - - describe("Single Tool Call", () => { - it("should track a single tool call", async () => { + 'Hello world', + undefined, + ) + }) + }) + + describe('Single Tool Call', () => { + it('should track a single tool call', async () => { const handlers: StreamProcessorHandlers = { onToolCallStart: vi.fn(), onToolCallDelta: vi.fn(), onToolCallComplete: vi.fn(), onStreamEnd: vi.fn(), - }; + } const processor = new StreamProcessor({ handlers, - }); + }) const stream = createMockStream([ { - type: "tool-call-delta", + type: 'tool-call-delta', toolCallIndex: 0, toolCall: { - id: "call_1", - function: { name: "getWeather", arguments: '{"lo' }, + id: 'call_1', + function: { name: 'getWeather', arguments: '{"lo' }, }, }, { - type: "tool-call-delta", + type: 'tool-call-delta', toolCallIndex: 0, toolCall: { - id: "call_1", - function: { name: "getWeather", arguments: 'cation":' }, + id: 'call_1', + function: { name: 'getWeather', arguments: 'cation":' }, }, }, { - type: "tool-call-delta", + type: 'tool-call-delta', toolCallIndex: 0, toolCall: { - id: "call_1", - function: { name: "getWeather", arguments: ' "Paris"}' }, + id: 'call_1', + function: { name: 'getWeather', arguments: ' "Paris"}' }, }, }, - ]); + ]) - 
const result = await processor.process(stream); + const result = await processor.process(stream) // Verify start event - expect(handlers.onToolCallStart).toHaveBeenCalledTimes(1); + expect(handlers.onToolCallStart).toHaveBeenCalledTimes(1) expect(handlers.onToolCallStart).toHaveBeenCalledWith( 0, - "call_1", - "getWeather" - ); + 'call_1', + 'getWeather', + ) // Verify delta events - expect(handlers.onToolCallDelta).toHaveBeenCalledTimes(3); - expect(handlers.onToolCallDelta).toHaveBeenNthCalledWith(1, 0, '{"lo'); - expect(handlers.onToolCallDelta).toHaveBeenNthCalledWith(2, 0, 'cation":'); + expect(handlers.onToolCallDelta).toHaveBeenCalledTimes(3) + expect(handlers.onToolCallDelta).toHaveBeenNthCalledWith(1, 0, '{"lo') + expect(handlers.onToolCallDelta).toHaveBeenNthCalledWith(2, 0, 'cation":') expect(handlers.onToolCallDelta).toHaveBeenNthCalledWith( 3, 0, - ' "Paris"}' - ); + ' "Paris"}', + ) // Verify completion (triggered by stream end) - expect(handlers.onToolCallComplete).toHaveBeenCalledTimes(1); + expect(handlers.onToolCallComplete).toHaveBeenCalledTimes(1) expect(handlers.onToolCallComplete).toHaveBeenCalledWith( 0, - "call_1", - "getWeather", - '{"location": "Paris"}' - ); + 'call_1', + 'getWeather', + '{"location": "Paris"}', + ) // Verify result - expect(result.toolCalls).toHaveLength(1); + expect(result.toolCalls).toHaveLength(1) expect(result.toolCalls![0]).toEqual({ - id: "call_1", - type: "function", + id: 'call_1', + type: 'function', function: { - name: "getWeather", + name: 'getWeather', arguments: '{"location": "Paris"}', }, - }); - }); - }); + }) + }) + }) - describe("Parallel Tool Calls", () => { - it("should handle multiple parallel tool calls", async () => { + describe('Parallel Tool Calls', () => { + it('should handle multiple parallel tool calls', async () => { const handlers: StreamProcessorHandlers = { onToolCallStart: vi.fn(), onToolCallDelta: vi.fn(), onToolCallComplete: vi.fn(), onStreamEnd: vi.fn(), - }; + } const processor = new 
StreamProcessor({ handlers, - }); + }) const stream = createMockStream([ { - type: "tool-call-delta", + type: 'tool-call-delta', toolCallIndex: 0, toolCall: { - id: "call_1", - function: { name: "getWeather", arguments: '{"lo' }, + id: 'call_1', + function: { name: 'getWeather', arguments: '{"lo' }, }, }, { - type: "tool-call-delta", + type: 'tool-call-delta', toolCallIndex: 1, toolCall: { - id: "call_2", - function: { name: "getTime", arguments: '{"ci' }, + id: 'call_2', + function: { name: 'getTime', arguments: '{"ci' }, }, }, { - type: "tool-call-delta", + type: 'tool-call-delta', toolCallIndex: 0, toolCall: { - id: "call_1", - function: { name: "getWeather", arguments: 'cation":"Paris"}' }, + id: 'call_1', + function: { name: 'getWeather', arguments: 'cation":"Paris"}' }, }, }, { - type: "tool-call-delta", + type: 'tool-call-delta', toolCallIndex: 1, toolCall: { - id: "call_2", - function: { name: "getTime", arguments: 'ty":"Tokyo"}' }, + id: 'call_2', + function: { name: 'getTime', arguments: 'ty":"Tokyo"}' }, }, }, - ]); + ]) - const result = await processor.process(stream); + const result = await processor.process(stream) // Should start both tool calls - expect(handlers.onToolCallStart).toHaveBeenCalledTimes(2); + expect(handlers.onToolCallStart).toHaveBeenCalledTimes(2) expect(handlers.onToolCallStart).toHaveBeenNthCalledWith( 1, 0, - "call_1", - "getWeather" - ); + 'call_1', + 'getWeather', + ) expect(handlers.onToolCallStart).toHaveBeenNthCalledWith( 2, 1, - "call_2", - "getTime" - ); + 'call_2', + 'getTime', + ) // Tool 0 completes when tool 1 starts - expect(handlers.onToolCallComplete).toHaveBeenCalledTimes(2); + expect(handlers.onToolCallComplete).toHaveBeenCalledTimes(2) // Both tool calls in result - expect(result.toolCalls).toHaveLength(2); - expect(result.toolCalls![0].function.name).toBe("getWeather"); - expect(result.toolCalls![1].function.name).toBe("getTime"); - }); + expect(result.toolCalls).toHaveLength(2) + 
expect(result.toolCalls![0].function.name).toBe('getWeather') + expect(result.toolCalls![1].function.name).toBe('getTime') + }) - it("should complete tool calls when switching indices", async () => { + it('should complete tool calls when switching indices', async () => { const handlers: StreamProcessorHandlers = { onToolCallComplete: vi.fn(), - }; + } const processor = new StreamProcessor({ handlers, - }); + }) const stream = createMockStream([ { - type: "tool-call-delta", + type: 'tool-call-delta', toolCallIndex: 0, toolCall: { - id: "call_1", - function: { name: "tool1", arguments: "args1" }, + id: 'call_1', + function: { name: 'tool1', arguments: 'args1' }, }, }, { - type: "tool-call-delta", + type: 'tool-call-delta', toolCallIndex: 1, toolCall: { - id: "call_2", - function: { name: "tool2", arguments: "args2" }, + id: 'call_2', + function: { name: 'tool2', arguments: 'args2' }, }, }, - ]); + ]) - await processor.process(stream); + await processor.process(stream) // Tool 0 should complete when tool 1 starts expect(handlers.onToolCallComplete).toHaveBeenNthCalledWith( 1, 0, - "call_1", - "tool1", - "args1" - ); - }); - }); - - describe("Mixed: Tool Calls + Text", () => { - it("should complete tool calls when text arrives", async () => { + 'call_1', + 'tool1', + 'args1', + ) + }) + }) + + describe('Mixed: Tool Calls + Text', () => { + it('should complete tool calls when text arrives', async () => { const handlers: StreamProcessorHandlers = { onToolCallStart: vi.fn(), onToolCallComplete: vi.fn(), onTextUpdate: vi.fn(), onStreamEnd: vi.fn(), - }; + } const processor = new StreamProcessor({ chunkStrategy: new ImmediateStrategy(), handlers, - }); + }) const stream = createMockStream([ { - type: "tool-call-delta", + type: 'tool-call-delta', toolCallIndex: 0, toolCall: { - id: "call_1", - function: { name: "getWeather", arguments: '{"location":"Paris"}' }, + id: 'call_1', + function: { name: 'getWeather', arguments: '{"location":"Paris"}' }, }, }, - { type: "text", 
content: "The weather in Paris is" }, - { type: "text", content: " sunny" }, - { type: "text", content: " and warm." }, - ]); + { type: 'text', content: 'The weather in Paris is' }, + { type: 'text', content: ' sunny' }, + { type: 'text', content: ' and warm.' }, + ]) - const result = await processor.process(stream); + const result = await processor.process(stream) // Tool call should start expect(handlers.onToolCallStart).toHaveBeenCalledWith( 0, - "call_1", - "getWeather" - ); + 'call_1', + 'getWeather', + ) // Tool call should complete when text arrives expect(handlers.onToolCallComplete).toHaveBeenCalledWith( 0, - "call_1", - "getWeather", - '{"location":"Paris"}' - ); + 'call_1', + 'getWeather', + '{"location":"Paris"}', + ) // Text should accumulate - expect(result.content).toBe("The weather in Paris is sunny and warm."); + expect(result.content).toBe('The weather in Paris is sunny and warm.') // Should have both tool calls and text - expect(result.toolCalls).toHaveLength(1); - expect(result.content).toBeTruthy(); - }); - }); + expect(result.toolCalls).toHaveLength(1) + expect(result.content).toBeTruthy() + }) + }) - describe("Edge Cases", () => { - it("should handle empty stream", async () => { + describe('Edge Cases', () => { + it('should handle empty stream', async () => { const handlers: StreamProcessorHandlers = { onStreamEnd: vi.fn(), - }; + } const processor = new StreamProcessor({ handlers, - }); + }) - const stream = createMockStream([]); - const result = await processor.process(stream); + const stream = createMockStream([]) + const result = await processor.process(stream) - expect(result.content).toBe(""); - expect(result.toolCalls).toBeUndefined(); - expect(handlers.onStreamEnd).toHaveBeenCalledWith("", undefined); - }); + expect(result.content).toBe('') + expect(result.toolCalls).toBeUndefined() + expect(handlers.onStreamEnd).toHaveBeenCalledWith('', undefined) + }) - it("should handle text-only stream", async () => { + it('should handle text-only 
stream', async () => { const processor = new StreamProcessor({ handlers: {}, - }); + }) const stream = createMockStream([ - { type: "text", content: "Hello world" }, - ]); + { type: 'text', content: 'Hello world' }, + ]) - const result = await processor.process(stream); + const result = await processor.process(stream) - expect(result.content).toBe("Hello world"); - expect(result.toolCalls).toBeUndefined(); - }); + expect(result.content).toBe('Hello world') + expect(result.toolCalls).toBeUndefined() + }) - it("should handle tool-calls-only stream", async () => { + it('should handle tool-calls-only stream', async () => { const processor = new StreamProcessor({ handlers: {}, - }); + }) const stream = createMockStream([ { - type: "tool-call-delta", + type: 'tool-call-delta', toolCallIndex: 0, toolCall: { - id: "call_1", - function: { name: "test", arguments: "args" }, + id: 'call_1', + function: { name: 'test', arguments: 'args' }, }, }, - ]); + ]) - const result = await processor.process(stream); + const result = await processor.process(stream) - expect(result.content).toBe(""); - expect(result.toolCalls).toHaveLength(1); - }); + expect(result.content).toBe('') + expect(result.toolCalls).toHaveLength(1) + }) - it("should handle missing optional handlers gracefully", async () => { + it('should handle missing optional handlers gracefully', async () => { const processor = new StreamProcessor({ handlers: {}, // No handlers - }); + }) const stream = createMockStream([ - { type: "text", content: "Hello" }, + { type: 'text', content: 'Hello' }, { - type: "tool-call-delta", + type: 'tool-call-delta', toolCallIndex: 0, toolCall: { - id: "call_1", - function: { name: "test", arguments: "args" }, + id: 'call_1', + function: { name: 'test', arguments: 'args' }, }, }, - ]); + ]) // Should not throw - const result = await processor.process(stream); - expect(result).toBeDefined(); - }); - }); + const result = await processor.process(stream) + expect(result).toBeDefined() + }) + }) - 
describe("Stream End Events", () => { - it("should call onStreamEnd with final content and tool calls", async () => { + describe('Stream End Events', () => { + it('should call onStreamEnd with final content and tool calls', async () => { const handlers: StreamProcessorHandlers = { onStreamEnd: vi.fn(), - }; + } const processor = new StreamProcessor({ handlers, - }); + }) const stream = createMockStream([ - { type: "text", content: "Hello" }, + { type: 'text', content: 'Hello' }, { - type: "tool-call-delta", + type: 'tool-call-delta', toolCallIndex: 0, toolCall: { - id: "call_1", - function: { name: "test", arguments: "args" }, + id: 'call_1', + function: { name: 'test', arguments: 'args' }, }, }, - ]); + ]) - await processor.process(stream); + await processor.process(stream) - expect(handlers.onStreamEnd).toHaveBeenCalledWith("Hello", [ + expect(handlers.onStreamEnd).toHaveBeenCalledWith('Hello', [ { - id: "call_1", - type: "function", + id: 'call_1', + type: 'function', function: { - name: "test", - arguments: "args", + name: 'test', + arguments: 'args', }, }, - ]); - }); + ]) + }) - it("should call onStreamEnd with undefined toolCalls if none exist", async () => { + it('should call onStreamEnd with undefined toolCalls if none exist', async () => { const handlers: StreamProcessorHandlers = { onStreamEnd: vi.fn(), - }; + } const processor = new StreamProcessor({ handlers, - }); + }) - const stream = createMockStream([{ type: "text", content: "Hello" }]); + const stream = createMockStream([{ type: 'text', content: 'Hello' }]) - await processor.process(stream); + await processor.process(stream) - expect(handlers.onStreamEnd).toHaveBeenCalledWith("Hello", undefined); - }); - }); + expect(handlers.onStreamEnd).toHaveBeenCalledWith('Hello', undefined) + }) + }) - describe("Delta Content Handling", () => { - it("emits cumulative text for content+delta chunks", async () => { - const onTextUpdate = vi.fn(); - const onStreamEnd = vi.fn(); + describe('Delta Content 
Handling', () => { + it('emits cumulative text for content+delta chunks', async () => { + const onTextUpdate = vi.fn() + const onStreamEnd = vi.fn() const processor = new StreamProcessor({ handlers: { onTextUpdate, onStreamEnd, }, - }); + }) const chunks = [ - { type: "content", content: "", delta: "Hello" }, - { type: "content", content: "Hello", delta: " world" }, - { type: "content", content: "", delta: "!" }, - { type: "done" }, - ]; - - await processor.process((async function* () { - yield* chunks; - })()); - - expect(onTextUpdate).toHaveBeenCalledTimes(3); + { type: 'content', content: '', delta: 'Hello' }, + { type: 'content', content: 'Hello', delta: ' world' }, + { type: 'content', content: '', delta: '!' }, + { type: 'done' }, + ] + + await processor.process( + (async function* () { + yield* chunks + })(), + ) + + expect(onTextUpdate).toHaveBeenCalledTimes(3) expect(onTextUpdate.mock.calls.map((c) => c[0])).toEqual([ - "Hello", - "Hello world", - "Hello world!", - ]); + 'Hello', + 'Hello world', + 'Hello world!', + ]) - expect(onStreamEnd).toHaveBeenCalledWith("Hello world!", undefined); - }); + expect(onStreamEnd).toHaveBeenCalledWith('Hello world!', undefined) + }) - it("emits text when only delta is present", async () => { - const onTextUpdate = vi.fn(); + it('emits text when only delta is present', async () => { + const onTextUpdate = vi.fn() const processor = new StreamProcessor({ handlers: { onTextUpdate, }, - }); + }) - const chunks = [{ type: "content", delta: "Hi there" }, { type: "done" }]; + const chunks = [{ type: 'content', delta: 'Hi there' }, { type: 'done' }] - await processor.process((async function* () { - yield* chunks; - })()); + await processor.process( + (async function* () { + yield* chunks + })(), + ) - expect(onTextUpdate).toHaveBeenCalledTimes(1); - expect(onTextUpdate).toHaveBeenLastCalledWith("Hi there"); - }); + expect(onTextUpdate).toHaveBeenCalledTimes(1) + expect(onTextUpdate).toHaveBeenLastCalledWith('Hi there') + }) - 
it("appends delta-only chunks to previous text", async () => { - const onTextUpdate = vi.fn(); + it('appends delta-only chunks to previous text', async () => { + const onTextUpdate = vi.fn() const processor = new StreamProcessor({ handlers: { onTextUpdate, }, - }); + }) const chunks = [ - { type: "content", delta: "Hello" }, - { type: "content", delta: " world" }, - { type: "done" }, - ]; + { type: 'content', delta: 'Hello' }, + { type: 'content', delta: ' world' }, + { type: 'done' }, + ] - await processor.process((async function* () { - yield* chunks; - })()); + await processor.process( + (async function* () { + yield* chunks + })(), + ) expect(onTextUpdate.mock.calls.map((c) => c[0])).toEqual([ - "Hello", - "Hello world", - ]); - }); - }); -}); - + 'Hello', + 'Hello world', + ]) + }) + }) +}) diff --git a/packages/typescript/ai-client/tests/test-utils.ts b/packages/typescript/ai-client/tests/test-utils.ts index 19a7ebf00..487046752 100644 --- a/packages/typescript/ai-client/tests/test-utils.ts +++ b/packages/typescript/ai-client/tests/test-utils.ts @@ -1,45 +1,48 @@ -import type { ConnectionAdapter } from "../src/connection-adapters"; -import type { StreamChunk } from "@tanstack/ai"; -import type { ModelMessage, UIMessage } from "../src/types"; - +import type { ConnectionAdapter } from '../src/connection-adapters' +import type { ModelMessage, StreamChunk } from '@tanstack/ai' +import type { UIMessage } from '../src/types' /** * Options for creating a mock connection adapter */ -export interface MockConnectionAdapterOptions { +interface MockConnectionAdapterOptions { /** * Chunks to yield from the stream */ - chunks?: StreamChunk[]; - + chunks?: Array + /** * Delay between chunks (in ms) */ - chunkDelay?: number; - + chunkDelay?: number + /** * Whether to throw an error */ - shouldError?: boolean; - + shouldError?: boolean + /** * Error to throw */ - error?: Error; - + error?: Error + /** * Callback when connect is called */ - onConnect?: (messages: 
ModelMessage[] | UIMessage[], data?: Record, abortSignal?: AbortSignal) => void; - + onConnect?: ( + messages: Array | Array, + data?: Record, + abortSignal?: AbortSignal, + ) => void + /** * Callback to check abort signal during streaming */ - onAbort?: (abortSignal: AbortSignal) => void; + onAbort?: (abortSignal: AbortSignal) => void } /** * Create a mock connection adapter for testing - * + * * @example * ```typescript * const adapter = createMockConnectionAdapter({ @@ -51,52 +54,52 @@ export interface MockConnectionAdapterOptions { * ``` */ export function createMockConnectionAdapter( - options: MockConnectionAdapterOptions = {} + options: MockConnectionAdapterOptions = {}, ): ConnectionAdapter { const { chunks = [], chunkDelay = 0, shouldError = false, - error = new Error("Mock adapter error"), + error = new Error('Mock adapter error'), onConnect, onAbort, - } = options; + } = options return { async *connect(messages, data, abortSignal) { if (onConnect) { - onConnect(messages, data, abortSignal); + onConnect(messages, data, abortSignal) } if (shouldError) { - throw error; + throw error } for (const chunk of chunks) { // Check abort signal before yielding if (abortSignal?.aborted) { if (onAbort) { - onAbort(abortSignal); + onAbort(abortSignal) } - return; + return } if (chunkDelay > 0) { - await new Promise((resolve) => setTimeout(resolve, chunkDelay)); + await new Promise((resolve) => setTimeout(resolve, chunkDelay)) } // Check again after delay if (abortSignal?.aborted) { if (onAbort) { - onAbort(abortSignal); + onAbort(abortSignal) } - return; + return } - yield chunk; + yield chunk } }, - }; + } } /** @@ -104,34 +107,34 @@ export function createMockConnectionAdapter( */ export function createTextChunks( text: string, - messageId: string = "msg-1", - model: string = "test" -): StreamChunk[] { - const chunks: StreamChunk[] = []; - let accumulated = ""; - - for (let i = 0; i < text.length; i++) { - accumulated += text[i]; + messageId: string = 'msg-1', + 
model: string = 'test', +): Array { + const chunks: Array = [] + let accumulated = '' + + for (const chunk of text) { + accumulated += chunk chunks.push({ - type: "content", + type: 'content', id: messageId, model, timestamp: Date.now(), - delta: text[i], + delta: chunk, content: accumulated, - role: "assistant", - } as StreamChunk); + role: 'assistant', + } as StreamChunk) } - + chunks.push({ - type: "done", + type: 'done', id: messageId, model, timestamp: Date.now(), - finishReason: "stop", - } as StreamChunk); - - return chunks; + finishReason: 'stop', + } as StreamChunk) + + return chunks } /** @@ -140,59 +143,58 @@ export function createTextChunks( */ export function createToolCallChunks( toolCalls: Array<{ id: string; name: string; arguments: string }>, - messageId: string = "msg-1", - model: string = "test", - includeToolInputAvailable: boolean = true -): StreamChunk[] { - const chunks: StreamChunk[] = []; - + messageId: string = 'msg-1', + model: string = 'test', + includeToolInputAvailable: boolean = true, +): Array { + const chunks: Array = [] + for (let i = 0; i < toolCalls.length; i++) { - const toolCall = toolCalls[i]; + const toolCall = toolCalls[i] chunks.push({ - type: "tool_call", + type: 'tool_call', id: messageId, model, timestamp: Date.now(), index: i, toolCall: { - id: toolCall.id, - type: "function", + id: toolCall?.id, + type: 'function', function: { - name: toolCall.name, - arguments: toolCall.arguments, + name: toolCall?.name, + arguments: toolCall?.arguments, }, }, - } as StreamChunk); - + } as StreamChunk) + // Add tool-input-available chunk if requested if (includeToolInputAvailable) { - let parsedInput: any; + let parsedInput: any try { - parsedInput = JSON.parse(toolCall.arguments); + parsedInput = JSON.parse(toolCall?.arguments ?? 
'') } catch { - parsedInput = toolCall.arguments; + parsedInput = toolCall?.arguments } - + chunks.push({ - type: "tool-input-available", + type: 'tool-input-available', id: messageId, model, timestamp: Date.now(), - toolCallId: toolCall.id, - toolName: toolCall.name, + toolCallId: toolCall?.id, + toolName: toolCall?.name, input: parsedInput, - } as StreamChunk); + } as StreamChunk) } } - + chunks.push({ - type: "done", + type: 'done', id: messageId, model, timestamp: Date.now(), - finishReason: "stop", - } as StreamChunk); - - return chunks; -} + finishReason: 'stop', + } as StreamChunk) + return chunks +} diff --git a/packages/typescript/ai-client/tsconfig.json b/packages/typescript/ai-client/tsconfig.json index d42a93c54..3e93ac127 100644 --- a/packages/typescript/ai-client/tsconfig.json +++ b/packages/typescript/ai-client/tsconfig.json @@ -1,12 +1,8 @@ { "extends": "../../../tsconfig.json", "compilerOptions": { - "outDir": "dist", - "rootDir": ".", - "moduleResolution": "bundler", - "lib": ["ES2023", "DOM"] + "outDir": "dist" }, - "include": ["src/**/*.ts", "tests/**/*.ts"], - "exclude": ["node_modules", "dist", "**/*.config.ts"], - "references": [{ "path": "../ai" }] + "include": ["src/**/*.ts", "src/**/*.tsx", "tests/**/*.ts", "vite.config.ts"], + "exclude": ["node_modules", "dist", "**/*.config.ts", "eslint.config.js"] } diff --git a/packages/typescript/ai-client/tsdown.config.ts b/packages/typescript/ai-client/tsdown.config.ts deleted file mode 100644 index 01597a963..000000000 --- a/packages/typescript/ai-client/tsdown.config.ts +++ /dev/null @@ -1,12 +0,0 @@ -import { defineConfig } from "tsdown"; - -export default defineConfig({ - entry: ["./src/index.ts"], - format: ["esm"], - unbundle: true, - dts: true, - sourcemap: true, - clean: true, - minify: false, -}); - diff --git a/packages/typescript/ai-client/vite.config.ts b/packages/typescript/ai-client/vite.config.ts new file mode 100644 index 000000000..e83c13eb9 --- /dev/null +++ 
b/packages/typescript/ai-client/vite.config.ts @@ -0,0 +1,35 @@ +import { defineConfig, mergeConfig } from 'vitest/config' +import { tanstackViteConfig } from '@tanstack/config/vite' +import packageJson from './package.json' + +const config = defineConfig({ + test: { + name: packageJson.name, + dir: './', + watch: false, + globals: true, + environment: 'node', + include: ['tests/**/*.test.ts'], + coverage: { + provider: 'v8', + reporter: ['text', 'json', 'html', 'lcov'], + exclude: [ + 'node_modules/', + 'dist/', + 'tests/', + '**/*.test.ts', + '**/*.config.ts', + '**/types.ts', + ], + include: ['src/**/*.ts'], + }, + }, +}) + +export default mergeConfig( + config, + tanstackViteConfig({ + entry: ['./src/index.ts'], + srcDir: './src', + }), +) diff --git a/packages/typescript/ai-client/vitest.config.ts b/packages/typescript/ai-client/vitest.config.ts deleted file mode 100644 index 606c0a8ce..000000000 --- a/packages/typescript/ai-client/vitest.config.ts +++ /dev/null @@ -1,32 +0,0 @@ -import { defineConfig } from "vitest/config"; -import { resolve } from "path"; -import { fileURLToPath } from "url"; - -const __dirname = fileURLToPath(new URL(".", import.meta.url)); - -export default defineConfig({ - test: { - globals: true, - environment: "node", - include: ["tests/**/*.test.ts"], - coverage: { - provider: "v8", - reporter: ["text", "json", "html", "lcov"], - exclude: [ - "node_modules/", - "dist/", - "tests/", - "**/*.test.ts", - "**/*.config.ts", - "**/types.ts", - ], - include: ["src/**/*.ts"], - }, - }, - resolve: { - alias: { - "@tanstack/ai/event-client": resolve(__dirname, "../ai/src/event-client.ts"), - }, - }, -}); - diff --git a/packages/typescript/ai-devtools/README.md b/packages/typescript/ai-devtools/README.md new file mode 100644 index 000000000..7c4143074 --- /dev/null +++ b/packages/typescript/ai-devtools/README.md @@ -0,0 +1,104 @@ +
+ +### [Become a Sponsor!](https://github.com/sponsors/tannerlinsley/) +
+ +# TanStack AI + +A powerful, type-safe AI SDK for building AI-powered applications. + +- Provider-agnostic adapters (OpenAI, Anthropic, Gemini, Ollama, etc.) +- Chat completion, streaming, and agent loop strategies +- Headless chat state management with adapters (SSE, HTTP stream, custom) +- Type-safe tools with server/client execution + +### Read the docs → + +## Get Involved + +- We welcome issues and pull requests! +- Participate in [GitHub discussions](https://github.com/TanStack/ai/discussions) +- Chat with the community on [Discord](https://discord.com/invite/WrRKjPJ) +- See [CONTRIBUTING.md](./CONTRIBUTING.md) for setup instructions + +## Partners + + + + + + +
+ CodeRabbit
+ Cloudflare
+AI & you? +

+We're looking for TanStack AI Partners to join our mission! Partner with us to push the boundaries of TanStack AI and build amazing things together. +

+LET'S CHAT +
+ +## Explore the TanStack Ecosystem + +- TanStack Config – Tooling for JS/TS packages +- TanStack DB – Reactive sync client store +- TanStack Devtools – Unified devtools panel +- TanStack Form – Type‑safe form state +- TanStack Pacer – Debouncing, throttling, batching +- TanStack Query – Async state & caching +- TanStack Ranger – Range & slider primitives +- TanStack Router – Type‑safe routing, caching & URL state +- TanStack Start – Full‑stack SSR & streaming +- TanStack Store – Reactive data store +- TanStack Table – Headless datagrids +- TanStack Virtual – Virtualized rendering + +… and more at TanStack.com Ā» + + diff --git a/packages/typescript/ai-devtools/package.json b/packages/typescript/ai-devtools/package.json index b9a8818dd..e023d5320 100644 --- a/packages/typescript/ai-devtools/package.json +++ b/packages/typescript/ai-devtools/package.json @@ -14,12 +14,12 @@ "module": "./dist/esm/index.js", "exports": { ".": { - "import": "./dist/esm/index.js", - "types": "./dist/esm/index.d.ts" + "types": "./dist/esm/index.d.ts", + "import": "./dist/esm/index.js" }, "./production": { - "import": "./dist/esm/production.js", - "types": "./dist/esm/production.d.ts" + "types": "./dist/esm/production.d.ts", + "import": "./dist/esm/production.js" }, "./package.json": "./package.json" }, @@ -28,13 +28,14 @@ "src" ], "scripts": { - "build": "vite build ", - "dev": "tsdown --watch", - "test": "exit 0", - "test:watch": "vitest", - "clean": "rm -rf dist node_modules", - "typecheck": "tsc --noEmit", - "lint": "tsc --noEmit" + "build": "vite build", + "clean": "premove ./build ./dist", + "lint:fix": "eslint ./src --fix", + "test:build": "publint --strict", + "test:eslint": "eslint ./src", + "test:lib": "vitest --passWithNoTests", + "test:lib:dev": "pnpm test:lib --watch", + "test:types": "tsc" }, "keywords": [ "ai", @@ -46,16 +47,15 @@ ], "dependencies": { "@tanstack/ai": "workspace:*", - "@tanstack/devtools-ui": "^0.4.3", - "@tanstack/devtools-utils": "^0.0.6", - "clsx": 
"^2.1.1", - "goober": "^2.1.16", - "solid-js": "^1.9.9" + "@tanstack/devtools-ui": "^0.4.4", + "@tanstack/devtools-utils": "^0.0.8", + "goober": "^2.1.18", + "solid-js": "^1.9.10" }, "devDependencies": { - "@tanstack/config": "^0.22.0", - "solid-js": "^1.9.9", - "vite": "^7.1.6", - "vite-plugin-solid": "^2.11.8" + "@vitest/coverage-v8": "4.0.14", + "jsdom": "^27.2.0", + "vite": "^7.2.4", + "vite-plugin-solid": "^2.11.10" } } diff --git a/packages/typescript/ai-devtools/src/components/ConversationDetails.tsx b/packages/typescript/ai-devtools/src/components/ConversationDetails.tsx index 3212be8af..cac6e747e 100644 --- a/packages/typescript/ai-devtools/src/components/ConversationDetails.tsx +++ b/packages/typescript/ai-devtools/src/components/ConversationDetails.tsx @@ -1,51 +1,68 @@ -import { Component, Show, createSignal, createEffect } from "solid-js"; -import { useStyles } from "../styles/use-styles"; -import { useAIStore, type Conversation } from "../store/ai-context"; -import { ConversationHeader, ConversationTabs, MessagesTab, ChunksTab } from "./conversation"; - -export const ConversationDetails: Component = () => { - const { state } = useAIStore(); - const styles = useStyles(); - const [activeTab, setActiveTab] = createSignal<"messages" | "chunks">("messages"); - - const activeConversation = (): Conversation | undefined => { - if (!state.activeConversationId) return undefined; - return state.conversations[state.activeConversationId]; - }; - - // Update active tab when conversation changes - createEffect(() => { - const conv = activeConversation(); - if (conv) { - // For server conversations, default to chunks tab - if (conv.type === "server") { - setActiveTab("chunks"); - } else { - // For client conversations, default to messages tab - setActiveTab("messages"); - } - } - }); - - return ( - Select a conversation to view details
} - > - {(conv) => ( -
- - -
- - - - - - -
-
- )} - - ); -}; +import { Show, createEffect, createSignal } from 'solid-js' +import { useStyles } from '../styles/use-styles' +import { useAIStore } from '../store/ai-context' +import { + ChunksTab, + ConversationHeader, + ConversationTabs, + MessagesTab, +} from './conversation' +import type { Conversation } from '../store/ai-context' +import type { Component } from 'solid-js' + +export const ConversationDetails: Component = () => { + const { state } = useAIStore() + const styles = useStyles() + const [activeTab, setActiveTab] = createSignal<'messages' | 'chunks'>( + 'messages', + ) + + const activeConversation = (): Conversation | undefined => { + if (!state.activeConversationId) return undefined + return state.conversations[state.activeConversationId] + } + + // Update active tab when conversation changes + createEffect(() => { + const conv = activeConversation() + if (conv) { + // For server conversations, default to chunks tab + if (conv.type === 'server') { + setActiveTab('chunks') + } else { + // For client conversations, default to messages tab + setActiveTab('messages') + } + } + }) + + return ( + + Select a conversation to view details + + } + > + {(conv) => ( +
+ + +
+ + + + + + +
+
+ )} +
+ ) +} diff --git a/packages/typescript/ai-devtools/src/components/ConversationsList.tsx b/packages/typescript/ai-devtools/src/components/ConversationsList.tsx index 1a8d7a592..50ac76561 100644 --- a/packages/typescript/ai-devtools/src/components/ConversationsList.tsx +++ b/packages/typescript/ai-devtools/src/components/ConversationsList.tsx @@ -1,23 +1,29 @@ -import { Component, For } from "solid-js"; -import { useStyles } from "../styles/use-styles"; -import { useAIStore, type Conversation } from "../store/ai-context"; -import { ConversationRow } from "./list"; - -export const ConversationsList: Component<{ - filterType: "all" | "client" | "server"; -}> = (props) => { - const { state } = useAIStore(); - const styles = useStyles(); - - const filteredConversations = () => { - const conversations = Object.values(state.conversations); - if (props.filterType === "all") return conversations; - return conversations.filter((conv: Conversation) => conv.type === props.filterType); - }; - - return ( -
- {(conv: Conversation) => } -
- ); -}; +import { For } from 'solid-js' +import { useStyles } from '../styles/use-styles' +import { useAIStore } from '../store/ai-context' +import { ConversationRow } from './list' +import type { Conversation } from '../store/ai-context' +import type { Component } from 'solid-js' + +export const ConversationsList: Component<{ + filterType: 'all' | 'client' | 'server' +}> = (props) => { + const { state } = useAIStore() + const styles = useStyles() + + const filteredConversations = () => { + const conversations = Object.values(state.conversations) + if (props.filterType === 'all') return conversations + return conversations.filter( + (conv: Conversation) => conv.type === props.filterType, + ) + } + + return ( +
+ + {(conv: Conversation) => } + +
+ ) +} diff --git a/packages/typescript/ai-devtools/src/components/Shell.tsx b/packages/typescript/ai-devtools/src/components/Shell.tsx index 7aea84e44..c4829c969 100644 --- a/packages/typescript/ai-devtools/src/components/Shell.tsx +++ b/packages/typescript/ai-devtools/src/components/Shell.tsx @@ -1,129 +1,142 @@ -import { createSignal, onCleanup, onMount } from "solid-js"; -import { Header, HeaderLogo, MainPanel } from "@tanstack/devtools-ui"; -import { useStyles } from "../styles/use-styles"; -import { ConversationsList } from "./ConversationsList"; -import { ConversationDetails } from "./ConversationDetails"; -import { AIProvider, useAIStore } from "../store/ai-context"; - -export default function Devtools() { - return ( - - - - ); -} - -function DevtoolsContent() { - const { state, clearAllConversations } = useAIStore(); - const styles = useStyles(); - const [leftPanelWidth, setLeftPanelWidth] = createSignal(300); - const [isDragging, setIsDragging] = createSignal(false); - const [filterType, setFilterType] = createSignal<"all" | "client" | "server">("all"); - - let dragStartX = 0; - let dragStartWidth = 0; - - const handleMouseDown = (e: MouseEvent) => { - e.preventDefault(); - e.stopPropagation(); - setIsDragging(true); - document.body.style.cursor = "col-resize"; - document.body.style.userSelect = "none"; - dragStartX = e.clientX; - dragStartWidth = leftPanelWidth(); - }; - - const handleMouseMove = (e: MouseEvent) => { - if (!isDragging()) return; - - e.preventDefault(); - const deltaX = e.clientX - dragStartX; - const newWidth = Math.max(150, Math.min(800, dragStartWidth + deltaX)); - setLeftPanelWidth(newWidth); - }; - - const handleMouseUp = () => { - setIsDragging(false); - document.body.style.cursor = ""; - document.body.style.userSelect = ""; - }; - - onMount(() => { - document.addEventListener("mousemove", handleMouseMove); - document.addEventListener("mouseup", handleMouseUp); - }); - - onCleanup(() => { - document.removeEventListener("mousemove", 
handleMouseMove); - document.removeEventListener("mouseup", handleMouseUp); - }); - - const conversationCount = () => Object.keys(state.conversations).length; - - return ( - -
- TanStack AI -
- -
-
- {/* Filter tabs and action buttons */} -
-
- - - -
-
- -
-
- - -
- -
- -
- -
-
- - ); -} +import { createSignal, onCleanup, onMount } from 'solid-js' +import { Header, HeaderLogo, MainPanel } from '@tanstack/devtools-ui' +import { useStyles } from '../styles/use-styles' +import { AIProvider, useAIStore } from '../store/ai-context' +import { ConversationsList } from './ConversationsList' +import { ConversationDetails } from './ConversationDetails' + +export default function Devtools() { + return ( + + + + ) +} + +function DevtoolsContent() { + const { state, clearAllConversations } = useAIStore() + const styles = useStyles() + const [leftPanelWidth, setLeftPanelWidth] = createSignal(300) + const [isDragging, setIsDragging] = createSignal(false) + const [filterType, setFilterType] = createSignal<'all' | 'client' | 'server'>( + 'all', + ) + + let dragStartX = 0 + let dragStartWidth = 0 + + const handleMouseDown = (e: MouseEvent) => { + e.preventDefault() + e.stopPropagation() + setIsDragging(true) + document.body.style.cursor = 'col-resize' + document.body.style.userSelect = 'none' + dragStartX = e.clientX + dragStartWidth = leftPanelWidth() + } + + const handleMouseMove = (e: MouseEvent) => { + if (!isDragging()) return + + e.preventDefault() + const deltaX = e.clientX - dragStartX + const newWidth = Math.max(150, Math.min(800, dragStartWidth + deltaX)) + setLeftPanelWidth(newWidth) + } + + const handleMouseUp = () => { + setIsDragging(false) + document.body.style.cursor = '' + document.body.style.userSelect = '' + } + + onMount(() => { + document.addEventListener('mousemove', handleMouseMove) + document.addEventListener('mouseup', handleMouseUp) + }) + + onCleanup(() => { + document.removeEventListener('mousemove', handleMouseMove) + document.removeEventListener('mouseup', handleMouseUp) + }) + + const conversationCount = () => Object.keys(state.conversations).length + + return ( + +
+ + TanStack AI + +
+ +
+
+ {/* Filter tabs and action buttons */} +
+
+ + + +
+
+ +
+
+ + +
+ +
+ +
+ +
+
+ + ) +} diff --git a/packages/typescript/ai-devtools/src/components/conversation/ChunkBadges.tsx b/packages/typescript/ai-devtools/src/components/conversation/ChunkBadges.tsx index 1ee958431..cf8935cd3 100644 --- a/packages/typescript/ai-devtools/src/components/conversation/ChunkBadges.tsx +++ b/packages/typescript/ai-devtools/src/components/conversation/ChunkBadges.tsx @@ -1,41 +1,51 @@ -import { Component, Show } from "solid-js"; -import { useStyles } from "../../styles/use-styles"; -import type { Chunk } from "../../store/ai-store"; - -interface ChunkBadgesProps { - chunks: Chunk[]; -} - -export const ChunkBadges: Component = (props) => { - const styles = useStyles(); - - const hasToolCalls = () => props.chunks.some((c) => c.type === "tool_call"); - const hasErrors = () => props.chunks.some((c) => c.type === "error"); - const hasApproval = () => props.chunks.some((c) => c.type === "approval"); - const finishReason = () => props.chunks.find((c) => c.type === "done")?.finishReason; - - return ( - <> - - - šŸ”§ Tool Calls - - - - - āŒ Error - - - - - āš ļø Approval - - - - - āœ“ {finishReason()} - - - - ); -}; +import { Show } from 'solid-js' +import { useStyles } from '../../styles/use-styles' +import type { Component } from 'solid-js' +import type { Chunk } from '../../store/ai-store' + +interface ChunkBadgesProps { + chunks: Array +} + +export const ChunkBadges: Component = (props) => { + const styles = useStyles() + + const hasToolCalls = () => props.chunks.some((c) => c.type === 'tool_call') + const hasErrors = () => props.chunks.some((c) => c.type === 'error') + const hasApproval = () => props.chunks.some((c) => c.type === 'approval') + const finishReason = () => + props.chunks.find((c) => c.type === 'done')?.finishReason + + return ( + <> + + + šŸ”§ Tool Calls + + + + + āŒ Error + + + + + āš ļø Approval + + + + + āœ“ {finishReason()} + + + + ) +} diff --git a/packages/typescript/ai-devtools/src/components/conversation/ChunkItem.tsx 
b/packages/typescript/ai-devtools/src/components/conversation/ChunkItem.tsx index e7dcc040d..3d91ca7bb 100644 --- a/packages/typescript/ai-devtools/src/components/conversation/ChunkItem.tsx +++ b/packages/typescript/ai-devtools/src/components/conversation/ChunkItem.tsx @@ -1,152 +1,204 @@ -import { Component, Show, createSignal } from "solid-js"; -import { useStyles } from "../../styles/use-styles"; -import type { Chunk } from "../../store/ai-store"; -import { formatTimestamp, getChunkTypeColor } from "../utils"; - -interface ChunkItemProps { - chunk: Chunk; - index: number; - variant?: "small" | "large"; -} - -export const ChunkItem: Component = (props) => { - const styles = useStyles(); - const [showRaw, setShowRaw] = createSignal(false); - const isLarge = () => props.variant === "large"; - const chunkCount = () => props.chunk.chunkCount || 1; - - return ( -
- {/* Chunk Header */} -
- {/* Chunk Number */} -
- #{props.index + 1} - 1}> - ({chunkCount()} chunks) - -
- - {/* Type Badge */} -
-
-
- {props.chunk.type} -
-
- - {/* Tool Name Badge */} - -
- šŸ”§ {props.chunk.toolName} -
-
- - {/* Timestamp */} -
- {formatTimestamp(props.chunk.timestamp)} -
- - {/* Toggle Raw JSON Button */} - -
- - {/* Chunk Content */} - - -
- {props.chunk.content} -
-
- -
- āŒ {props.chunk.error} -
-
- -
- āœ“ {isLarge() ? `Finish: ${props.chunk.finishReason}` : props.chunk.finishReason} -
-
- -
-
- āš ļø Approval Required -
- -
- Input: {JSON.stringify(props.chunk.input, null, 2)} -
-
-
-
-
- - {/* Raw JSON View */} - -
- {JSON.stringify(props.chunk, null, 2)} -
-
-
- ); -}; +import { Show, createSignal } from 'solid-js' +import { useStyles } from '../../styles/use-styles' +import { formatTimestamp, getChunkTypeColor } from '../utils' +import type { Component } from 'solid-js' +import type { Chunk } from '../../store/ai-store' + +interface ChunkItemProps { + chunk: Chunk + index: number + variant?: 'small' | 'large' +} + +export const ChunkItem: Component = (props) => { + const styles = useStyles() + const [showRaw, setShowRaw] = createSignal(false) + const isLarge = () => props.variant === 'large' + const chunkCount = () => props.chunk.chunkCount || 1 + + return ( +
+ {/* Chunk Header */} +
+ {/* Chunk Number */} +
+ #{props.index + 1} + 1}> + + ({chunkCount()} chunks) + + +
+ + {/* Type Badge */} +
+
+
+ {props.chunk.type} +
+
+ + {/* Tool Name Badge */} + +
+ šŸ”§ {props.chunk.toolName} +
+
+ + {/* Timestamp */} +
+ {formatTimestamp(props.chunk.timestamp)} +
+ + {/* Toggle Raw JSON Button */} + +
+ + {/* Chunk Content */} + + +
+ {props.chunk.content} +
+
+ +
+ āŒ {props.chunk.error} +
+
+ +
+ āœ“{' '} + {isLarge() + ? `Finish: ${props.chunk.finishReason}` + : props.chunk.finishReason} +
+
+ +
+
+ āš ļø Approval Required +
+ +
+ Input: {JSON.stringify(props.chunk.input, null, 2)} +
+
+
+
+
+ + {/* Raw JSON View */} + +
+ {JSON.stringify(props.chunk, null, 2)} +
+
+
+ ) +} diff --git a/packages/typescript/ai-devtools/src/components/conversation/ChunksCollapsible.tsx b/packages/typescript/ai-devtools/src/components/conversation/ChunksCollapsible.tsx index c67e4a6ff..229611c96 100644 --- a/packages/typescript/ai-devtools/src/components/conversation/ChunksCollapsible.tsx +++ b/packages/typescript/ai-devtools/src/components/conversation/ChunksCollapsible.tsx @@ -1,48 +1,57 @@ -import { Component, For, Show } from "solid-js"; -import { useStyles } from "../../styles/use-styles"; -import type { Chunk } from "../../store/ai-store"; -import { ChunkItem } from "./ChunkItem"; -import { ChunkBadges } from "./ChunkBadges"; - -interface ChunksCollapsibleProps { - chunks: Chunk[]; -} - -export const ChunksCollapsible: Component = (props) => { - const styles = useStyles(); - - const accumulatedContent = () => - props.chunks - .filter((c) => c.type === "content" && (c.content || c.delta)) - .map((c) => c.delta || c.content) - .join(""); - - // Total raw chunks = sum of all chunkCounts - const totalRawChunks = () => props.chunks.reduce((sum, c) => sum + (c.chunkCount || 1), 0); - - return ( -
- -
- {/* Header */} -
- šŸ“¦ Server Chunks ({totalRawChunks()}) - -
- - {/* Accumulated Content Preview */} - -
- {accumulatedContent()} -
-
-
-
-
-
- {(chunk, index) => } -
-
-
- ); -}; +import { For, Show } from 'solid-js' +import { useStyles } from '../../styles/use-styles' +import { ChunkItem } from './ChunkItem' +import { ChunkBadges } from './ChunkBadges' +import type { Component } from 'solid-js' +import type { Chunk } from '../../store/ai-store' + +interface ChunksCollapsibleProps { + chunks: Array +} + +export const ChunksCollapsible: Component = (props) => { + const styles = useStyles() + + const accumulatedContent = () => + props.chunks + .filter((c) => c.type === 'content' && (c.content || c.delta)) + .map((c) => c.delta || c.content) + .join('') + + // Total raw chunks = sum of all chunkCounts + const totalRawChunks = () => + props.chunks.reduce((sum, c) => sum + (c.chunkCount || 1), 0) + + return ( +
+ +
+ {/* Header */} +
+ šŸ“¦ Server Chunks ({totalRawChunks()}) + +
+ + {/* Accumulated Content Preview */} + +
+ {accumulatedContent()} +
+
+
+
+
+
+ + {(chunk, index) => ( + + )} + +
+
+
+ ) +} diff --git a/packages/typescript/ai-devtools/src/components/conversation/ChunksTab.tsx b/packages/typescript/ai-devtools/src/components/conversation/ChunksTab.tsx index abefb123b..ddd739469 100644 --- a/packages/typescript/ai-devtools/src/components/conversation/ChunksTab.tsx +++ b/packages/typescript/ai-devtools/src/components/conversation/ChunksTab.tsx @@ -1,58 +1,72 @@ -import { Component, For, Show } from "solid-js"; -import { useStyles } from "../../styles/use-styles"; -import type { Chunk } from "../../store/ai-store"; -import { MessageGroup } from "./MessageGroup"; - -interface ChunksTabProps { - chunks: Chunk[]; -} - -export const ChunksTab: Component = (props) => { - const styles = useStyles(); - - const groupedChunks = () => { - const groups = new Map>(); - - props.chunks.forEach((chunk) => { - const key = chunk.messageId || "no-message-id"; - if (!groups.has(key)) { - groups.set(key, []); - } - groups.get(key)!.push(chunk); - }); - - return Array.from(groups.entries()); - }; - - // Calculate total raw chunks (sum of all chunkCounts) - const totalRawChunks = () => props.chunks.reduce((sum, c) => sum + (c.chunkCount || 1), 0); - - return ( - 0} - fallback={
No chunks yet
} - > -
- {/* Stream Header */} -
-
-
Stream Responses
-
- {totalRawChunks()} chunks · {groupedChunks().length} messages -
-
-
Grouped by message ID
-
- - {/* Message Groups */} -
- - {([messageId, chunks], groupIndex) => ( - - )} - -
-
-
- ); -}; +import { For, Show } from 'solid-js' +import { useStyles } from '../../styles/use-styles' +import { MessageGroup } from './MessageGroup' +import type { Component } from 'solid-js' +import type { Chunk } from '../../store/ai-store' + +interface ChunksTabProps { + chunks: Array +} + +export const ChunksTab: Component = (props) => { + const styles = useStyles() + + const groupedChunks = () => { + const groups = new Map>() + + props.chunks.forEach((chunk) => { + const key = chunk.messageId || 'no-message-id' + if (!groups.has(key)) { + groups.set(key, []) + } + groups.get(key)!.push(chunk) + }) + + return Array.from(groups.entries()) + } + + // Calculate total raw chunks (sum of all chunkCounts) + const totalRawChunks = () => + props.chunks.reduce((sum, c) => sum + (c.chunkCount || 1), 0) + + return ( + 0} + fallback={ +
No chunks yet
+ } + > +
+ {/* Stream Header */} +
+
+
+ Stream Responses +
+
+ {totalRawChunks()} chunks · {groupedChunks().length} messages +
+
+
+ Grouped by message ID +
+
+ + {/* Message Groups */} +
+ + {([messageId, chunks], groupIndex) => ( + + )} + +
+
+
+ ) +} diff --git a/packages/typescript/ai-devtools/src/components/conversation/ConversationHeader.tsx b/packages/typescript/ai-devtools/src/components/conversation/ConversationHeader.tsx index 0487c6fdc..63ba1378f 100644 --- a/packages/typescript/ai-devtools/src/components/conversation/ConversationHeader.tsx +++ b/packages/typescript/ai-devtools/src/components/conversation/ConversationHeader.tsx @@ -1,104 +1,133 @@ -import { Component, For, Show, createSignal } from "solid-js"; -import { useStyles } from "../../styles/use-styles"; -import type { Conversation } from "../../store/ai-store"; -import { formatDuration } from "../utils"; - -interface ConversationHeaderProps { - conversation: Conversation; -} - -export const ConversationHeader: Component = (props) => { - const styles = useStyles(); - const conv = () => props.conversation; - const [showOptions, setShowOptions] = createSignal(false); - - const hasExtendedInfo = () => { - const c = conv(); - return ( - (c.toolNames && c.toolNames.length > 0) || - (c.options && Object.keys(c.options).length > 0) || - (c.providerOptions && Object.keys(c.providerOptions).length > 0) - ); - }; - - const toolNames = () => conv().toolNames ?? []; - const options = () => conv().options; - const providerOptions = () => conv().providerOptions; - - return ( -
-
-
-
{conv().label}
-
- {conv().status} -
-
-
- {conv().model && `Model: ${conv().model}`} - {conv().provider && ` • Provider: ${conv().provider}`} - {conv().completedAt && ` • Duration: ${formatDuration(conv().completedAt! - conv().startedAt)}`} -
- -
- šŸŽÆ Tokens: - Prompt: {conv().usage?.promptTokens.toLocaleString() || 0} - • - Completion: {conv().usage?.completionTokens.toLocaleString() || 0} - • - - Total: {conv().usage?.totalTokens.toLocaleString() || 0} - -
-
- - - -
- 0}> -
- šŸ”§ Tools: -
- - {(toolName) => {toolName}} - -
-
-
- - {(opts) => ( - 0}> -
- āš™ļø Options: -
{JSON.stringify(opts(), null, 2)}
-
-
- )} -
- - {(provOpts) => ( - 0}> -
- šŸ·ļø Provider Options: -
{JSON.stringify(provOpts(), null, 2)}
-
-
- )} -
-
-
-
-
-
- ); -}; +import { For, Show, createSignal } from 'solid-js' +import { useStyles } from '../../styles/use-styles' +import { formatDuration } from '../utils' +import type { Component } from 'solid-js' +import type { Conversation } from '../../store/ai-store' + +interface ConversationHeaderProps { + conversation: Conversation +} + +export const ConversationHeader: Component = ( + props, +) => { + const styles = useStyles() + const conv = () => props.conversation + const [showOptions, setShowOptions] = createSignal(false) + + const hasExtendedInfo = () => { + const c = conv() + return ( + (c.toolNames && c.toolNames.length > 0) || + (c.options && Object.keys(c.options).length > 0) || + (c.providerOptions && Object.keys(c.providerOptions).length > 0) + ) + } + + const toolNames = () => conv().toolNames ?? [] + const options = () => conv().options + const providerOptions = () => conv().providerOptions + + return ( +
+
+
+
+ {conv().label} +
+
+ {conv().status} +
+
+
+ {conv().model && `Model: ${conv().model}`} + {conv().provider && ` • Provider: ${conv().provider}`} + {conv().completedAt && + ` • Duration: ${formatDuration(conv().completedAt! - conv().startedAt)}`} +
+ +
+ + šŸŽÆ Tokens: + + + Prompt: {conv().usage?.promptTokens.toLocaleString() || 0} + + • + + Completion: {conv().usage?.completionTokens.toLocaleString() || 0} + + • + + Total: {conv().usage?.totalTokens.toLocaleString() || 0} + +
+
+ + + +
+ 0}> +
+ + šŸ”§ Tools: + +
+ + {(toolName) => ( + + {toolName} + + )} + +
+
+
+ + {(opts) => ( + 0}> +
+ + āš™ļø Options: + +
+                        {JSON.stringify(opts(), null, 2)}
+                      
+
+
+ )} +
+ + {(provOpts) => ( + 0}> +
+ + šŸ·ļø Provider Options: + +
+                        {JSON.stringify(provOpts(), null, 2)}
+                      
+
+
+ )} +
+
+
+
+
+
+ ) +} diff --git a/packages/typescript/ai-devtools/src/components/conversation/ConversationTabs.tsx b/packages/typescript/ai-devtools/src/components/conversation/ConversationTabs.tsx index e50ab56be..f3aee16f7 100644 --- a/packages/typescript/ai-devtools/src/components/conversation/ConversationTabs.tsx +++ b/packages/typescript/ai-devtools/src/components/conversation/ConversationTabs.tsx @@ -1,44 +1,50 @@ -import { Component, Show } from "solid-js"; -import { useStyles } from "../../styles/use-styles"; -import type { Conversation } from "../../store/ai-store"; - -interface ConversationTabsProps { - conversation: Conversation; - activeTab: "messages" | "chunks"; - onTabChange: (tab: "messages" | "chunks") => void; -} - -export const ConversationTabs: Component = (props) => { - const styles = useStyles(); - const conv = () => props.conversation; - - // Total raw chunks = sum of all chunkCounts - const totalRawChunks = () => conv().chunks.reduce((sum, c) => sum + (c.chunkCount || 1), 0); - - return ( -
- {/* Only show messages tab for client conversations */} - - - - {/* Only show chunks tab for server-only conversations */} - - - -
- ); -}; +import { Show } from 'solid-js' +import { useStyles } from '../../styles/use-styles' +import type { Component } from 'solid-js' +import type { Conversation } from '../../store/ai-store' + +interface ConversationTabsProps { + conversation: Conversation + activeTab: 'messages' | 'chunks' + onTabChange: (tab: 'messages' | 'chunks') => void +} + +export const ConversationTabs: Component = (props) => { + const styles = useStyles() + const conv = () => props.conversation + + // Total raw chunks = sum of all chunkCounts + const totalRawChunks = () => + conv().chunks.reduce((sum, c) => sum + (c.chunkCount || 1), 0) + + return ( +
+ {/* Only show messages tab for client conversations */} + + + + {/* Only show chunks tab for server-only conversations */} + + + +
+ ) +} diff --git a/packages/typescript/ai-devtools/src/components/conversation/MessageCard.tsx b/packages/typescript/ai-devtools/src/components/conversation/MessageCard.tsx index 8c8df4b9d..6b51b3288 100644 --- a/packages/typescript/ai-devtools/src/components/conversation/MessageCard.tsx +++ b/packages/typescript/ai-devtools/src/components/conversation/MessageCard.tsx @@ -1,80 +1,93 @@ -import { Component, For, Show } from "solid-js"; -import { useStyles } from "../../styles/use-styles"; -import type { Message } from "../../store/ai-store"; -import { ToolCallDisplay } from "./ToolCallDisplay"; -import { ChunksCollapsible } from "./ChunksCollapsible"; -import { formatTimestamp } from "../utils"; - -interface MessageCardProps { - message: Message; -} - -export const MessageCard: Component = (props) => { - const styles = useStyles(); - const msg = () => props.message; - - return ( -
-
-
- {msg().role === "user" ? "U" : "šŸ¤–"} -
-
-
- {msg().role} -
-
-
{formatTimestamp(msg().timestamp)}
- {/* Per-message token usage */} - -
- šŸŽÆ - {msg().usage?.promptTokens.toLocaleString()} in - • - {msg().usage?.completionTokens.toLocaleString()} out -
-
-
- - {/* Thinking content (for extended thinking models) */} - -
- šŸ’­ Thinking... -
{msg().thinkingContent}
-
-
- -
{msg().content}
- - {/* Tool Calls Display */} - 0}> -
- {(tool) => } -
-
- - {/* Chunks Display (for client conversations with server chunks) */} - 0}> - - -
- ); -}; +import { For, Show } from 'solid-js' +import { useStyles } from '../../styles/use-styles' +import { formatTimestamp } from '../utils' +import { ToolCallDisplay } from './ToolCallDisplay' +import { ChunksCollapsible } from './ChunksCollapsible' +import type { Message } from '../../store/ai-store' +import type { Component } from 'solid-js' + +interface MessageCardProps { + message: Message +} + +export const MessageCard: Component = (props) => { + const styles = useStyles() + const msg = () => props.message + + return ( +
+
+
+ {msg().role === 'user' ? 'U' : 'šŸ¤–'} +
+
+
+ {msg().role} +
+
+
+ {formatTimestamp(msg().timestamp)} +
+ {/* Per-message token usage */} + +
+ + šŸŽÆ + + {msg().usage?.promptTokens.toLocaleString()} in + • + {msg().usage?.completionTokens.toLocaleString()} out +
+
+
+ + {/* Thinking content (for extended thinking models) */} + +
+ + šŸ’­ Thinking... + +
+ {msg().thinkingContent} +
+
+
+ +
+ {msg().content} +
+ + {/* Tool Calls Display */} + 0}> +
+ + {(tool) => } + +
+
+ + {/* Chunks Display (for client conversations with server chunks) */} + 0}> + + +
+ ) +} diff --git a/packages/typescript/ai-devtools/src/components/conversation/MessageGroup.tsx b/packages/typescript/ai-devtools/src/components/conversation/MessageGroup.tsx index 3b3f55f2e..f22202556 100644 --- a/packages/typescript/ai-devtools/src/components/conversation/MessageGroup.tsx +++ b/packages/typescript/ai-devtools/src/components/conversation/MessageGroup.tsx @@ -1,67 +1,81 @@ -import { Component, For, Show } from "solid-js"; -import { useStyles } from "../../styles/use-styles"; -import type { Chunk } from "../../store/ai-store"; -import { ChunkItem } from "./ChunkItem"; -import { ChunkBadges } from "./ChunkBadges"; - -interface MessageGroupProps { - messageId: string; - chunks: Chunk[]; - groupIndex: number; -} - -export const MessageGroup: Component = (props) => { - const styles = useStyles(); - - const accumulatedContent = () => - props.chunks - .filter((c) => c.type === "content" && (c.content || c.delta)) - .map((c) => c.content) - .join(""); - - // Total raw chunks = sum of all chunkCounts - const totalRawChunks = () => props.chunks.reduce((sum, c) => sum + (c.chunkCount || 1), 0); - // Consolidated entries = number of entries in the chunks array - const consolidatedEntries = () => props.chunks.length; - - return ( -
- -
- {/* Header */} -
- Message #{props.groupIndex + 1} -
- šŸ“¦ {totalRawChunks()} chunks - - ({consolidatedEntries()} entries) - -
- -
- - {/* Message ID */} -
- ID: {props.messageId} -
- - {/* Accumulated Content Preview */} - -
- {accumulatedContent()} -
-
-
-
- - {/* Chunks in this group */} -
-
- - {(chunk, chunkIndex) => } - -
-
-
- ); -}; +import { For, Show } from 'solid-js' +import { useStyles } from '../../styles/use-styles' +import { ChunkItem } from './ChunkItem' +import { ChunkBadges } from './ChunkBadges' +import type { Component } from 'solid-js' +import type { Chunk } from '../../store/ai-store' + +interface MessageGroupProps { + messageId: string + chunks: Array + groupIndex: number +} + +export const MessageGroup: Component = (props) => { + const styles = useStyles() + + const accumulatedContent = () => + props.chunks + .filter((c) => c.type === 'content' && (c.content || c.delta)) + .map((c) => c.content) + .join('') + + // Total raw chunks = sum of all chunkCounts + const totalRawChunks = () => + props.chunks.reduce((sum, c) => sum + (c.chunkCount || 1), 0) + // Consolidated entries = number of entries in the chunks array + const consolidatedEntries = () => props.chunks.length + + return ( +
+ +
+ {/* Header */} +
+ Message #{props.groupIndex + 1} +
+ šŸ“¦ {totalRawChunks()} chunks + + + ({consolidatedEntries()} entries) + + +
+ +
+ + {/* Message ID */} +
+ ID: {props.messageId} +
+ + {/* Accumulated Content Preview */} + +
+ {accumulatedContent()} +
+
+
+
+ + {/* Chunks in this group */} +
+
+ + {(chunk, chunkIndex) => ( + + )} + +
+
+
+ ) +} diff --git a/packages/typescript/ai-devtools/src/components/conversation/MessagesTab.tsx b/packages/typescript/ai-devtools/src/components/conversation/MessagesTab.tsx index 470522496..3cfd6da8d 100644 --- a/packages/typescript/ai-devtools/src/components/conversation/MessagesTab.tsx +++ b/packages/typescript/ai-devtools/src/components/conversation/MessagesTab.tsx @@ -1,27 +1,30 @@ -import { Component, For, Show } from "solid-js"; -import { useStyles } from "../../styles/use-styles"; -import type { Message } from "../../store/ai-store"; -import { MessageCard } from "./MessageCard"; - -interface MessagesTabProps { - messages: Message[]; -} - -export const MessagesTab: Component = (props) => { - const styles = useStyles(); - - return ( - 0} - fallback={ -
- No messages yet. Start a conversation to see messages here. -
- } - > -
- {(msg) => } -
-
- ); -}; +import { For, Show } from 'solid-js' +import { useStyles } from '../../styles/use-styles' +import { MessageCard } from './MessageCard' +import type { Component } from 'solid-js' +import type { Message } from '../../store/ai-store' + +interface MessagesTabProps { + messages: Array +} + +export const MessagesTab: Component = (props) => { + const styles = useStyles() + + return ( + 0} + fallback={ +
+ No messages yet. Start a conversation to see messages here. +
+ } + > +
+ + {(msg) => } + +
+
+ ) +} diff --git a/packages/typescript/ai-devtools/src/components/conversation/ToolCallDisplay.tsx b/packages/typescript/ai-devtools/src/components/conversation/ToolCallDisplay.tsx index b74e06f49..51c4d0995 100644 --- a/packages/typescript/ai-devtools/src/components/conversation/ToolCallDisplay.tsx +++ b/packages/typescript/ai-devtools/src/components/conversation/ToolCallDisplay.tsx @@ -1,76 +1,91 @@ -import { Component, Show } from "solid-js"; -import { JsonTree } from "@tanstack/devtools-ui"; -import { useStyles } from "../../styles/use-styles"; -import type { ToolCall } from "../../store/ai-store"; - -interface ToolCallDisplayProps { - tool: ToolCall; -} - -export const ToolCallDisplay: Component = (props) => { - const styles = useStyles(); - const tool = () => props.tool; - - // Parse arguments if they're a string - const parsedArguments = () => { - const args = tool().arguments; - if (typeof args === "string") { - try { - return JSON.parse(args); - } catch { - return args; - } - } - return args; - }; - - return ( -
-
-
- {tool().approvalRequired ? "āš ļø" : "šŸ”§"} {tool().name} -
-
- {tool().state} -
- -
APPROVAL REQUIRED
-
-
- -
-
Arguments
-
- } defaultExpansionDepth={2} copyable /> -
-
-
- -
-
Result
-
- } defaultExpansionDepth={2} copyable /> -
-
-
-
- ); -}; +import { Show } from 'solid-js' +import { JsonTree } from '@tanstack/devtools-ui' +import { useStyles } from '../../styles/use-styles' +import type { Component } from 'solid-js' +import type { ToolCall } from '../../store/ai-store' + +interface ToolCallDisplayProps { + tool: ToolCall +} + +export const ToolCallDisplay: Component = (props) => { + const styles = useStyles() + const tool = () => props.tool + + // Parse arguments if they're a string + const parsedArguments = () => { + const args = tool().arguments + if (typeof args === 'string') { + try { + return JSON.parse(args) + } catch { + return args + } + } + return args + } + + return ( +
+
+
+ {tool().approvalRequired ? 'āš ļø' : 'šŸ”§'} {tool().name} +
+
+ {tool().state} +
+ +
+ APPROVAL REQUIRED +
+
+
+ +
+
+ Arguments +
+
+ } + defaultExpansionDepth={2} + copyable + /> +
+
+
+ +
+
+ Result +
+
+ } + defaultExpansionDepth={2} + copyable + /> +
+
+
+
+ ) +} diff --git a/packages/typescript/ai-devtools/src/components/conversation/index.ts b/packages/typescript/ai-devtools/src/components/conversation/index.ts index b2dd46c75..e8d528051 100644 --- a/packages/typescript/ai-devtools/src/components/conversation/index.ts +++ b/packages/typescript/ai-devtools/src/components/conversation/index.ts @@ -1,10 +1,5 @@ -export { ConversationHeader } from "./ConversationHeader"; -export { ConversationTabs } from "./ConversationTabs"; -export { MessageCard } from "./MessageCard"; -export { ToolCallDisplay } from "./ToolCallDisplay"; -export { ChunkBadges } from "./ChunkBadges"; -export { ChunkItem } from "./ChunkItem"; -export { ChunksCollapsible } from "./ChunksCollapsible"; -export { MessageGroup } from "./MessageGroup"; -export { MessagesTab } from "./MessagesTab"; -export { ChunksTab } from "./ChunksTab"; +export { ConversationHeader } from './ConversationHeader' +export { ConversationTabs } from './ConversationTabs' + +export { MessagesTab } from './MessagesTab' +export { ChunksTab } from './ChunksTab' diff --git a/packages/typescript/ai-devtools/src/components/list/ConversationRow.tsx b/packages/typescript/ai-devtools/src/components/list/ConversationRow.tsx index d6bf81e31..e9227a73e 100644 --- a/packages/typescript/ai-devtools/src/components/list/ConversationRow.tsx +++ b/packages/typescript/ai-devtools/src/components/list/ConversationRow.tsx @@ -1,67 +1,83 @@ -import { Component, Show } from "solid-js"; -import { useStyles } from "../../styles/use-styles"; -import { useAIStore, type Conversation } from "../../store/ai-context"; -import { getStatusColor, getTypeColor } from "../utils"; - -interface ConversationRowProps { - conversation: Conversation; -} - -export const ConversationRow: Component = (props) => { - const { state, selectConversation } = useAIStore(); - const styles = useStyles(); - const conv = () => props.conversation; - - const hasToolCalls = () => { - return conv().messages.some((msg) => msg.toolCalls && 
msg.toolCalls.length > 0); - }; - - const countToolCalls = () => { - return conv().messages.reduce((total, msg) => { - return total + (msg.toolCalls?.length || 0); - }, 0); - }; - - // Total raw chunks = sum of all chunkCounts from all chunks - const totalRawChunks = () => { - return conv().chunks.reduce((sum, c) => sum + (c.chunkCount || 1), 0); - }; - - return ( -
selectConversation(conv().id)} - > -
-
-
-
{conv().label}
- -
- šŸ”§ {countToolCalls()} -
-
-
-
-
-
-
šŸ’¬ {conv().messages.length}
-
šŸ“¦ {totalRawChunks()}
- -
- šŸŽÆ {conv().usage?.totalTokens || 0} -
-
-
- -
⟳ Loading...
-
-
- ); -}; +import { Show } from 'solid-js' +import { useStyles } from '../../styles/use-styles' +import { useAIStore } from '../../store/ai-context' +import { getStatusColor, getTypeColor } from '../utils' +import type { Conversation } from '../../store/ai-context' +import type { Component } from 'solid-js' + +interface ConversationRowProps { + conversation: Conversation +} + +export const ConversationRow: Component = (props) => { + const { state, selectConversation } = useAIStore() + const styles = useStyles() + const conv = () => props.conversation + + const hasToolCalls = () => { + return conv().messages.some( + (msg) => msg.toolCalls && msg.toolCalls.length > 0, + ) + } + + const countToolCalls = () => { + return conv().messages.reduce((total, msg) => { + return total + (msg.toolCalls?.length || 0) + }, 0) + } + + // Total raw chunks = sum of all chunkCounts from all chunks + const totalRawChunks = () => { + return conv().chunks.reduce((sum, c) => sum + (c.chunkCount || 1), 0) + } + + return ( +
selectConversation(conv().id)} + > +
+
+
+
{conv().label}
+ +
+ šŸ”§ {countToolCalls()} +
+
+
+
+
+
+
+ šŸ’¬ {conv().messages.length} +
+
+ šŸ“¦ {totalRawChunks()} +
+ +
+ šŸŽÆ {conv().usage?.totalTokens || 0} +
+
+
+ +
+ ⟳ Loading... +
+
+
+ ) +} diff --git a/packages/typescript/ai-devtools/src/components/list/index.ts b/packages/typescript/ai-devtools/src/components/list/index.ts index 983a286f0..1cde25059 100644 --- a/packages/typescript/ai-devtools/src/components/list/index.ts +++ b/packages/typescript/ai-devtools/src/components/list/index.ts @@ -1 +1 @@ -export { ConversationRow } from "./ConversationRow"; +export { ConversationRow } from './ConversationRow' diff --git a/packages/typescript/ai-devtools/src/components/utils/format.ts b/packages/typescript/ai-devtools/src/components/utils/format.ts index 849cdf606..b0cf6136f 100644 --- a/packages/typescript/ai-devtools/src/components/utils/format.ts +++ b/packages/typescript/ai-devtools/src/components/utils/format.ts @@ -1,55 +1,61 @@ -import type { Chunk } from "../../store/ai-store"; - -export const formatTimestamp = (timestamp: number): string => { - const date = new Date(timestamp); - return date.toLocaleTimeString() + "." + date.getMilliseconds().toString().padStart(3, "0"); -}; - -export const formatDuration = (ms?: number): string => { - if (!ms) return "-"; - if (ms < 1000) return `${ms}ms`; - return `${(ms / 1000).toFixed(2)}s`; -}; - -export const getChunkTypeColor = (type: Chunk["type"]): string => { - switch (type) { - case "content": - return "#10b981"; // green - case "tool_call": - return "#8b5cf6"; // purple - case "tool_result": - return "#3b82f6"; // blue - case "done": - return "#6b7280"; // gray - case "error": - return "#ef4444"; // red - case "approval": - return "#f59e0b"; // orange/amber - default: - return "#6b7280"; // gray - } -}; - -export const getStatusColor = (status: "active" | "completed" | "error"): string => { - switch (status) { - case "active": - return "oklch(0.7 0.17 142)"; // green - case "completed": - return "oklch(0.65 0.1 260)"; // blue - case "error": - return "oklch(0.65 0.2 25)"; // red - default: - return "oklch(0.6 0.05 200)"; - } -}; - -export const getTypeColor = (type: "client" | "server"): string 
=> { - switch (type) { - case "client": - return "oklch(0.68 0.16 330)"; // pink - case "server": - return "oklch(0.68 0.15 280)"; // purple - default: - return "oklch(0.6 0.05 200)"; - } -}; +import type { Chunk } from '../../store/ai-store' + +export const formatTimestamp = (timestamp: number): string => { + const date = new Date(timestamp) + return ( + date.toLocaleTimeString() + + '.' + + date.getMilliseconds().toString().padStart(3, '0') + ) +} + +export const formatDuration = (ms?: number): string => { + if (!ms) return '-' + if (ms < 1000) return `${ms}ms` + return `${(ms / 1000).toFixed(2)}s` +} + +export const getChunkTypeColor = (type: Chunk['type']): string => { + switch (type) { + case 'content': + return '#10b981' // green + case 'tool_call': + return '#8b5cf6' // purple + case 'tool_result': + return '#3b82f6' // blue + case 'done': + return '#6b7280' // gray + case 'error': + return '#ef4444' // red + case 'approval': + return '#f59e0b' // orange/amber + default: + return '#6b7280' // gray + } +} + +export const getStatusColor = ( + status: 'active' | 'completed' | 'error', +): string => { + switch (status) { + case 'active': + return 'oklch(0.7 0.17 142)' // green + case 'completed': + return 'oklch(0.65 0.1 260)' // blue + case 'error': + return 'oklch(0.65 0.2 25)' // red + default: + return 'oklch(0.6 0.05 200)' + } +} + +export const getTypeColor = (type: 'client' | 'server'): string => { + switch (type) { + case 'client': + return 'oklch(0.68 0.16 330)' // pink + case 'server': + return 'oklch(0.68 0.15 280)' // purple + default: + return 'oklch(0.6 0.05 200)' + } +} diff --git a/packages/typescript/ai-devtools/src/components/utils/index.ts b/packages/typescript/ai-devtools/src/components/utils/index.ts index 704c53ac2..d1884f921 100644 --- a/packages/typescript/ai-devtools/src/components/utils/index.ts +++ b/packages/typescript/ai-devtools/src/components/utils/index.ts @@ -1 +1 @@ -export * from "./format"; +export * from './format' diff --git 
a/packages/typescript/ai-devtools/src/core.tsx b/packages/typescript/ai-devtools/src/core.tsx index a4eb67ee2..c0eedff68 100644 --- a/packages/typescript/ai-devtools/src/core.tsx +++ b/packages/typescript/ai-devtools/src/core.tsx @@ -1,10 +1,10 @@ -import { lazy } from "solid-js"; -import { constructCoreClass } from "@tanstack/devtools-utils/solid"; +import { lazy } from 'solid-js' +import { constructCoreClass } from '@tanstack/devtools-utils/solid' -const Component = lazy(() => import("./components/Shell")); +const Component = lazy(() => import('./components/Shell')) export interface AiDevtoolsInit {} -const [AiDevtoolsCore, AiDevtoolsCoreNoOp] = constructCoreClass(Component); +const [AiDevtoolsCore, AiDevtoolsCoreNoOp] = constructCoreClass(Component) -export { AiDevtoolsCore, AiDevtoolsCoreNoOp }; +export { AiDevtoolsCore, AiDevtoolsCoreNoOp } diff --git a/packages/typescript/ai-devtools/src/index.ts b/packages/typescript/ai-devtools/src/index.ts index 98bfec29b..a7956749c 100644 --- a/packages/typescript/ai-devtools/src/index.ts +++ b/packages/typescript/ai-devtools/src/index.ts @@ -1,5 +1,3 @@ -'use client' - import * as Devtools from './core' export const AiDevtoolsCore = diff --git a/packages/typescript/ai-devtools/src/production.ts b/packages/typescript/ai-devtools/src/production.ts index 9e36bba0a..b5020d047 100644 --- a/packages/typescript/ai-devtools/src/production.ts +++ b/packages/typescript/ai-devtools/src/production.ts @@ -1,5 +1,3 @@ -'use client' - export { AiDevtoolsCore } from './core' export type { AiDevtoolsInit } from './core' diff --git a/packages/typescript/ai-devtools/src/store/ai-context.tsx b/packages/typescript/ai-devtools/src/store/ai-context.tsx index be2d0a114..30f84d822 100644 --- a/packages/typescript/ai-devtools/src/store/ai-context.tsx +++ b/packages/typescript/ai-devtools/src/store/ai-context.tsx @@ -1,1221 +1,1412 @@ -import { createContext, useContext, onMount, onCleanup, ParentComponent, batch } from "solid-js"; -import { 
createStore, produce } from "solid-js/store"; -import { aiEventClient } from "@tanstack/ai/event-client"; - -export interface MessagePart { - type: "text" | "tool-call" | "tool-result"; - content?: string; - toolCallId?: string; - toolName?: string; - arguments?: string; - state?: string; - output?: unknown; - error?: string; -} - -export interface ToolCall { - id: string; - name: string; - arguments: string; - state: string; - result?: unknown; - approvalRequired?: boolean; - approvalId?: string; -} - -export interface TokenUsage { - promptTokens: number; - completionTokens: number; - totalTokens: number; -} - -export interface Message { - id: string; - role: "user" | "assistant" | "system"; - content: string; - timestamp: number; - parts?: MessagePart[]; - toolCalls?: ToolCall[]; - /** Consolidated chunks - consecutive same-type chunks are merged into one entry */ - chunks?: Chunk[]; - /** Total number of raw chunks received (before consolidation) */ - totalChunkCount?: number; - model?: string; - usage?: TokenUsage; - thinkingContent?: string; -} - -/** - * Consolidated chunk - represents one or more raw chunks of the same type. - * Consecutive content/thinking chunks are merged into a single entry with accumulated content. 
- */ -export interface Chunk { - id: string; - type: "content" | "tool_call" | "tool_result" | "done" | "error" | "approval" | "thinking"; - timestamp: number; - messageId?: string; - /** Accumulated content from all merged chunks */ - content?: string; - /** The last delta received (kept for debugging) */ - delta?: string; - toolName?: string; - toolCallId?: string; - finishReason?: string; - error?: string; - approvalId?: string; - input?: unknown; - /** Number of raw chunks that were merged into this consolidated chunk */ - chunkCount: number; -} - -export interface Conversation { - id: string; - type: "client" | "server"; - label: string; - messages: Message[]; - chunks: Chunk[]; - model?: string; - provider?: string; - status: "active" | "completed" | "error"; - startedAt: number; - completedAt?: number; - usage?: TokenUsage; - iterationCount?: number; - toolNames?: string[]; - options?: Record; - providerOptions?: Record; -} - -export interface AIStoreState { - conversations: Record; - activeConversationId: string | null; -} - -interface AIContextValue { - state: AIStoreState; - clearAllConversations: () => void; - selectConversation: (id: string) => void; -} - -const AIContext = createContext(); - -export function useAIStore(): AIContextValue { - const context = useContext(AIContext); - if (!context) { - throw new Error("useAIStore must be used within an AIProvider"); - } - return context; -} - -export const AIProvider: ParentComponent = (props) => { - const [state, setState] = createStore({ - conversations: {}, - activeConversationId: null, - }); - - const streamToConversation = new Map(); - const requestToConversation = new Map(); - - // Batching system for high-frequency chunk updates with consolidated chunk merging - // Stores: conversationId -> { chunks to merge, totalNewChunks count } - const pendingConversationChunks = new Map(); - // Stores: conversationId -> messageIndex -> { chunks to merge, totalNewChunks count } - const pendingMessageChunks = new 
Map>(); - let batchScheduled = false; - - function scheduleBatchFlush(): void { - if (batchScheduled) return; - batchScheduled = true; - queueMicrotask(flushPendingChunks); - } - - /** Check if a chunk type can be merged with consecutive same-type chunks */ - function isMergeableChunkType(type: Chunk["type"]): boolean { - return type === "content" || type === "thinking"; - } - - /** Merge pending chunks into existing chunks array, consolidating consecutive same-type chunks */ - function mergeChunks(existing: Chunk[], pending: Chunk[]): void { - for (const chunk of pending) { - const lastChunk = existing[existing.length - 1]; - - // If last chunk exists, is the same type, and both are mergeable types, merge them - if ( - lastChunk && - lastChunk.type === chunk.type && - isMergeableChunkType(chunk.type) && - lastChunk.messageId === chunk.messageId - ) { - // Merge: append content, update delta, increment count - lastChunk.content = (lastChunk.content || "") + (chunk.delta || chunk.content || ""); - lastChunk.delta = chunk.delta; - lastChunk.chunkCount += chunk.chunkCount; - } else { - // Different type or not mergeable - add as new entry - existing.push(chunk); - } - } - } - - function flushPendingChunks(): void { - batchScheduled = false; - - batch(() => { - // Flush conversation-level chunks - for (const [conversationId, { chunks, newChunkCount }] of pendingConversationChunks) { - const conv = state.conversations[conversationId]; - if (conv) { - setState( - "conversations", - conversationId, - "chunks", - produce((arr: Chunk[]) => { - mergeChunks(arr, chunks); - }) - ); - } - } - pendingConversationChunks.clear(); - - // Flush message-level chunks - for (const [conversationId, messageMap] of pendingMessageChunks) { - const conv = state.conversations[conversationId]; - if (!conv) continue; - - for (const [messageIndex, { chunks, newChunkCount }] of messageMap) { - const message = conv.messages[messageIndex]; - if (message) { - // Update chunks array with merging - 
setState( - "conversations", - conversationId, - "messages", - messageIndex, - "chunks", - produce((arr: Chunk[] | undefined) => { - if (!arr) { - // First time - just set the chunks (they're already consolidated in pending) - return chunks; - } - mergeChunks(arr, chunks); - return arr; - }) - ); - // Update total chunk count - const currentTotal = message.totalChunkCount || 0; - setState( - "conversations", - conversationId, - "messages", - messageIndex, - "totalChunkCount", - currentTotal + newChunkCount - ); - } - } - } - pendingMessageChunks.clear(); - }); - } - - function queueChunk(conversationId: string, chunk: Chunk): void { - if (!pendingConversationChunks.has(conversationId)) { - pendingConversationChunks.set(conversationId, { chunks: [], newChunkCount: 0 }); - } - const pending = pendingConversationChunks.get(conversationId)!; - - // Pre-merge in pending buffer to reduce array operations during flush - const lastPending = pending.chunks[pending.chunks.length - 1]; - if ( - lastPending && - lastPending.type === chunk.type && - isMergeableChunkType(chunk.type) && - lastPending.messageId === chunk.messageId - ) { - // Merge into pending buffer - lastPending.content = (lastPending.content || "") + (chunk.delta || chunk.content || ""); - lastPending.delta = chunk.delta; - lastPending.chunkCount += chunk.chunkCount; - } else { - pending.chunks.push(chunk); - } - pending.newChunkCount += chunk.chunkCount; - scheduleBatchFlush(); - } - - function queueMessageChunk(conversationId: string, messageIndex: number, chunk: Chunk): void { - if (!pendingMessageChunks.has(conversationId)) { - pendingMessageChunks.set(conversationId, new Map()); - } - const messageMap = pendingMessageChunks.get(conversationId)!; - if (!messageMap.has(messageIndex)) { - messageMap.set(messageIndex, { chunks: [], newChunkCount: 0 }); - } - const pending = messageMap.get(messageIndex)!; - - // Pre-merge in pending buffer - const lastPending = pending.chunks[pending.chunks.length - 1]; - if ( 
- lastPending && - lastPending.type === chunk.type && - isMergeableChunkType(chunk.type) && - lastPending.messageId === chunk.messageId - ) { - // Merge into pending buffer - lastPending.content = (lastPending.content || "") + (chunk.delta || chunk.content || ""); - lastPending.delta = chunk.delta; - lastPending.chunkCount += chunk.chunkCount; - } else { - pending.chunks.push(chunk); - } - pending.newChunkCount += chunk.chunkCount; - scheduleBatchFlush(); - } - - // Optimized helper functions using path-based updates - function getOrCreateConversation(id: string, type: "client" | "server", label: string): void { - if (!state.conversations[id]) { - setState("conversations", id, { - id, - type, - label, - messages: [], - chunks: [], - status: "active", - startedAt: Date.now(), - }); - if (!state.activeConversationId) { - setState("activeConversationId", id); - } - } - } - - function addMessage(conversationId: string, message: Message): void { - const conv = state.conversations[conversationId]; - if (!conv) return; - setState("conversations", conversationId, "messages", conv.messages.length, message); - } - - function addChunkToMessage(conversationId: string, chunk: Chunk): void { - const conv = state.conversations[conversationId]; - if (!conv) return; - - if (chunk.messageId) { - const messageIndex = conv.messages.findIndex((msg) => msg.id === chunk.messageId); - - if (messageIndex !== -1) { - queueMessageChunk(conversationId, messageIndex, chunk); - return; - } else { - // Create new message with the chunk - const newMessage: Message = { - id: chunk.messageId, - role: "assistant", - content: "", - timestamp: chunk.timestamp, - model: conv.model, - chunks: [chunk], - }; - setState("conversations", conversationId, "messages", conv.messages.length, newMessage); - return; - } - } - - // Find last assistant message - for (let i = conv.messages.length - 1; i >= 0; i--) { - const message = conv.messages[i]; - if (message && message.role === "assistant") { - 
queueMessageChunk(conversationId, i, chunk); - return; - } - } - } - - function updateMessageUsage(conversationId: string, messageId: string | undefined, usage: TokenUsage): void { - const conv = state.conversations[conversationId]; - if (!conv) return; - - if (messageId) { - const messageIndex = conv.messages.findIndex((msg) => msg.id === messageId); - if (messageIndex !== -1) { - setState("conversations", conversationId, "messages", messageIndex, "usage", usage); - return; - } - } - - for (let i = conv.messages.length - 1; i >= 0; i--) { - const message = conv.messages[i]; - if (message && message.role === "assistant") { - setState("conversations", conversationId, "messages", i, "usage", usage); - return; - } - } - } - - // Public actions - function clearAllConversations() { - setState("conversations", {}); - setState("activeConversationId", null); - streamToConversation.clear(); - requestToConversation.clear(); - pendingConversationChunks.clear(); - pendingMessageChunks.clear(); - } - - function selectConversation(id: string) { - setState("activeConversationId", id); - } - - // Additional optimized helper functions - function updateConversation(conversationId: string, updates: Partial): void { - if (!state.conversations[conversationId]) return; - for (const [key, value] of Object.entries(updates)) { - setState("conversations", conversationId, key as keyof Conversation, value as Conversation[keyof Conversation]); - } - } - - function updateMessage(conversationId: string, messageIndex: number, updates: Partial): void { - const conv = state.conversations[conversationId]; - if (!conv || !conv.messages[messageIndex]) return; - for (const [key, value] of Object.entries(updates)) { - setState( - "conversations", - conversationId, - "messages", - messageIndex, - key as keyof Message, - value as Message[keyof Message] - ); - } - } - - function updateToolCall( - conversationId: string, - messageIndex: number, - toolCallIndex: number, - updates: Partial - ): void { - const 
conv = state.conversations[conversationId]; - if (!conv?.messages[messageIndex]?.toolCalls?.[toolCallIndex]) return; - setState( - "conversations", - conversationId, - "messages", - messageIndex, - "toolCalls", - toolCallIndex, - produce((tc: ToolCall) => Object.assign(tc, updates)) - ); - } - - function setToolCalls(conversationId: string, messageIndex: number, toolCalls: ToolCall[]): void { - if (!state.conversations[conversationId]?.messages[messageIndex]) return; - setState("conversations", conversationId, "messages", messageIndex, "toolCalls", toolCalls); - } - - function addChunk(conversationId: string, chunk: Chunk): void { - if (!state.conversations[conversationId]) return; - queueChunk(conversationId, chunk); - } - - // Register all event listeners on mount - onMount(() => { - const cleanupFns: Array<() => void> = []; - - // ============= Client Events ============= - - cleanupFns.push( - aiEventClient.on( - "client:created", - (e) => { - const clientId = e.payload.clientId; - getOrCreateConversation(clientId, "client", `Client Chat (${clientId.substring(0, 8)})`); - updateConversation(clientId, { model: undefined, provider: "Client" }); - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "client:message-sent", - (e) => { - const clientId = e.payload.clientId; - if (!state.conversations[clientId]) { - getOrCreateConversation(clientId, "client", `Client Chat (${clientId.substring(0, 8)})`); - } - addMessage(clientId, { - id: e.payload.messageId, - role: "user", - content: e.payload.content, - timestamp: e.payload.timestamp, - }); - updateConversation(clientId, { status: "active" }); - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "client:message-appended", - (e) => { - const clientId = e.payload.clientId; - const role = e.payload.role; - - if (role === "user") return; - if (!state.conversations[clientId]) return; - - if (role === "assistant") { - addMessage(clientId, { - id: 
e.payload.messageId, - role: "assistant", - content: e.payload.contentPreview, - timestamp: e.payload.timestamp, - }); - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "client:loading-changed", - (e) => { - const clientId = e.payload.clientId; - if (state.conversations[clientId]) { - updateConversation(clientId, { status: e.payload.isLoading ? "active" : "completed" }); - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "client:stopped", - (e) => { - const clientId = e.payload.clientId; - if (state.conversations[clientId]) { - updateConversation(clientId, { status: "completed", completedAt: e.payload.timestamp }); - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "client:messages-cleared", - (e) => { - const clientId = e.payload.clientId; - if (state.conversations[clientId]) { - updateConversation(clientId, { messages: [], chunks: [], usage: undefined }); - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "client:reloaded", - (e) => { - const clientId = e.payload.clientId; - const conv = state.conversations[clientId]; - if (conv) { - updateConversation(clientId, { - messages: conv.messages.slice(0, e.payload.fromMessageIndex), - status: "active", - }); - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "client:error-changed", - (e) => { - const clientId = e.payload.clientId; - if (state.conversations[clientId] && e.payload.error) { - updateConversation(clientId, { status: "error" }); - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "client:assistant-message-updated", - (e) => { - const clientId = e.payload.clientId; - const messageId = e.payload.messageId; - const content = e.payload.content; - - if (!state.conversations[clientId]) return; - - const conv = state.conversations[clientId]; - const lastMessage = 
conv.messages[conv.messages.length - 1]; - - if (lastMessage && lastMessage.role === "assistant" && lastMessage.id === messageId) { - updateMessage(clientId, conv.messages.length - 1, { content, model: conv.model }); - } else { - addMessage(clientId, { - id: messageId, - role: "assistant", - content: content, - timestamp: e.payload.timestamp, - model: conv.model, - chunks: [], - }); - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "client:tool-call-updated", - (e) => { - const { - clientId, - messageId, - toolCallId, - toolName, - state: toolCallState, - arguments: args, - } = e.payload as { - clientId: string; - messageId: string; - toolCallId: string; - toolName: string; - state: string; - arguments: unknown; - timestamp: number; - }; - - if (!state.conversations[clientId]) return; - - const conv = state.conversations[clientId]; - const messageIndex = conv.messages.findIndex((m: Message) => m.id === messageId); - if (messageIndex === -1) return; - - const message = conv.messages[messageIndex]; - if (!message) return; - - const toolCalls = message.toolCalls || []; - const existingToolIndex = toolCalls.findIndex((t: ToolCall) => t.id === toolCallId); - - const toolCall: ToolCall = { - id: toolCallId, - name: toolName, - arguments: JSON.stringify(args, null, 2), - state: toolCallState, - }; - - if (existingToolIndex >= 0) { - updateToolCall(clientId, messageIndex, existingToolIndex, toolCall); - } else { - setToolCalls(clientId, messageIndex, [...toolCalls, toolCall]); - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "client:approval-requested", - (e) => { - const { clientId, messageId, toolCallId, approvalId } = e.payload; - - if (!state.conversations[clientId]) return; - - const conv = state.conversations[clientId]; - const messageIndex = conv.messages.findIndex((m) => m.id === messageId); - if (messageIndex === -1) return; - - const message = conv.messages[messageIndex]; - if 
(!message?.toolCalls) return; - - const toolCallIndex = message.toolCalls.findIndex((t) => t.id === toolCallId); - if (toolCallIndex === -1) return; - - updateToolCall(clientId, messageIndex, toolCallIndex, { - approvalRequired: true, - approvalId, - state: "approval-requested", - }); - }, - { withEventTarget: false } - ) - ); - - // ============= Tool Events ============= - - cleanupFns.push( - aiEventClient.on( - "tool:result-added", - (e) => { - const { clientId, toolCallId, output, state: resultState } = e.payload; - - if (!state.conversations[clientId]) return; - - const conv = state.conversations[clientId]; - - for (let messageIndex = conv.messages.length - 1; messageIndex >= 0; messageIndex--) { - const message = conv.messages[messageIndex]; - if (!message?.toolCalls) continue; - - const toolCallIndex = message.toolCalls.findIndex((t: ToolCall) => t.id === toolCallId); - if (toolCallIndex >= 0) { - updateToolCall(clientId, messageIndex, toolCallIndex, { - result: output, - state: resultState === "output-error" ? "error" : "complete", - }); - return; - } - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "tool:approval-responded", - (e) => { - const { clientId, toolCallId, approved } = e.payload; - - if (!state.conversations[clientId]) return; - - const conv = state.conversations[clientId]; - - for (let messageIndex = conv.messages.length - 1; messageIndex >= 0; messageIndex--) { - const message = conv.messages[messageIndex]; - if (!message?.toolCalls) continue; - - const toolCallIndex = message.toolCalls.findIndex((t: ToolCall) => t.id === toolCallId); - if (toolCallIndex >= 0) { - updateToolCall(clientId, messageIndex, toolCallIndex, { - state: approved ? 
"approved" : "denied", - approvalRequired: false, - }); - return; - } - } - }, - { withEventTarget: false } - ) - ); - - // ============= Stream Events ============= - - cleanupFns.push( - aiEventClient.on( - "stream:started", - (e) => { - const streamId = e.payload.streamId; - const model = e.payload.model; - const provider = e.payload.provider; - const clientId = e.payload.clientId; - - if (clientId && state.conversations[clientId]) { - streamToConversation.set(streamId, clientId); - updateConversation(clientId, { model, provider, status: "active" }); - return; - } - - const activeClient = Object.values(state.conversations).find( - (c) => c.type === "client" && c.status === "active" && !c.model - ); - - if (activeClient) { - streamToConversation.set(streamId, activeClient.id); - updateConversation(activeClient.id, { model, provider }); - } else { - const existingServerConv = Object.values(state.conversations).find( - (c) => c.type === "server" && c.model === model - ); - - if (existingServerConv) { - streamToConversation.set(streamId, existingServerConv.id); - updateConversation(existingServerConv.id, { status: "active" }); - } else { - const serverId = `server-${model}`; - getOrCreateConversation(serverId, "server", `${model} Server`); - streamToConversation.set(streamId, serverId); - updateConversation(serverId, { model, provider }); - } - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "stream:chunk:content", - (e) => { - const streamId = e.payload.streamId; - const conversationId = streamToConversation.get(streamId); - if (!conversationId) return; - - const chunk: Chunk = { - id: `chunk-${Date.now()}-${Math.random()}`, - type: "content", - messageId: e.payload.messageId, - content: e.payload.content, - delta: e.payload.delta, - timestamp: e.payload.timestamp, - chunkCount: 1, - }; - - const conv = state.conversations[conversationId]; - if (conv?.type === "client") { - addChunkToMessage(conversationId, chunk); - } else { 
- addChunk(conversationId, chunk); - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "stream:chunk:tool-call", - (e) => { - const streamId = e.payload.streamId; - const conversationId = streamToConversation.get(streamId); - if (!conversationId) return; - - const chunk: Chunk = { - id: `chunk-${Date.now()}-${Math.random()}`, - type: "tool_call", - messageId: e.payload.messageId, - toolCallId: e.payload.toolCallId, - toolName: e.payload.toolName, - timestamp: e.payload.timestamp, - chunkCount: 1, - }; - - const conv = state.conversations[conversationId]; - if (conv?.type === "client") { - addChunkToMessage(conversationId, chunk); - } else { - addChunk(conversationId, chunk); - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "stream:chunk:tool-result", - (e) => { - const streamId = e.payload.streamId; - const conversationId = streamToConversation.get(streamId); - if (!conversationId) return; - - const chunk: Chunk = { - id: `chunk-${Date.now()}-${Math.random()}`, - type: "tool_result", - messageId: e.payload.messageId, - toolCallId: e.payload.toolCallId, - content: e.payload.result, - timestamp: e.payload.timestamp, - chunkCount: 1, - }; - - const conv = state.conversations[conversationId]; - if (conv?.type === "client") { - addChunkToMessage(conversationId, chunk); - } else { - addChunk(conversationId, chunk); - } - - // Also update the toolCalls array with the result - if (conv && e.payload.toolCallId) { - for (let i = conv.messages.length - 1; i >= 0; i--) { - const message = conv.messages[i]; - if (!message?.toolCalls) continue; - - const toolCallIndex = message.toolCalls.findIndex((t) => t.id === e.payload.toolCallId); - if (toolCallIndex >= 0) { - updateToolCall(conversationId, i, toolCallIndex, { - result: e.payload.result, - state: "complete", - }); - break; - } - } - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "stream:chunk:thinking", - 
(e) => { - const streamId = e.payload.streamId; - const conversationId = streamToConversation.get(streamId); - if (!conversationId) return; - - const chunk: Chunk = { - id: `chunk-${Date.now()}-${Math.random()}`, - type: "thinking", - messageId: e.payload.messageId, - content: e.payload.content, - delta: e.payload.delta, - timestamp: e.payload.timestamp, - chunkCount: 1, - }; - - const conv = state.conversations[conversationId]; - if (conv?.type === "client") { - addChunkToMessage(conversationId, chunk); - - if (e.payload.messageId) { - const messageIndex = conv.messages.findIndex((msg) => msg.id === e.payload.messageId); - if (messageIndex !== -1) { - updateMessage(conversationId, messageIndex, { thinkingContent: e.payload.content }); - } - } - } else { - addChunk(conversationId, chunk); - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "stream:chunk:done", - (e) => { - const streamId = e.payload.streamId; - const conversationId = streamToConversation.get(streamId); - if (!conversationId) return; - - const chunk: Chunk = { - id: `chunk-${Date.now()}-${Math.random()}`, - type: "done", - messageId: e.payload.messageId, - finishReason: e.payload.finishReason || undefined, - timestamp: e.payload.timestamp, - chunkCount: 1, - }; - - if (e.payload.usage) { - updateConversation(conversationId, { usage: e.payload.usage }); - updateMessageUsage(conversationId, e.payload.messageId, e.payload.usage); - } - - const conv = state.conversations[conversationId]; - if (conv?.type === "client") { - addChunkToMessage(conversationId, chunk); - } else { - addChunk(conversationId, chunk); - } - - // Mark as completed when we receive a done chunk with a terminal finish reason - const finishReason = e.payload.finishReason; - if (finishReason === "stop" || finishReason === "end_turn" || finishReason === "length") { - updateConversation(conversationId, { status: "completed", completedAt: e.payload.timestamp }); - } - }, - { withEventTarget: false } - 
) - ); - - cleanupFns.push( - aiEventClient.on( - "stream:chunk:error", - (e) => { - const streamId = e.payload.streamId; - const conversationId = streamToConversation.get(streamId); - if (!conversationId) return; - - const chunk: Chunk = { - id: `chunk-${Date.now()}-${Math.random()}`, - type: "error", - messageId: e.payload.messageId, - error: e.payload.error, - timestamp: e.payload.timestamp, - chunkCount: 1, - }; - - const conv = state.conversations[conversationId]; - if (conv?.type === "client") { - addChunkToMessage(conversationId, chunk); - } else { - addChunk(conversationId, chunk); - } - - updateConversation(conversationId, { status: "error", completedAt: e.payload.timestamp }); - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "stream:ended", - (e) => { - const streamId = e.payload.streamId; - const conversationId = streamToConversation.get(streamId); - if (!conversationId) return; - - updateConversation(conversationId, { status: "completed", completedAt: e.payload.timestamp }); - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "stream:approval-requested", - (e) => { - const { streamId, messageId, toolCallId, toolName, input, approvalId, timestamp } = e.payload; - - const conversationId = streamToConversation.get(streamId); - if (!conversationId) return; - - const conv = state.conversations[conversationId]; - if (!conv) return; - - const chunk: Chunk = { - id: `chunk-${Date.now()}-${Math.random()}`, - type: "approval", - messageId: messageId, - toolCallId, - toolName, - approvalId, - input, - timestamp, - chunkCount: 1, - }; - - if (conv.type === "client") { - addChunkToMessage(conversationId, chunk); - } else { - addChunk(conversationId, chunk); - } - - for (let i = conv.messages.length - 1; i >= 0; i--) { - const message = conv.messages[i]; - if (!message) continue; - - if (message.role === "assistant" && message.toolCalls) { - const toolCallIndex = message.toolCalls.findIndex((t: 
ToolCall) => t.id === toolCallId); - if (toolCallIndex >= 0) { - updateToolCall(conversationId, i, toolCallIndex, { - approvalRequired: true, - approvalId, - state: "approval-requested", - }); - return; - } - } - } - }, - { withEventTarget: false } - ) - ); - - // ============= Processor Events ============= - - cleanupFns.push( - aiEventClient.on( - "processor:text-updated", - (e) => { - const streamId = e.payload.streamId; - - let conversationId = streamToConversation.get(streamId); - - if (!conversationId) { - const activeClients = Object.values(state.conversations) - .filter((c) => c.type === "client" && c.status === "active") - .sort((a, b) => b.startedAt - a.startedAt); - - if (activeClients.length > 0 && activeClients[0]) { - conversationId = activeClients[0].id; - streamToConversation.set(streamId, conversationId); - } - } - - if (!conversationId) return; - - const conv = state.conversations[conversationId]; - if (!conv) return; - - const lastMessage = conv.messages[conv.messages.length - 1]; - if (lastMessage && lastMessage.role === "assistant") { - updateMessage(conversationId, conv.messages.length - 1, { content: e.payload.content }); - } else { - addMessage(conversationId, { - id: `msg-assistant-${Date.now()}`, - role: "assistant", - content: e.payload.content, - timestamp: e.payload.timestamp, - }); - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "processor:tool-call-state-changed", - (e) => { - const streamId = e.payload.streamId; - const conversationId = streamToConversation.get(streamId); - - if (!conversationId || !state.conversations[conversationId]) return; - - const conv = state.conversations[conversationId]; - const lastMessage = conv.messages[conv.messages.length - 1]; - - if (lastMessage && lastMessage.role === "assistant") { - const toolCalls = lastMessage.toolCalls || []; - const existingToolIndex = toolCalls.findIndex((t) => t.id === e.payload.toolCallId); - - const toolCall: ToolCall = { - id: 
e.payload.toolCallId, - name: e.payload.toolName, - arguments: JSON.stringify(e.payload.arguments, null, 2), - state: e.payload.state, - }; - - if (existingToolIndex >= 0) { - updateToolCall(conversationId, conv.messages.length - 1, existingToolIndex, toolCall); - } else { - setToolCalls(conversationId, conv.messages.length - 1, [...toolCalls, toolCall]); - } - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "processor:tool-result-state-changed", - (e) => { - const streamId = e.payload.streamId; - const conversationId = streamToConversation.get(streamId); - - if (!conversationId || !state.conversations[conversationId]) return; - - const conv = state.conversations[conversationId]; - - for (let i = conv.messages.length - 1; i >= 0; i--) { - const message = conv.messages[i]; - if (!message?.toolCalls) continue; - - const toolCallIndex = message.toolCalls.findIndex((t) => t.id === e.payload.toolCallId); - if (toolCallIndex >= 0) { - updateToolCall(conversationId, i, toolCallIndex, { - result: e.payload.content, - state: e.payload.error ? 
"error" : e.payload.state, - }); - return; - } - } - }, - { withEventTarget: false } - ) - ); - - // ============= Chat Events (for usage tracking) ============= - - cleanupFns.push( - aiEventClient.on( - "chat:started", - (e) => { - const { requestId, model, clientId, toolNames, options, providerOptions } = e.payload; - - if (clientId && state.conversations[clientId]) { - requestToConversation.set(requestId, clientId); - updateConversation(clientId, { model, status: "active", toolNames, options, providerOptions }); - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "chat:completed", - (e) => { - const { requestId, usage } = e.payload; - - const conversationId = requestToConversation.get(requestId); - if (conversationId && state.conversations[conversationId] && usage) { - updateConversation(conversationId, { usage }); - updateMessageUsage(conversationId, undefined, usage); - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "chat:iteration", - (e) => { - const { requestId, iterationNumber } = e.payload; - - const conversationId = requestToConversation.get(requestId); - if (conversationId && state.conversations[conversationId]) { - updateConversation(conversationId, { iterationCount: iterationNumber }); - } - }, - { withEventTarget: false } - ) - ); - - cleanupFns.push( - aiEventClient.on( - "usage:tokens", - (e) => { - const { requestId, usage, messageId } = e.payload; - - const conversationId = requestToConversation.get(requestId); - if (conversationId && state.conversations[conversationId]) { - updateConversation(conversationId, { usage }); - updateMessageUsage(conversationId, messageId, usage); - } - }, - { withEventTarget: false } - ) - ); - - // Cleanup all listeners on unmount - onCleanup(() => { - for (const cleanup of cleanupFns) { - cleanup(); - } - streamToConversation.clear(); - requestToConversation.clear(); - }); - }); - - const contextValue: AIContextValue = { - state, - 
clearAllConversations, - selectConversation, - }; - - return <AIContext.Provider value={contextValue}>{props.children}</AIContext.Provider>; -}; +import { batch, createContext, onCleanup, onMount, useContext } from 'solid-js' +import { createStore, produce } from 'solid-js/store' +import { aiEventClient } from '@tanstack/ai/event-client' +import type { ParentComponent } from 'solid-js' + +interface MessagePart { + type: 'text' | 'tool-call' | 'tool-result' + content?: string + toolCallId?: string + toolName?: string + arguments?: string + state?: string + output?: unknown + error?: string +} + +export interface ToolCall { + id: string + name: string + arguments: string + state: string + result?: unknown + approvalRequired?: boolean + approvalId?: string +} + +interface TokenUsage { + promptTokens: number + completionTokens: number + totalTokens: number +} + +export interface Message { + id: string + role: 'user' | 'assistant' | 'system' + content: string + timestamp: number + parts?: Array<MessagePart> + toolCalls?: Array<ToolCall> + /** Consolidated chunks - consecutive same-type chunks are merged into one entry */ + chunks?: Array<Chunk> + /** Total number of raw chunks received (before consolidation) */ + totalChunkCount?: number + model?: string + usage?: TokenUsage + thinkingContent?: string +} + +/** + * Consolidated chunk - represents one or more raw chunks of the same type. + * Consecutive content/thinking chunks are merged into a single entry with accumulated content.
+ */ +export interface Chunk { + id: string + type: + | 'content' + | 'tool_call' + | 'tool_result' + | 'done' + | 'error' + | 'approval' + | 'thinking' + timestamp: number + messageId?: string + /** Accumulated content from all merged chunks */ + content?: string + /** The last delta received (kept for debugging) */ + delta?: string + toolName?: string + toolCallId?: string + finishReason?: string + error?: string + approvalId?: string + input?: unknown + /** Number of raw chunks that were merged into this consolidated chunk */ + chunkCount: number +} + +export interface Conversation { + id: string + type: 'client' | 'server' + label: string + messages: Array<Message> + chunks: Array<Chunk> + model?: string + provider?: string + status: 'active' | 'completed' | 'error' + startedAt: number + completedAt?: number + usage?: TokenUsage + iterationCount?: number + toolNames?: Array<string> + options?: Record<string, unknown> + providerOptions?: Record<string, unknown> +} + +interface AIStoreState { + conversations: Record<string, Conversation> + activeConversationId: string | null +} + +interface AIContextValue { + state: AIStoreState + clearAllConversations: () => void + selectConversation: (id: string) => void +} + +const AIContext = createContext<AIContextValue>() + +export function useAIStore(): AIContextValue { + const context = useContext(AIContext) + if (!context) { + throw new Error('useAIStore must be used within an AIProvider') + } + return context +} + +export const AIProvider: ParentComponent = (props) => { + const [state, setState] = createStore<AIStoreState>({ + conversations: {}, + activeConversationId: null, + }) + + const streamToConversation = new Map<string, string>() + const requestToConversation = new Map<string, string>() + + // Batching system for high-frequency chunk updates with consolidated chunk merging + // Stores: conversationId -> { chunks to merge, totalNewChunks count } + const pendingConversationChunks = new Map< + string, + { chunks: Array<Chunk>; newChunkCount: number } + >() + // Stores: conversationId -> messageIndex -> { chunks to merge, totalNewChunks count } + const 
pendingMessageChunks = new Map< + string, + Map<number, { chunks: Array<Chunk>; newChunkCount: number }> + >() + let batchScheduled = false + + function scheduleBatchFlush(): void { + if (batchScheduled) return + batchScheduled = true + queueMicrotask(flushPendingChunks) + } + + /** Check if a chunk type can be merged with consecutive same-type chunks */ + function isMergeableChunkType(type: Chunk['type']): boolean { + return type === 'content' || type === 'thinking' + } + + /** Merge pending chunks into existing chunks array, consolidating consecutive same-type chunks */ + function mergeChunks(existing: Array<Chunk>, pending: Array<Chunk>): void { + for (const chunk of pending) { + const lastChunk = existing[existing.length - 1] + + // If last chunk exists, is the same type, and both are mergeable types, merge them + if ( + lastChunk && + lastChunk.type === chunk.type && + isMergeableChunkType(chunk.type) && + lastChunk.messageId === chunk.messageId + ) { + // Merge: append content, update delta, increment count + lastChunk.content = + (lastChunk.content || '') + (chunk.delta || chunk.content || '') + lastChunk.delta = chunk.delta + lastChunk.chunkCount += chunk.chunkCount + } else { + // Different type or not mergeable - add as new entry + existing.push(chunk) + } + } + } + + function flushPendingChunks(): void { + batchScheduled = false + + batch(() => { + // Flush conversation-level chunks + for (const [ + conversationId, + { chunks, newChunkCount }, + ] of pendingConversationChunks) { + const conv = state.conversations[conversationId] + if (conv) { + setState( + 'conversations', + conversationId, + 'chunks', + produce((arr: Array<Chunk>) => { + mergeChunks(arr, chunks) + }), + ) + } + } + pendingConversationChunks.clear() + + // Flush message-level chunks + for (const [conversationId, messageMap] of pendingMessageChunks) { + const conv = state.conversations[conversationId] + if (!conv) continue + + for (const [messageIndex, { chunks, newChunkCount }] of messageMap) { + const message = 
conv.messages[messageIndex] + if (message) { + // Update chunks array with merging + setState( + 'conversations', + conversationId, + 'messages', + messageIndex, + 'chunks', + produce((arr: Array<Chunk> | undefined) => { + if (!arr) { + // First time - just set the chunks (they're already consolidated in pending) + return chunks + } + mergeChunks(arr, chunks) + return arr + }), + ) + // Update total chunk count + const currentTotal = message.totalChunkCount || 0 + setState( + 'conversations', + conversationId, + 'messages', + messageIndex, + 'totalChunkCount', + currentTotal + newChunkCount, + ) + } + } + } + pendingMessageChunks.clear() + }) + } + + function queueChunk(conversationId: string, chunk: Chunk): void { + if (!pendingConversationChunks.has(conversationId)) { + pendingConversationChunks.set(conversationId, { + chunks: [], + newChunkCount: 0, + }) + } + const pending = pendingConversationChunks.get(conversationId)! + + // Pre-merge in pending buffer to reduce array operations during flush + const lastPending = pending.chunks[pending.chunks.length - 1] + if ( + lastPending && + lastPending.type === chunk.type && + isMergeableChunkType(chunk.type) && + lastPending.messageId === chunk.messageId + ) { + // Merge into pending buffer + lastPending.content = + (lastPending.content || '') + (chunk.delta || chunk.content || '') + lastPending.delta = chunk.delta + lastPending.chunkCount += chunk.chunkCount + } else { + pending.chunks.push(chunk) + } + pending.newChunkCount += chunk.chunkCount + scheduleBatchFlush() + } + + function queueMessageChunk( + conversationId: string, + messageIndex: number, + chunk: Chunk, + ): void { + if (!pendingMessageChunks.has(conversationId)) { + pendingMessageChunks.set(conversationId, new Map()) + } + const messageMap = pendingMessageChunks.get(conversationId)! + if (!messageMap.has(messageIndex)) { + messageMap.set(messageIndex, { chunks: [], newChunkCount: 0 }) + } + const pending = messageMap.get(messageIndex)! 
+ + // Pre-merge in pending buffer + const lastPending = pending.chunks[pending.chunks.length - 1] + if ( + lastPending && + lastPending.type === chunk.type && + isMergeableChunkType(chunk.type) && + lastPending.messageId === chunk.messageId + ) { + // Merge into pending buffer + lastPending.content = + (lastPending.content || '') + (chunk.delta || chunk.content || '') + lastPending.delta = chunk.delta + lastPending.chunkCount += chunk.chunkCount + } else { + pending.chunks.push(chunk) + } + pending.newChunkCount += chunk.chunkCount + scheduleBatchFlush() + } + + // Optimized helper functions using path-based updates + function getOrCreateConversation( + id: string, + type: 'client' | 'server', + label: string, + ): void { + if (!state.conversations[id]) { + setState('conversations', id, { + id, + type, + label, + messages: [], + chunks: [], + status: 'active', + startedAt: Date.now(), + }) + if (!state.activeConversationId) { + setState('activeConversationId', id) + } + } + } + + function addMessage(conversationId: string, message: Message): void { + const conv = state.conversations[conversationId] + if (!conv) return + setState( + 'conversations', + conversationId, + 'messages', + conv.messages.length, + message, + ) + } + + function addChunkToMessage(conversationId: string, chunk: Chunk): void { + const conv = state.conversations[conversationId] + if (!conv) return + + if (chunk.messageId) { + const messageIndex = conv.messages.findIndex( + (msg) => msg.id === chunk.messageId, + ) + + if (messageIndex !== -1) { + queueMessageChunk(conversationId, messageIndex, chunk) + return + } else { + // Create new message with the chunk + const newMessage: Message = { + id: chunk.messageId, + role: 'assistant', + content: '', + timestamp: chunk.timestamp, + model: conv.model, + chunks: [chunk], + } + setState( + 'conversations', + conversationId, + 'messages', + conv.messages.length, + newMessage, + ) + return + } + } + + // Find last assistant message + for (let i = 
conv.messages.length - 1; i >= 0; i--) { + const message = conv.messages[i] + if (message && message.role === 'assistant') { + queueMessageChunk(conversationId, i, chunk) + return + } + } + } + + function updateMessageUsage( + conversationId: string, + messageId: string | undefined, + usage: TokenUsage, + ): void { + const conv = state.conversations[conversationId] + if (!conv) return + + if (messageId) { + const messageIndex = conv.messages.findIndex( + (msg) => msg.id === messageId, + ) + if (messageIndex !== -1) { + setState( + 'conversations', + conversationId, + 'messages', + messageIndex, + 'usage', + usage, + ) + return + } + } + + for (let i = conv.messages.length - 1; i >= 0; i--) { + const message = conv.messages[i] + if (message && message.role === 'assistant') { + setState('conversations', conversationId, 'messages', i, 'usage', usage) + return + } + } + } + + // Public actions + function clearAllConversations() { + setState('conversations', {}) + setState('activeConversationId', null) + streamToConversation.clear() + requestToConversation.clear() + pendingConversationChunks.clear() + pendingMessageChunks.clear() + } + + function selectConversation(id: string) { + setState('activeConversationId', id) + } + + // Additional optimized helper functions + function updateConversation( + conversationId: string, + updates: Partial<Conversation>, + ): void { + if (!state.conversations[conversationId]) return + for (const [key, value] of Object.entries(updates)) { + setState( + 'conversations', + conversationId, + key as keyof Conversation, + value as Conversation[keyof Conversation], + ) + } + } + + function updateMessage( + conversationId: string, + messageIndex: number, + updates: Partial<Message>, + ): void { + const conv = state.conversations[conversationId] + if (!conv || !conv.messages[messageIndex]) return + for (const [key, value] of Object.entries(updates)) { + setState( + 'conversations', + conversationId, + 'messages', + messageIndex, + key as keyof Message, + value as 
Message[keyof Message], + ) + } + } + + function updateToolCall( + conversationId: string, + messageIndex: number, + toolCallIndex: number, + updates: Partial<ToolCall>, + ): void { + const conv = state.conversations[conversationId] + if (!conv?.messages[messageIndex]?.toolCalls?.[toolCallIndex]) return + setState( + 'conversations', + conversationId, + 'messages', + messageIndex, + 'toolCalls', + toolCallIndex, + produce((tc: ToolCall) => Object.assign(tc, updates)), + ) + } + + function setToolCalls( + conversationId: string, + messageIndex: number, + toolCalls: Array<ToolCall>, + ): void { + if (!state.conversations[conversationId]?.messages[messageIndex]) return + setState( + 'conversations', + conversationId, + 'messages', + messageIndex, + 'toolCalls', + toolCalls, + ) + } + + function addChunk(conversationId: string, chunk: Chunk): void { + if (!state.conversations[conversationId]) return + queueChunk(conversationId, chunk) + } + + // Register all event listeners on mount + onMount(() => { + const cleanupFns: Array<() => void> = [] + + // ============= Client Events ============= + + cleanupFns.push( + aiEventClient.on( + 'client:created', + (e) => { + const clientId = e.payload.clientId + getOrCreateConversation( + clientId, + 'client', + `Client Chat (${clientId.substring(0, 8)})`, + ) + updateConversation(clientId, { model: undefined, provider: 'Client' }) + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'client:message-sent', + (e) => { + const clientId = e.payload.clientId + if (!state.conversations[clientId]) { + getOrCreateConversation( + clientId, + 'client', + `Client Chat (${clientId.substring(0, 8)})`, + ) + } + addMessage(clientId, { + id: e.payload.messageId, + role: 'user', + content: e.payload.content, + timestamp: e.payload.timestamp, + }) + updateConversation(clientId, { status: 'active' }) + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'client:message-appended', + (e) => { + const 
clientId = e.payload.clientId + const role = e.payload.role + + if (role === 'user') return + if (!state.conversations[clientId]) return + + if (role === 'assistant') { + addMessage(clientId, { + id: e.payload.messageId, + role: 'assistant', + content: e.payload.contentPreview, + timestamp: e.payload.timestamp, + }) + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'client:loading-changed', + (e) => { + const clientId = e.payload.clientId + if (state.conversations[clientId]) { + updateConversation(clientId, { + status: e.payload.isLoading ? 'active' : 'completed', + }) + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'client:stopped', + (e) => { + const clientId = e.payload.clientId + if (state.conversations[clientId]) { + updateConversation(clientId, { + status: 'completed', + completedAt: e.payload.timestamp, + }) + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'client:messages-cleared', + (e) => { + const clientId = e.payload.clientId + if (state.conversations[clientId]) { + updateConversation(clientId, { + messages: [], + chunks: [], + usage: undefined, + }) + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'client:reloaded', + (e) => { + const clientId = e.payload.clientId + const conv = state.conversations[clientId] + if (conv) { + updateConversation(clientId, { + messages: conv.messages.slice(0, e.payload.fromMessageIndex), + status: 'active', + }) + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'client:error-changed', + (e) => { + const clientId = e.payload.clientId + if (state.conversations[clientId] && e.payload.error) { + updateConversation(clientId, { status: 'error' }) + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'client:assistant-message-updated', + (e) => { + const clientId = e.payload.clientId + 
const messageId = e.payload.messageId + const content = e.payload.content + + if (!state.conversations[clientId]) return + + const conv = state.conversations[clientId] + const lastMessage = conv.messages[conv.messages.length - 1] + + if ( + lastMessage && + lastMessage.role === 'assistant' && + lastMessage.id === messageId + ) { + updateMessage(clientId, conv.messages.length - 1, { + content, + model: conv.model, + }) + } else { + addMessage(clientId, { + id: messageId, + role: 'assistant', + content: content, + timestamp: e.payload.timestamp, + model: conv.model, + chunks: [], + }) + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'client:tool-call-updated', + (e) => { + const { + clientId, + messageId, + toolCallId, + toolName, + state: toolCallState, + arguments: args, + } = e.payload as { + clientId: string + messageId: string + toolCallId: string + toolName: string + state: string + arguments: unknown + timestamp: number + } + + if (!state.conversations[clientId]) return + + const conv = state.conversations[clientId] + const messageIndex = conv.messages.findIndex( + (m: Message) => m.id === messageId, + ) + if (messageIndex === -1) return + + const message = conv.messages[messageIndex] + if (!message) return + + const toolCalls = message.toolCalls || [] + const existingToolIndex = toolCalls.findIndex( + (t: ToolCall) => t.id === toolCallId, + ) + + const toolCall: ToolCall = { + id: toolCallId, + name: toolName, + arguments: JSON.stringify(args, null, 2), + state: toolCallState, + } + + if (existingToolIndex >= 0) { + updateToolCall(clientId, messageIndex, existingToolIndex, toolCall) + } else { + setToolCalls(clientId, messageIndex, [...toolCalls, toolCall]) + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'client:approval-requested', + (e) => { + const { clientId, messageId, toolCallId, approvalId } = e.payload + + if (!state.conversations[clientId]) return + + const conv = 
state.conversations[clientId] + const messageIndex = conv.messages.findIndex( + (m) => m.id === messageId, + ) + if (messageIndex === -1) return + + const message = conv.messages[messageIndex] + if (!message?.toolCalls) return + + const toolCallIndex = message.toolCalls.findIndex( + (t) => t.id === toolCallId, + ) + if (toolCallIndex === -1) return + + updateToolCall(clientId, messageIndex, toolCallIndex, { + approvalRequired: true, + approvalId, + state: 'approval-requested', + }) + }, + { withEventTarget: false }, + ), + ) + + // ============= Tool Events ============= + + cleanupFns.push( + aiEventClient.on( + 'tool:result-added', + (e) => { + const { clientId, toolCallId, output, state: resultState } = e.payload + + if (!state.conversations[clientId]) return + + const conv = state.conversations[clientId] + + for ( + let messageIndex = conv.messages.length - 1; + messageIndex >= 0; + messageIndex-- + ) { + const message = conv.messages[messageIndex] + if (!message?.toolCalls) continue + + const toolCallIndex = message.toolCalls.findIndex( + (t: ToolCall) => t.id === toolCallId, + ) + if (toolCallIndex >= 0) { + updateToolCall(clientId, messageIndex, toolCallIndex, { + result: output, + state: resultState === 'output-error' ? 'error' : 'complete', + }) + return + } + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'tool:approval-responded', + (e) => { + const { clientId, toolCallId, approved } = e.payload + + if (!state.conversations[clientId]) return + + const conv = state.conversations[clientId] + + for ( + let messageIndex = conv.messages.length - 1; + messageIndex >= 0; + messageIndex-- + ) { + const message = conv.messages[messageIndex] + if (!message?.toolCalls) continue + + const toolCallIndex = message.toolCalls.findIndex( + (t: ToolCall) => t.id === toolCallId, + ) + if (toolCallIndex >= 0) { + updateToolCall(clientId, messageIndex, toolCallIndex, { + state: approved ? 
'approved' : 'denied', + approvalRequired: false, + }) + return + } + } + }, + { withEventTarget: false }, + ), + ) + + // ============= Stream Events ============= + + cleanupFns.push( + aiEventClient.on( + 'stream:started', + (e) => { + const streamId = e.payload.streamId + const model = e.payload.model + const provider = e.payload.provider + const clientId = e.payload.clientId + + if (clientId && state.conversations[clientId]) { + streamToConversation.set(streamId, clientId) + updateConversation(clientId, { model, provider, status: 'active' }) + return + } + + const activeClient = Object.values(state.conversations).find( + (c) => c.type === 'client' && c.status === 'active' && !c.model, + ) + + if (activeClient) { + streamToConversation.set(streamId, activeClient.id) + updateConversation(activeClient.id, { model, provider }) + } else { + const existingServerConv = Object.values(state.conversations).find( + (c) => c.type === 'server' && c.model === model, + ) + + if (existingServerConv) { + streamToConversation.set(streamId, existingServerConv.id) + updateConversation(existingServerConv.id, { status: 'active' }) + } else { + const serverId = `server-${model}` + getOrCreateConversation(serverId, 'server', `${model} Server`) + streamToConversation.set(streamId, serverId) + updateConversation(serverId, { model, provider }) + } + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'stream:chunk:content', + (e) => { + const streamId = e.payload.streamId + const conversationId = streamToConversation.get(streamId) + if (!conversationId) return + + const chunk: Chunk = { + id: `chunk-${Date.now()}-${Math.random()}`, + type: 'content', + messageId: e.payload.messageId, + content: e.payload.content, + delta: e.payload.delta, + timestamp: e.payload.timestamp, + chunkCount: 1, + } + + const conv = state.conversations[conversationId] + if (conv?.type === 'client') { + addChunkToMessage(conversationId, chunk) + } else { + 
addChunk(conversationId, chunk) + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'stream:chunk:tool-call', + (e) => { + const streamId = e.payload.streamId + const conversationId = streamToConversation.get(streamId) + if (!conversationId) return + + const chunk: Chunk = { + id: `chunk-${Date.now()}-${Math.random()}`, + type: 'tool_call', + messageId: e.payload.messageId, + toolCallId: e.payload.toolCallId, + toolName: e.payload.toolName, + timestamp: e.payload.timestamp, + chunkCount: 1, + } + + const conv = state.conversations[conversationId] + if (conv?.type === 'client') { + addChunkToMessage(conversationId, chunk) + } else { + addChunk(conversationId, chunk) + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'stream:chunk:tool-result', + (e) => { + const streamId = e.payload.streamId + const conversationId = streamToConversation.get(streamId) + if (!conversationId) return + + const chunk: Chunk = { + id: `chunk-${Date.now()}-${Math.random()}`, + type: 'tool_result', + messageId: e.payload.messageId, + toolCallId: e.payload.toolCallId, + content: e.payload.result, + timestamp: e.payload.timestamp, + chunkCount: 1, + } + + const conv = state.conversations[conversationId] + if (conv?.type === 'client') { + addChunkToMessage(conversationId, chunk) + } else { + addChunk(conversationId, chunk) + } + + // Also update the toolCalls array with the result + if (conv && e.payload.toolCallId) { + for (let i = conv.messages.length - 1; i >= 0; i--) { + const message = conv.messages[i] + if (!message?.toolCalls) continue + + const toolCallIndex = message.toolCalls.findIndex( + (t) => t.id === e.payload.toolCallId, + ) + if (toolCallIndex >= 0) { + updateToolCall(conversationId, i, toolCallIndex, { + result: e.payload.result, + state: 'complete', + }) + break + } + } + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'stream:chunk:thinking', + (e) => { + 
const streamId = e.payload.streamId + const conversationId = streamToConversation.get(streamId) + if (!conversationId) return + + const chunk: Chunk = { + id: `chunk-${Date.now()}-${Math.random()}`, + type: 'thinking', + messageId: e.payload.messageId, + content: e.payload.content, + delta: e.payload.delta, + timestamp: e.payload.timestamp, + chunkCount: 1, + } + + const conv = state.conversations[conversationId] + if (conv?.type === 'client') { + addChunkToMessage(conversationId, chunk) + + if (e.payload.messageId) { + const messageIndex = conv.messages.findIndex( + (msg) => msg.id === e.payload.messageId, + ) + if (messageIndex !== -1) { + updateMessage(conversationId, messageIndex, { + thinkingContent: e.payload.content, + }) + } + } + } else { + addChunk(conversationId, chunk) + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'stream:chunk:done', + (e) => { + const streamId = e.payload.streamId + const conversationId = streamToConversation.get(streamId) + if (!conversationId) return + + const chunk: Chunk = { + id: `chunk-${Date.now()}-${Math.random()}`, + type: 'done', + messageId: e.payload.messageId, + finishReason: e.payload.finishReason || undefined, + timestamp: e.payload.timestamp, + chunkCount: 1, + } + + if (e.payload.usage) { + updateConversation(conversationId, { usage: e.payload.usage }) + updateMessageUsage( + conversationId, + e.payload.messageId, + e.payload.usage, + ) + } + + const conv = state.conversations[conversationId] + if (conv?.type === 'client') { + addChunkToMessage(conversationId, chunk) + } else { + addChunk(conversationId, chunk) + } + + // Mark as completed when we receive a done chunk with a terminal finish reason + const finishReason = e.payload.finishReason + if ( + finishReason === 'stop' || + finishReason === 'end_turn' || + finishReason === 'length' + ) { + updateConversation(conversationId, { + status: 'completed', + completedAt: e.payload.timestamp, + }) + } + }, + { withEventTarget: 
false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'stream:chunk:error', + (e) => { + const streamId = e.payload.streamId + const conversationId = streamToConversation.get(streamId) + if (!conversationId) return + + const chunk: Chunk = { + id: `chunk-${Date.now()}-${Math.random()}`, + type: 'error', + messageId: e.payload.messageId, + error: e.payload.error, + timestamp: e.payload.timestamp, + chunkCount: 1, + } + + const conv = state.conversations[conversationId] + if (conv?.type === 'client') { + addChunkToMessage(conversationId, chunk) + } else { + addChunk(conversationId, chunk) + } + + updateConversation(conversationId, { + status: 'error', + completedAt: e.payload.timestamp, + }) + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'stream:ended', + (e) => { + const streamId = e.payload.streamId + const conversationId = streamToConversation.get(streamId) + if (!conversationId) return + + updateConversation(conversationId, { + status: 'completed', + completedAt: e.payload.timestamp, + }) + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'stream:approval-requested', + (e) => { + const { + streamId, + messageId, + toolCallId, + toolName, + input, + approvalId, + timestamp, + } = e.payload + + const conversationId = streamToConversation.get(streamId) + if (!conversationId) return + + const conv = state.conversations[conversationId] + if (!conv) return + + const chunk: Chunk = { + id: `chunk-${Date.now()}-${Math.random()}`, + type: 'approval', + messageId: messageId, + toolCallId, + toolName, + approvalId, + input, + timestamp, + chunkCount: 1, + } + + if (conv.type === 'client') { + addChunkToMessage(conversationId, chunk) + } else { + addChunk(conversationId, chunk) + } + + for (let i = conv.messages.length - 1; i >= 0; i--) { + const message = conv.messages[i] + if (!message) continue + + if (message.role === 'assistant' && message.toolCalls) { + const toolCallIndex = 
message.toolCalls.findIndex( + (t: ToolCall) => t.id === toolCallId, + ) + if (toolCallIndex >= 0) { + updateToolCall(conversationId, i, toolCallIndex, { + approvalRequired: true, + approvalId, + state: 'approval-requested', + }) + return + } + } + } + }, + { withEventTarget: false }, + ), + ) + + // ============= Processor Events ============= + + cleanupFns.push( + aiEventClient.on( + 'processor:text-updated', + (e) => { + const streamId = e.payload.streamId + + let conversationId = streamToConversation.get(streamId) + + if (!conversationId) { + const activeClients = Object.values(state.conversations) + .filter((c) => c.type === 'client' && c.status === 'active') + .sort((a, b) => b.startedAt - a.startedAt) + + if (activeClients.length > 0 && activeClients[0]) { + conversationId = activeClients[0].id + streamToConversation.set(streamId, conversationId) + } + } + + if (!conversationId) return + + const conv = state.conversations[conversationId] + if (!conv) return + + const lastMessage = conv.messages[conv.messages.length - 1] + if (lastMessage && lastMessage.role === 'assistant') { + updateMessage(conversationId, conv.messages.length - 1, { + content: e.payload.content, + }) + } else { + addMessage(conversationId, { + id: `msg-assistant-${Date.now()}`, + role: 'assistant', + content: e.payload.content, + timestamp: e.payload.timestamp, + }) + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'processor:tool-call-state-changed', + (e) => { + const streamId = e.payload.streamId + const conversationId = streamToConversation.get(streamId) + + if (!conversationId || !state.conversations[conversationId]) return + + const conv = state.conversations[conversationId] + const lastMessage = conv.messages[conv.messages.length - 1] + + if (lastMessage && lastMessage.role === 'assistant') { + const toolCalls = lastMessage.toolCalls || [] + const existingToolIndex = toolCalls.findIndex( + (t) => t.id === e.payload.toolCallId, + ) + + const 
toolCall: ToolCall = { + id: e.payload.toolCallId, + name: e.payload.toolName, + arguments: JSON.stringify(e.payload.arguments, null, 2), + state: e.payload.state, + } + + if (existingToolIndex >= 0) { + updateToolCall( + conversationId, + conv.messages.length - 1, + existingToolIndex, + toolCall, + ) + } else { + setToolCalls(conversationId, conv.messages.length - 1, [ + ...toolCalls, + toolCall, + ]) + } + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'processor:tool-result-state-changed', + (e) => { + const streamId = e.payload.streamId + const conversationId = streamToConversation.get(streamId) + + if (!conversationId || !state.conversations[conversationId]) return + + const conv = state.conversations[conversationId] + + for (let i = conv.messages.length - 1; i >= 0; i--) { + const message = conv.messages[i] + if (!message?.toolCalls) continue + + const toolCallIndex = message.toolCalls.findIndex( + (t) => t.id === e.payload.toolCallId, + ) + if (toolCallIndex >= 0) { + updateToolCall(conversationId, i, toolCallIndex, { + result: e.payload.content, + state: e.payload.error ? 
'error' : e.payload.state, + }) + return + } + } + }, + { withEventTarget: false }, + ), + ) + + // ============= Chat Events (for usage tracking) ============= + + cleanupFns.push( + aiEventClient.on( + 'chat:started', + (e) => { + const { + requestId, + model, + clientId, + toolNames, + options, + providerOptions, + } = e.payload + + if (clientId && state.conversations[clientId]) { + requestToConversation.set(requestId, clientId) + updateConversation(clientId, { + model, + status: 'active', + toolNames, + options, + providerOptions, + }) + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'chat:completed', + (e) => { + const { requestId, usage } = e.payload + + const conversationId = requestToConversation.get(requestId) + if (conversationId && state.conversations[conversationId] && usage) { + updateConversation(conversationId, { usage }) + updateMessageUsage(conversationId, undefined, usage) + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'chat:iteration', + (e) => { + const { requestId, iterationNumber } = e.payload + + const conversationId = requestToConversation.get(requestId) + if (conversationId && state.conversations[conversationId]) { + updateConversation(conversationId, { + iterationCount: iterationNumber, + }) + } + }, + { withEventTarget: false }, + ), + ) + + cleanupFns.push( + aiEventClient.on( + 'usage:tokens', + (e) => { + const { requestId, usage, messageId } = e.payload + + const conversationId = requestToConversation.get(requestId) + if (conversationId && state.conversations[conversationId]) { + updateConversation(conversationId, { usage }) + updateMessageUsage(conversationId, messageId, usage) + } + }, + { withEventTarget: false }, + ), + ) + + // Cleanup all listeners on unmount + onCleanup(() => { + for (const cleanup of cleanupFns) { + cleanup() + } + streamToConversation.clear() + requestToConversation.clear() + }) + }) + + const contextValue: AIContextValue = 
{ + state, + clearAllConversations, + selectConversation, + } + + return ( + <AIContext.Provider value={contextValue}> + {props.children} + </AIContext.Provider> + ) +} diff --git a/packages/typescript/ai-devtools/src/store/ai-store.ts b/packages/typescript/ai-devtools/src/store/ai-store.ts index deddda286..600b9ffba 100644 --- a/packages/typescript/ai-devtools/src/store/ai-store.ts +++ b/packages/typescript/ai-devtools/src/store/ai-store.ts @@ -1,14 +1,2 @@ -// Re-export types from ai-context for backward compatibility -export type { - MessagePart, - ToolCall, - TokenUsage, - Message, - Chunk, - Conversation, - AIStoreState, -} from "./ai-context"; - -// Re-export the context and provider for components that need the full store -export { AIProvider, useAIStore } from "./ai-context"; - +// Re-export types from ai-context for backward compatibility +export type { ToolCall, Message, Chunk, Conversation } from './ai-context' diff --git a/packages/typescript/ai-devtools/src/styles/use-styles.ts b/packages/typescript/ai-devtools/src/styles/use-styles.ts index d92fcbee6..9a73fce27 100644 --- a/packages/typescript/ai-devtools/src/styles/use-styles.ts +++ b/packages/typescript/ai-devtools/src/styles/use-styles.ts @@ -304,8 +304,8 @@ const stylesFactory = (theme: 'light' | 'dark') => { background: ${t(colors.gray[100], colors.darkGray[800])}; border-radius: ${border.radius.lg}; box-shadow: ${tokens.shadow.md( - t(colors.gray[400] + alpha[80], colors.black + alpha[80]), - )}; + t(colors.gray[400] + alpha[80], colors.black + alpha[80]), + )}; padding: ${size[4]}; margin-bottom: ${size[4]}; border: 1px solid ${t(colors.gray[200], colors.darkGray[700])}; @@ -629,11 +629,19 @@ const stylesFactory = (theme: 'light' | 'dark') => { box-shadow: 0 1px 3px rgba(0, 0, 0, 0.12); `, messageCardUser: css` - background: linear-gradient(135deg, oklch(0.25 0.04 260) 0%, oklch(0.22 0.03 260) 100%); + background: linear-gradient( + 135deg, + oklch(0.25 0.04 260) 0%, + oklch(0.22 0.03 260) 100% + ); border: 1.5px solid oklch(0.5 0.15 260); `,
messageCardAssistant: css` - background: linear-gradient(135deg, oklch(0.25 0.04 142) 0%, oklch(0.22 0.03 142) 100%); + background: linear-gradient( + 135deg, + oklch(0.25 0.04 142) 0%, + oklch(0.22 0.03 142) 100% + ); border: 1.5px solid oklch(0.5 0.15 142); `, messageHeader: css` @@ -738,7 +746,10 @@ const stylesFactory = (theme: 'light' | 'dark') => { white-space: pre-wrap; word-break: break-word; color: oklch(0.85 0.02 260); - font-family: system-ui, -apple-system, sans-serif; + font-family: + system-ui, + -apple-system, + sans-serif; `, toolCallsContainer: css` margin-top: ${size[2]}; diff --git a/packages/typescript/ai-devtools/vite.config.ts b/packages/typescript/ai-devtools/vite.config.ts index c70016ffc..c3fff9d83 100644 --- a/packages/typescript/ai-devtools/vite.config.ts +++ b/packages/typescript/ai-devtools/vite.config.ts @@ -1,26 +1,25 @@ -import { defineConfig, mergeConfig } from 'vitest/config' -import { tanstackViteConfig } from '@tanstack/config/vite' -import solid from 'vite-plugin-solid' -import packageJson from './package.json' - -const config = defineConfig({ - plugins: [solid()], - test: { - name: packageJson.name, - dir: './tests', - watch: false, - environment: 'jsdom', - setupFiles: ['./tests/test-setup.ts'], - coverage: { enabled: true, provider: 'istanbul', include: ['src/**/*'] }, - typecheck: { enabled: true }, - }, -}) - -export default mergeConfig( - config, - tanstackViteConfig({ - entry: ['./src/index.ts', './src/production.ts'], - srcDir: './src', - cjs: false, - }), -) +import { defineConfig, mergeConfig } from 'vitest/config' +import { tanstackViteConfig } from '@tanstack/config/vite' +import solid from 'vite-plugin-solid' +import packageJson from './package.json' + +const config = defineConfig({ + plugins: [solid()], + test: { + name: packageJson.name, + dir: './tests', + watch: false, + environment: 'jsdom', + coverage: { enabled: true, include: ['src/**/*'] }, + typecheck: { enabled: true }, + }, +}) + +export default 
mergeConfig( + config, + tanstackViteConfig({ + entry: ['./src/index.ts', './src/production.ts'], + srcDir: './src', + cjs: false, + }), +) diff --git a/packages/typescript/ai-devtools/vitest.config.ts b/packages/typescript/ai-devtools/vitest.config.ts index 68ef5ebfb..57223ef4f 100644 --- a/packages/typescript/ai-devtools/vitest.config.ts +++ b/packages/typescript/ai-devtools/vitest.config.ts @@ -1,10 +1,9 @@ -import { defineConfig } from "vitest/config"; +import { defineConfig } from 'vitest/config' export default defineConfig({ test: { globals: true, - environment: "node", - include: ["tests/**/*.test.ts"], + environment: 'node', + include: ['tests/**/*.test.ts'], }, -}); - +}) diff --git a/packages/typescript/ai-fallback/README.md b/packages/typescript/ai-fallback/README.md deleted file mode 100644 index 883653482..000000000 --- a/packages/typescript/ai-fallback/README.md +++ /dev/null @@ -1,143 +0,0 @@ -# @tanstack/ai-fallback - -Automatic fallback wrapper for TanStack AI - try multiple adapters in sequence until one succeeds. - -## Installation - -```bash -npm install @tanstack/ai-fallback -``` - -## Quick Start - -```typescript -import { ai } from '@tanstack/ai'; -import { openai } from '@tanstack/ai-openai'; -import { anthropic } from '@tanstack/ai-anthropic'; -import { fallback, withModel } from '@tanstack/ai-fallback'; - -// Create AI instances with model and options pre-bound -const openAI = withModel(ai(openai()), { - model: 'gpt-4', - temperature: 0.7, -}); - -const anthropicAI = withModel(ai(anthropic()), { - model: 'claude-3-5-sonnet-20241022', - temperature: 0.8, -}); - -// Create fallback wrapper - tries openAI first, then anthropicAI -const aiWithFallback = fallback([openAI, anthropicAI]); - -// Use it - only need to pass messages now! -const stream = aiWithFallback.chat({ - messages: [{ role: 'user', content: 'Hello!' 
}], -}); - -for await (const chunk of stream) { - if (chunk.type === 'content') { - console.log(chunk.delta); - } -} -``` - -## API - -### `withModel(ai, options)` - -Creates a `BoundAI` instance with model and options pre-configured. - -**Parameters:** -- `ai`: An `AI` instance -- `options`: Model and options to bind (everything except `messages`/`input`) - -**Returns:** `BoundAI` instance - -### `fallback(instances, config?)` - -Creates a `FallbackAI` instance that tries multiple `BoundAI` instances in sequence. - -**Parameters:** -- `instances`: Array of `BoundAI` instances to try in order -- `config`: Optional configuration: - - `onError?: (adapterName: string, error: Error) => void` - Called when an adapter fails - - `stopOnError?: (error: Error) => boolean` - Return `true` to stop trying other adapters - -**Returns:** `FallbackAI` instance - -## Use Cases - -### Rate Limit Protection - -```typescript -const openAI = withModel(ai(openai()), { model: 'gpt-4' }); -const anthropicAI = withModel(ai(anthropic()), { model: 'claude-3-5-sonnet-20241022' }); - -const aiWithFallback = fallback([openAI, anthropicAI]); - -// If OpenAI hits rate limit, automatically uses Anthropic -const result = await aiWithFallback.chatCompletion({ - messages: [{ role: 'user', content: 'Hello!' }], -}); -``` - -### Cost Optimization - -```typescript -const localAI = withModel(ai(ollama()), { model: 'llama3' }); -const cloudAI = withModel(ai(openai()), { model: 'gpt-4' }); - -// Try cheap local option first, fall back to cloud if needed -const aiWithFallback = fallback([localAI, cloudAI]); -``` - -### Error Handling - -```typescript -const aiWithFallback = fallback([openAI, anthropicAI], { - onError: (adapterName, error) => { - console.error(`Adapter ${adapterName} failed:`, error); - // Send to monitoring service, etc. 
- }, - stopOnError: (error) => { - // Stop trying if it's a 401 (auth error) - won't work with other adapters - return error.message.includes('401'); - }, -}); -``` - -## Supported Methods - -All methods work the same as the regular `AI` class, but only require `messages`/`input`: - -- `chat({ messages, ... })` - Stream chat with automatic tool execution -- `chatCompletion({ messages, ... })` - Complete chat with optional structured output -- `embed({ input, ... })` - Generate embeddings -- `summarize({ text, ... })` - Summarize text -- `image({ prompt, ... })` - Generate images -- `audio({ file, ... })` - Transcribe audio -- `speak({ input, voice, ... })` - Generate speech -- `video({ prompt, ... })` - Generate videos - -## How It Works - -1. When you call a method, `FallbackAI` tries the first `BoundAI` instance -2. If it fails (throws an error), it automatically tries the next one -3. Continues until one succeeds or all fail -4. If all fail, throws a comprehensive error listing all failures - -## Streaming Behavior - -For streaming methods (`chat`), the fallback works as follows: - -- The stream must succeed before yielding chunks (can't retry mid-stream) -- If an error occurs before the first chunk, it tries the next adapter -- Once streaming starts, errors are forwarded (no retry) - -This means fallback only happens before streaming begins, not during the stream. 
- -## License - -MIT - diff --git a/packages/typescript/ai-fallback/package.json b/packages/typescript/ai-fallback/package.json deleted file mode 100644 index 60ff99efd..000000000 --- a/packages/typescript/ai-fallback/package.json +++ /dev/null @@ -1,53 +0,0 @@ -{ - "name": "@tanstack/ai-fallback", - "version": "0.1.0", - "description": "Fallback wrapper for TanStack AI - automatically try multiple adapters in sequence", - "author": "", - "license": "MIT", - "repository": { - "type": "git", - "url": "git+https://github.com/TanStack/ai.git", - "directory": "packages/typescript/ai-fallback" - }, - "type": "module", - "module": "./dist/index.js", - "types": "./dist/index.d.ts", - "exports": { - ".": { - "types": "./dist/index.d.ts", - "import": "./dist/index.js" - } - }, - "files": [ - "dist", - "src" - ], - "scripts": { - "build": "tsdown", - "dev": "tsdown --watch", - "test": "exit 0", - "test:watch": "vitest", - "clean": "rm -rf dist node_modules", - "typecheck": "tsc --noEmit", - "lint": "tsc --noEmit" - }, - "keywords": [ - "ai", - "fallback", - "tanstack", - "adapter", - "retry" - ], - "dependencies": { - "@tanstack/ai": "workspace:*" - }, - "devDependencies": { - "@types/node": "^22.10.2", - "tsdown": "^0.15.9", - "typescript": "^5.7.2", - "vitest": "^4.0.13" - }, - "peerDependencies": { - "@tanstack/ai": "workspace:*" - } -} \ No newline at end of file diff --git a/packages/typescript/ai-fallback/src/bound-ai.ts b/packages/typescript/ai-fallback/src/bound-ai.ts deleted file mode 100644 index 116f00b46..000000000 --- a/packages/typescript/ai-fallback/src/bound-ai.ts +++ /dev/null @@ -1,235 +0,0 @@ -import type { - AIAdapter, - ChatCompletionOptions, - StreamChunk, - SummarizationOptions, - SummarizationResult, - EmbeddingOptions, - EmbeddingResult, - ImageGenerationOptions, - ImageGenerationResult, - AudioTranscriptionOptions, - AudioTranscriptionResult, - TextToSpeechOptions, - TextToSpeechResult, - VideoGenerationOptions, - VideoGenerationResult, - Tool, - 
ResponseFormat, -} from "@tanstack/ai"; -import type { BoundOptions, ChatCompletionReturnType, AI } from "./types"; - -type ExtractChatProviderOptions<T> = T extends AIAdapter< - any, - any, - any, - any, - any, - infer P, - any, - any, - any, - any -> - ? P - : Record<string, unknown>; - -type ExtractImageProviderOptions<T> = T extends AIAdapter< - any, - any, - any, - any, - any, - any, - infer P, - any, - any, - any -> - ? P - : Record<string, unknown>; - -type ExtractAudioProviderOptions<T> = T extends AIAdapter< - any, - any, - any, - any, - any, - any, - any, - any, - infer P, - any -> - ? P - : Record<string, unknown>; - -type ExtractVideoProviderOptions<T> = T extends AIAdapter< - any, - any, - any, - any, - any, - any, - any, - any, - any, - infer P -> - ? P - : Record<string, unknown>; - -/** - * BoundAI - Wraps an AI instance with pre-bound model and options - * - * This allows you to create AI instances with model and options already configured, - * so you only need to pass messages/input at call time. - */ -export class BoundAI< - TAdapter extends AIAdapter -> { - private ai: AI<TAdapter>; - private boundOptions: BoundOptions<TAdapter>; - - constructor(ai: AI<TAdapter>, boundOptions: BoundOptions<TAdapter>) { - this.ai = ai; - this.boundOptions = boundOptions; - } - - /** - * Get the adapter name for logging/debugging - */ - get adapterName(): string { - // Access private adapter field via type assertion - // This is safe since we're just reading the name property - return ((this.ai as any).adapter as TAdapter).name || "unknown"; - } - - /** - * Stream a chat conversation with automatic tool execution - */ - async *chat( - options: Omit<ChatCompletionOptions, "model"> & { - messages: ChatCompletionOptions["messages"]; - tools?: ReadonlyArray<Tool>; - systemPrompts?: string[]; - providerOptions?: ExtractChatProviderOptions<TAdapter>; - } - ): AsyncIterable<StreamChunk> { - yield* this.ai.chat({ - ...this.boundOptions, - ...options, - } as any); - } - - /** - * Complete a chat conversation with optional structured output - */ - async chatCompletion< - TOptions extends { - output?: ResponseFormat; - providerOptions?:
ExtractChatProviderOptions<TAdapter>; - } - >( - options: Omit<ChatCompletionOptions, "model"> & { - messages: ChatCompletionOptions["messages"]; - tools?: ReadonlyArray<Tool>; - systemPrompts?: string[]; - } & TOptions - ): Promise<ChatCompletionReturnType<TOptions>> { - return this.ai.chatCompletion({ - ...this.boundOptions, - ...options, - } as any) as Promise<ChatCompletionReturnType<TOptions>>; - } - - /** - * Summarize text - */ - async summarize( - options: Omit<SummarizationOptions, "model"> & { - text: string; - } - ): Promise<SummarizationResult> { - return this.ai.summarize({ - ...this.boundOptions, - ...options, - } as any); - } - - /** - * Generate embeddings - */ - async embed( - options: Omit<EmbeddingOptions, "model"> & { - input: string | string[]; - } - ): Promise<EmbeddingResult> { - return this.ai.embed({ - ...this.boundOptions, - ...options, - } as any); - } - - /** - * Generate an image - */ - async image( - options: Omit<ImageGenerationOptions, "model"> & { - prompt: string; - providerOptions?: ExtractImageProviderOptions<TAdapter>; - } - ): Promise<ImageGenerationResult> { - return this.ai.image({ - ...this.boundOptions, - ...options, - } as any); - } - - /** - * Transcribe audio - */ - async audio( - options: Omit<AudioTranscriptionOptions, "model"> & { - file: Blob | Buffer; - providerOptions?: ExtractAudioProviderOptions<TAdapter>; - } - ): Promise<AudioTranscriptionResult> { - return this.ai.audio({ - ...this.boundOptions, - ...options, - } as any); - } - - /** - * Generate speech from text - */ - async speak( - options: Omit<TextToSpeechOptions, "model"> & { - input: string; - voice: string; - providerOptions?: ExtractChatProviderOptions<TAdapter>; - } - ): Promise<TextToSpeechResult> { - return this.ai.speak({ - ...this.boundOptions, - ...options, - } as any); - } - - /** - * Generate a video - */ - async video( - options: Omit<VideoGenerationOptions, "model"> & { - prompt: string; - providerOptions?: ExtractVideoProviderOptions<TAdapter>; - } - ): Promise<VideoGenerationResult> { - return this.ai.video({ - ...this.boundOptions, - ...options, - } as any); - } -} - diff --git a/packages/typescript/ai-fallback/src/fallback-ai.ts b/packages/typescript/ai-fallback/src/fallback-ai.ts deleted file mode 100644 index f992dad6c..000000000 --- a/packages/typescript/ai-fallback/src/fallback-ai.ts +++ /dev/null @@ -1,263 +0,0 @@ -import type { - ChatCompletionOptions, - StreamChunk, - SummarizationOptions, -
SummarizationResult, - EmbeddingOptions, - EmbeddingResult, - ImageGenerationOptions, - ImageGenerationResult, - AudioTranscriptionOptions, - AudioTranscriptionResult, - TextToSpeechOptions, - TextToSpeechResult, - VideoGenerationOptions, - VideoGenerationResult, - Tool, - ResponseFormat, -} from "@tanstack/ai"; -import type { BoundAI } from "./bound-ai"; -import type { ChatCompletionReturnType, FallbackConfig } from "./types"; - -/** - * FallbackAI - Wraps multiple BoundAI instances and tries them in sequence - * - * When a method is called, it tries the first BoundAI instance. If it fails, - * it automatically tries the next one, and so on, until one succeeds or all fail. - */ -export class FallbackAI { - private instances: BoundAI<any>[]; - private config: FallbackConfig; - - constructor(instances: BoundAI<any>[], config: FallbackConfig = {}) { - if (instances.length === 0) { - throw new Error("At least one AI instance is required for fallback"); - } - this.instances = instances; - this.config = config; - } - - /** - * Try multiple adapters in order until one succeeds - */ - private async tryWithFallback<T>( - operation: (instance: BoundAI<any>) => Promise<T>, - operationName: string - ): Promise<T> { - const errors: Array<{ adapter: string; error: Error }> = []; - - for (const instance of this.instances) { - try { - return await operation(instance); - } catch (error: any) { - const err = error instanceof Error ?
error : new Error(String(error)); - errors.push({ - adapter: instance.adapterName, - error: err, - }); - - // Call error handler if provided - if (this.config.onError) { - this.config.onError(instance.adapterName, err); - } - - // Check if we should stop trying - if (this.config.stopOnError && this.config.stopOnError(err)) { - throw err; - } - - // Log warning - console.warn( - `[AI Fallback] Adapter "${instance.adapterName}" failed for ${operationName}:`, - err.message - ); - } - } - - // All adapters failed, throw comprehensive error - const errorMessage = errors - .map((e) => ` - ${e.adapter}: ${e.error.message}`) - .join("\n"); - throw new Error( - `All adapters failed for ${operationName}:\n${errorMessage}` - ); - } - - /** - * Try multiple adapters in order until one succeeds (async generator version) - */ - private async *tryStreamWithFallback<T>( - operation: (instance: BoundAI<any>) => AsyncIterable<T>, - operationName: string - ): AsyncIterable<T> { - const errors: Array<{ adapter: string; error: Error }> = []; - - for (const instance of this.instances) { - try { - for await (const chunk of operation(instance)) { - yield chunk; - } - // If we got here, the stream completed successfully - return; - } catch (error: any) { - const err = error instanceof Error ?
error : new Error(String(error)); - errors.push({ - adapter: instance.adapterName, - error: err, - }); - - // Call error handler if provided - if (this.config.onError) { - this.config.onError(instance.adapterName, err); - } - - // Check if we should stop trying - if (this.config.stopOnError && this.config.stopOnError(err)) { - throw err; - } - - // Log warning - console.warn( - `[AI Fallback] Adapter "${instance.adapterName}" failed for ${operationName}:`, - err.message - ); - } - } - - // All adapters failed - const errorMessage = errors - .map((e) => ` - ${e.adapter}: ${e.error.message}`) - .join("\n"); - throw new Error( - `All adapters failed for ${operationName}:\n${errorMessage}` - ); - } - - /** - * Stream a chat conversation with automatic tool execution - */ - async *chat( - options: Omit<ChatCompletionOptions, "model"> & { - messages: ChatCompletionOptions["messages"]; - tools?: ReadonlyArray<Tool>; - systemPrompts?: string[]; - providerOptions?: Record<string, unknown>; - } - ): AsyncIterable<StreamChunk> { - yield* this.tryStreamWithFallback( - (instance) => instance.chat(options), - "chat" - ); - } - - /** - * Complete a chat conversation with optional structured output - */ - async chatCompletion< - TOptions extends { - output?: ResponseFormat; - providerOptions?: Record<string, unknown>; - } - >( - options: Omit<ChatCompletionOptions, "model"> & { - messages: ChatCompletionOptions["messages"]; - tools?: ReadonlyArray<Tool>; - systemPrompts?: string[]; - } & TOptions - ): Promise<ChatCompletionReturnType<TOptions>> { - return this.tryWithFallback( - (instance) => instance.chatCompletion(options), - "chatCompletion" - ); - } - - /** - * Summarize text - */ - async summarize( - options: Omit<SummarizationOptions, "model"> & { - text: string; - } - ): Promise<SummarizationResult> { - return this.tryWithFallback( - (instance) => instance.summarize(options), - "summarize" - ); - } - - /** - * Generate embeddings - */ - async embed( - options: Omit<EmbeddingOptions, "model"> & { - input: string | string[]; - } - ): Promise<EmbeddingResult> { - return this.tryWithFallback( - (instance) => instance.embed(options), - "embed" - ); - } - - /** - * Generate an image - */ - async image( - options: Omit<ImageGenerationOptions, "model"> & { -
prompt: string; - providerOptions?: Record<string, unknown>; - } - ): Promise<ImageGenerationResult> { - return this.tryWithFallback( - (instance) => instance.image(options), - "image" - ); - } - - /** - * Transcribe audio - */ - async audio( - options: Omit<AudioTranscriptionOptions, "model"> & { - file: Blob | Buffer; - providerOptions?: Record<string, unknown>; - } - ): Promise<AudioTranscriptionResult> { - return this.tryWithFallback( - (instance) => instance.audio(options), - "audio" - ); - } - - /** - * Generate speech from text - */ - async speak( - options: Omit<TextToSpeechOptions, "model"> & { - input: string; - voice: string; - providerOptions?: Record<string, unknown>; - } - ): Promise<TextToSpeechResult> { - return this.tryWithFallback( - (instance) => instance.speak(options), - "speak" - ); - } - - /** - * Generate a video - */ - async video( - options: Omit<VideoGenerationOptions, "model"> & { - prompt: string; - providerOptions?: Record<string, unknown>; - } - ): Promise<VideoGenerationResult> { - return this.tryWithFallback( - (instance) => instance.video(options), - "video" - ); - } -} - diff --git a/packages/typescript/ai-fallback/src/index.ts b/packages/typescript/ai-fallback/src/index.ts deleted file mode 100644 index 6fd74cda1..000000000 --- a/packages/typescript/ai-fallback/src/index.ts +++ /dev/null @@ -1,107 +0,0 @@ -import type { AIAdapter, ChatCompletionOptions, Tool } from "@tanstack/ai"; -import { BoundAI } from "./bound-ai"; -import { FallbackAI } from "./fallback-ai"; -import type { BoundOptions, FallbackConfig, AI } from "./types"; - -// Extract types from adapter -type ExtractModels<T> = T extends AIAdapter< - infer M, - any, - any, - any, - any, - any, - any, - any, - any, - any -> - ? M[number] - : string; - -type ExtractChatProviderOptions<T> = T extends AIAdapter< - any, - any, - any, - any, - any, - infer P, - any, - any, - any, - any -> - ?
P - : Record<string, unknown>; - -/** - * Create a BoundAI instance that wraps an AI instance with pre-bound model and options - * - * @param ai - The AI instance to wrap - * @param options - Model and options to bind (everything except messages/input) - * @returns A BoundAI instance with model and options pre-configured - * - * @example - * ```typescript - * import { ai } from '@tanstack/ai'; - * import { openai } from '@tanstack/ai-openai'; - * import { withModel } from '@tanstack/ai-fallback'; - * - * const openAI = withModel(ai(openai()), { - * model: 'gpt-4', - * temperature: 0.7, - * }); - * - * // Now you can call chat without specifying model - * const stream = openAI.chat({ messages: [...] }); - * ``` - */ -export function withModel< - TAdapter extends AIAdapter ->( - aiInstance: AI<TAdapter>, - options: Omit<ChatCompletionOptions, "model" | "messages"> & { - model: ExtractModels<TAdapter>; - providerOptions?: ExtractChatProviderOptions<TAdapter>; - tools?: ReadonlyArray<Tool>; - systemPrompts?: string[]; - } -): BoundAI<TAdapter> { - return new BoundAI(aiInstance, options as BoundOptions<TAdapter>); -} - -/** - * Create a FallbackAI instance that tries multiple BoundAI instances in sequence - * - * @param instances - Array of BoundAI instances to try in order - * @param config - Optional fallback configuration - * @returns A FallbackAI instance that tries each adapter until one succeeds - * - * @example - * ```typescript - * import { ai } from '@tanstack/ai'; - * import { openai } from '@tanstack/ai-openai'; - * import { anthropic } from '@tanstack/ai-anthropic'; - * import { fallback, withModel } from '@tanstack/ai-fallback'; - * - * const openAI = withModel(ai(openai()), { model: 'gpt-4' }); - * const anthropicAI = withModel(ai(anthropic()), { model: 'claude-3-5-sonnet-20241022' }); - * - * const aiWithFallback = fallback([openAI, anthropicAI]); - * - * // Tries openAI first, then anthropicAI if it fails - * const stream = aiWithFallback.chat({ messages: [...]
}); - * ``` - */ -export function fallback( - instances: BoundAI[], - config?: FallbackConfig -): FallbackAI { - return new FallbackAI(instances, config); -} - -// Re-export types -export type { BoundAI } from "./bound-ai"; -export type { FallbackAI } from "./fallback-ai"; -export type { BoundOptions, BoundChatOptions, FallbackConfig, ChatCompletionReturnType } from "./types"; - diff --git a/packages/typescript/ai-fallback/src/types.ts b/packages/typescript/ai-fallback/src/types.ts deleted file mode 100644 index 2dba6d893..000000000 --- a/packages/typescript/ai-fallback/src/types.ts +++ /dev/null @@ -1,76 +0,0 @@ -import type { ai } from "@tanstack/ai"; -import type { - AIAdapter, - ChatCompletionOptions, - ResponseFormat, - Tool, - ChatCompletionResult, -} from "@tanstack/ai"; - -// Extract AI type from the ai function -export type AI = AIAdapter> = ReturnType>; - -// Extract adapter type from AI instance -export type ExtractAdapter = T extends AI ? A : never; - -// Extract model types from adapter -type ExtractModels = T extends AIAdapter< - infer M, - any, - any, - any, - any, - any, - any, - any, - any, - any -> - ? M[number] - : string; - -type ExtractChatProviderOptions = T extends AIAdapter< - any, - any, - any, - any, - any, - infer P, - any, - any, - any, - any -> - ? P - : Record; - -// Bound options type - all chat options except messages and model -export type BoundChatOptions> = Omit< - ChatCompletionOptions, - "model" | "messages" | "providerOptions" | "responseFormat" -> & { - model: ExtractModels; - providerOptions?: ExtractChatProviderOptions; - tools?: ReadonlyArray; - systemPrompts?: string[]; -}; - -// Options that can be bound (excludes messages/input) -export type BoundOptions> = Omit< - BoundChatOptions, - "messages" ->; - -// Helper type for chatCompletion return type -export type ChatCompletionReturnType< - TOptions extends { output?: ResponseFormat } -> = TOptions["output"] extends ResponseFormat - ? 
ChatCompletionResult<TOptions["output"]>
-  : ChatCompletionResult;
-
-// Fallback configuration
-export interface FallbackConfig {
-  onError?: (adapterName: string, error: Error) => void;
-  stopOnError?: (error: Error) => boolean;
-}
-
diff --git a/packages/typescript/ai-fallback/tests/fallback.test.ts b/packages/typescript/ai-fallback/tests/fallback.test.ts
deleted file mode 100644
index 50034826b..000000000
--- a/packages/typescript/ai-fallback/tests/fallback.test.ts
+++ /dev/null
@@ -1,459 +0,0 @@
-import { describe, it, expect, vi, beforeEach } from "vitest";
-import { ai } from "@tanstack/ai";
-import { BaseAdapter } from "@tanstack/ai";
-import type {
-  ChatCompletionOptions,
-  ChatCompletionResult,
-  StreamChunk,
-  SummarizationOptions,
-  SummarizationResult,
-  EmbeddingOptions,
-  EmbeddingResult,
-} from "@tanstack/ai";
-import { fallback, withModel } from "../src/index";
-
-// Mock adapter that can be configured to succeed or fail
-class MockAdapter extends BaseAdapter<
-  readonly ["test-model"],
-  readonly [],
-  readonly [],
-  readonly [],
-  readonly []
-> {
-  name: string;
-  models = ["test-model"] as const;
-  private shouldFail: boolean;
-  private errorMessage: string;
-  private succeedWith: any;
-
-  constructor(
-    name: string,
-    shouldFail: boolean = false,
-    errorMessage: string = "Adapter failed",
-    succeedWith?: any
-  ) {
-    super();
-    this.name = name;
-    this.shouldFail = shouldFail;
-    this.errorMessage = errorMessage;
-    this.succeedWith = succeedWith || {
-      id: `${name}-123`,
-      model: "test-model",
-      content: `Success from ${name}`,
-      usage: {
-        promptTokens: 10,
-        completionTokens: 20,
-        totalTokens: 30,
-      },
-    };
-  }
-
-  async *chatStream(
-    _options: ChatCompletionOptions
-  ): AsyncIterable<StreamChunk> {
-    if (this.shouldFail) {
-      throw new Error(this.errorMessage);
-    }
-    const id = `${this.name}-123`;
-    const timestamp = Date.now();
-    yield {
-      type: "content",
-      id,
-      model: "test-model",
-      timestamp,
-      delta: "Hello",
-      content: "Hello",
-      role: "assistant",
-    };
-    yield {
-      type: "content",
-      id,
-      model: "test-model",
-      timestamp,
-      delta: " World",
-      content: "Hello World",
-      role: "assistant",
-    };
-    yield {
-      type: "done",
-      id,
-      model: "test-model",
-      timestamp,
-      finishReason: "stop",
-      usage: {
-        promptTokens: 10,
-        completionTokens: 20,
-        totalTokens: 30,
-      },
-    };
-  }
-
-  async generateText(_options: any): Promise<{ text: string }> {
-    if (this.shouldFail) {
-      throw new Error(this.errorMessage);
-    }
-    return { text: `Text from ${this.name}` };
-  }
-
-  async *generateTextStream(_options: any): AsyncIterable<string> {
-    if (this.shouldFail) {
-      throw new Error(this.errorMessage);
-    }
-    yield "text";
-    yield " chunk";
-  }
-
-  async summarize(options: SummarizationOptions): Promise<SummarizationResult> {
-    if (this.shouldFail) {
-      throw new Error(this.errorMessage);
-    }
-    return {
-      summary: `Summary from ${this.name}`,
-      id: `${this.name}-123`,
-      model: options.model,
-      usage: {
-        promptTokens: 10,
-        completionTokens: 20,
-        totalTokens: 30,
-      },
-    };
-  }
-
-  async createEmbeddings(options: EmbeddingOptions): Promise<EmbeddingResult> {
-    if (this.shouldFail) {
-      throw new Error(this.errorMessage);
-    }
-    return {
-      embeddings: [[0.1, 0.2, 0.3]],
-      id: `${this.name}-123`,
-      model: options.model,
-      usage: {
-        promptTokens: 10,
-        completionTokens: 0,
-        totalTokens: 10,
-      },
-    };
-  }
-}
-
-describe("ai-fallback", () => {
-  beforeEach(() => {
-    vi.clearAllMocks();
-    vi.spyOn(console, "warn").mockImplementation(() => {});
-  });
-
-  describe("withModel", () => {
-    it("should create a BoundAI instance", () => {
-      const adapter = new MockAdapter("test-adapter");
-      const aiInstance = ai(adapter);
-      const bound = withModel(aiInstance, {
-        model: "test-model",
-        temperature: 0.7,
-      });
-
-      expect(bound).toBeDefined();
-      expect(bound.adapterName).toBe("test-adapter");
-    });
-  });
-
-  describe("fallback - chatCompletion", () => {
-    it("should use first adapter when it succeeds", async () => {
-      const adapter1 = new MockAdapter("adapter1", false);
-      const adapter2 = new MockAdapter("adapter2", false);
-
-
const ai1 = ai(adapter1); - const ai2 = ai(adapter2); - - const bound1 = withModel(ai1, { model: "test-model" }); - const bound2 = withModel(ai2, { model: "test-model" }); - - const fallbackAI = fallback([bound1, bound2]); - - const result = await fallbackAI.chatCompletion({ - messages: [{ role: "user", content: "Hello" }], - }); - - expect(result.content).toBe("Success from adapter1"); - expect(console.warn).not.toHaveBeenCalled(); - }); - - it("should fallback to second adapter when first fails", async () => { - const adapter1 = new MockAdapter("adapter1", true, "Rate limit exceeded"); - const adapter2 = new MockAdapter("adapter2", false); - - const ai1 = ai(adapter1); - const ai2 = ai(adapter2); - - const bound1 = withModel(ai1, { model: "test-model" }); - const bound2 = withModel(ai2, { model: "test-model" }); - - const fallbackAI = fallback([bound1, bound2]); - - const result = await fallbackAI.chatCompletion({ - messages: [{ role: "user", content: "Hello" }], - }); - - expect(result.content).toBe("Success from adapter2"); - expect(console.warn).toHaveBeenCalledWith( - expect.stringContaining('Adapter "adapter1" failed'), - "Rate limit exceeded" - ); - }); - - it("should throw comprehensive error when all adapters fail", async () => { - const adapter1 = new MockAdapter("adapter1", true, "Rate limit exceeded"); - const adapter2 = new MockAdapter("adapter2", true, "Service unavailable"); - - const ai1 = ai(adapter1); - const ai2 = ai(adapter2); - - const bound1 = withModel(ai1, { model: "test-model" }); - const bound2 = withModel(ai2, { model: "test-model" }); - - const fallbackAI = fallback([bound1, bound2]); - - await expect( - fallbackAI.chatCompletion({ - messages: [{ role: "user", content: "Hello" }], - }) - ).rejects.toThrow("All adapters failed for chatCompletion"); - - await expect( - fallbackAI.chatCompletion({ - messages: [{ role: "user", content: "Hello" }], - }) - ).rejects.toThrow(/adapter1: Rate limit exceeded/); - - await expect( - 
fallbackAI.chatCompletion({ - messages: [{ role: "user", content: "Hello" }], - }) - ).rejects.toThrow(/adapter2: Service unavailable/); - }); - - it("should call onError callback when adapter fails", async () => { - const adapter1 = new MockAdapter("adapter1", true, "Rate limit exceeded"); - const adapter2 = new MockAdapter("adapter2", false); - - const ai1 = ai(adapter1); - const ai2 = ai(adapter2); - - const bound1 = withModel(ai1, { model: "test-model" }); - const bound2 = withModel(ai2, { model: "test-model" }); - - const onError = vi.fn(); - const fallbackAI = fallback([bound1, bound2], { onError }); - - await fallbackAI.chatCompletion({ - messages: [{ role: "user", content: "Hello" }], - }); - - expect(onError).toHaveBeenCalledWith("adapter1", expect.any(Error)); - expect(onError).toHaveBeenCalledTimes(1); - }); - - it("should stop trying when stopOnError returns true", async () => { - const adapter1 = new MockAdapter("adapter1", true, "401 Unauthorized"); - const adapter2 = new MockAdapter("adapter2", false); - - const ai1 = ai(adapter1); - const ai2 = ai(adapter2); - - const bound1 = withModel(ai1, { model: "test-model" }); - const bound2 = withModel(ai2, { model: "test-model" }); - - const stopOnError = (error: Error) => error.message.includes("401"); - const fallbackAI = fallback([bound1, bound2], { stopOnError }); - - await expect( - fallbackAI.chatCompletion({ - messages: [{ role: "user", content: "Hello" }], - }) - ).rejects.toThrow("401 Unauthorized"); - - // When stopOnError returns true, we throw immediately without logging warning - // Should not have tried adapter2 (no warnings at all when stopping early) - expect(console.warn).not.toHaveBeenCalled(); - }); - }); - - describe("fallback - chat (streaming)", () => { - it("should use first adapter when it succeeds", async () => { - const adapter1 = new MockAdapter("adapter1", false); - const adapter2 = new MockAdapter("adapter2", false); - - const ai1 = ai(adapter1); - const ai2 = ai(adapter2); - - 
const bound1 = withModel(ai1, { model: "test-model" }); - const bound2 = withModel(ai2, { model: "test-model" }); - - const fallbackAI = fallback([bound1, bound2]); - - const chunks: StreamChunk[] = []; - for await (const chunk of fallbackAI.chat({ - messages: [{ role: "user", content: "Hello" }], - })) { - chunks.push(chunk); - } - - expect(chunks.length).toBeGreaterThan(0); - expect(chunks[0].model).toBe("test-model"); - expect(console.warn).not.toHaveBeenCalled(); - }); - - it("should fallback to second adapter when first fails before streaming", async () => { - const adapter1 = new MockAdapter("adapter1", true, "Connection error"); - const adapter2 = new MockAdapter("adapter2", false); - - const ai1 = ai(adapter1); - const ai2 = ai(adapter2); - - const bound1 = withModel(ai1, { model: "test-model" }); - const bound2 = withModel(ai2, { model: "test-model" }); - - const fallbackAI = fallback([bound1, bound2]); - - const chunks: StreamChunk[] = []; - for await (const chunk of fallbackAI.chat({ - messages: [{ role: "user", content: "Hello" }], - })) { - chunks.push(chunk); - } - - expect(chunks.length).toBeGreaterThan(0); - expect(chunks[0].model).toBe("test-model"); - expect(console.warn).toHaveBeenCalledWith( - expect.stringContaining('Adapter "adapter1" failed'), - "Connection error" - ); - }); - - it("should throw when all adapters fail", async () => { - const adapter1 = new MockAdapter("adapter1", true, "Error 1"); - const adapter2 = new MockAdapter("adapter2", true, "Error 2"); - - const ai1 = ai(adapter1); - const ai2 = ai(adapter2); - - const bound1 = withModel(ai1, { model: "test-model" }); - const bound2 = withModel(ai2, { model: "test-model" }); - - const fallbackAI = fallback([bound1, bound2]); - - const chunks: StreamChunk[] = []; - let error: Error | null = null; - - try { - for await (const chunk of fallbackAI.chat({ - messages: [{ role: "user", content: "Hello" }], - })) { - chunks.push(chunk); - } - } catch (e) { - error = e as Error; - } - - 
expect(error).toBeTruthy(); - expect(error!.message).toContain("All adapters failed for chat"); - expect(error!.message).toContain("adapter1: Error 1"); - expect(error!.message).toContain("adapter2: Error 2"); - }); - }); - - describe("fallback - embed", () => { - it("should use first adapter when it succeeds", async () => { - const adapter1 = new MockAdapter("adapter1", false); - const adapter2 = new MockAdapter("adapter2", false); - - const ai1 = ai(adapter1); - const ai2 = ai(adapter2); - - const bound1 = withModel(ai1, { model: "test-model" }); - const bound2 = withModel(ai2, { model: "test-model" }); - - const fallbackAI = fallback([bound1, bound2]); - - const result = await fallbackAI.embed({ - input: "test text", - }); - - expect(result.embeddings).toBeDefined(); - expect(result.model).toBe("test-model"); - expect(console.warn).not.toHaveBeenCalled(); - }); - - it("should fallback to second adapter when first fails", async () => { - const adapter1 = new MockAdapter("adapter1", true, "Embedding error"); - const adapter2 = new MockAdapter("adapter2", false); - - const ai1 = ai(adapter1); - const ai2 = ai(adapter2); - - const bound1 = withModel(ai1, { model: "test-model" }); - const bound2 = withModel(ai2, { model: "test-model" }); - - const fallbackAI = fallback([bound1, bound2]); - - const result = await fallbackAI.embed({ - input: "test text", - }); - - expect(result.embeddings).toBeDefined(); - expect(console.warn).toHaveBeenCalledWith( - expect.stringContaining('Adapter "adapter1" failed'), - "Embedding error" - ); - }); - }); - - describe("fallback - summarize", () => { - it("should fallback to second adapter when first fails", async () => { - const adapter1 = new MockAdapter("adapter1", true, "Summarization error"); - const adapter2 = new MockAdapter("adapter2", false); - - const ai1 = ai(adapter1); - const ai2 = ai(adapter2); - - const bound1 = withModel(ai1, { model: "test-model" }); - const bound2 = withModel(ai2, { model: "test-model" }); - - const 
fallbackAI = fallback([bound1, bound2]); - - const result = await fallbackAI.summarize({ - text: "Long text to summarize", - }); - - expect(result.summary).toBe("Summary from adapter2"); - expect(console.warn).toHaveBeenCalledWith( - expect.stringContaining('Adapter "adapter1" failed'), - "Summarization error" - ); - }); - }); - - describe("fallback - multiple adapters", () => { - it("should try all adapters in order until one succeeds", async () => { - const adapter1 = new MockAdapter("adapter1", true, "Error 1"); - const adapter2 = new MockAdapter("adapter2", true, "Error 2"); - const adapter3 = new MockAdapter("adapter3", false); - - const ai1 = ai(adapter1); - const ai2 = ai(adapter2); - const ai3 = ai(adapter3); - - const bound1 = withModel(ai1, { model: "test-model" }); - const bound2 = withModel(ai2, { model: "test-model" }); - const bound3 = withModel(ai3, { model: "test-model" }); - - const fallbackAI = fallback([bound1, bound2, bound3]); - - const result = await fallbackAI.chatCompletion({ - messages: [{ role: "user", content: "Hello" }], - }); - - expect(result.content).toBe("Success from adapter3"); - expect(console.warn).toHaveBeenCalledTimes(2); - }); - }); -}); diff --git a/packages/typescript/ai-fallback/tsconfig.json b/packages/typescript/ai-fallback/tsconfig.json deleted file mode 100644 index 204ca8d3f..000000000 --- a/packages/typescript/ai-fallback/tsconfig.json +++ /dev/null @@ -1,10 +0,0 @@ -{ - "extends": "../../../tsconfig.json", - "compilerOptions": { - "outDir": "dist", - "rootDir": "src" - }, - "include": ["src/**/*.ts", "src/**/*.tsx"], - "exclude": ["node_modules", "dist", "**/*.config.ts"], - "references": [{ "path": "../ai" }] -} diff --git a/packages/typescript/ai-fallback/tsdown.config.ts b/packages/typescript/ai-fallback/tsdown.config.ts deleted file mode 100644 index 01597a963..000000000 --- a/packages/typescript/ai-fallback/tsdown.config.ts +++ /dev/null @@ -1,12 +0,0 @@ -import { defineConfig } from "tsdown"; - -export default 
defineConfig({ - entry: ["./src/index.ts"], - format: ["esm"], - unbundle: true, - dts: true, - sourcemap: true, - clean: true, - minify: false, -}); - diff --git a/packages/typescript/ai-fallback/vitest.config.ts b/packages/typescript/ai-fallback/vitest.config.ts deleted file mode 100644 index 4583b2da7..000000000 --- a/packages/typescript/ai-fallback/vitest.config.ts +++ /dev/null @@ -1,18 +0,0 @@ -import { defineConfig } from "vitest/config"; -import { resolve } from "path"; -import { fileURLToPath } from "url"; - -const __dirname = fileURLToPath(new URL(".", import.meta.url)); - -export default defineConfig({ - test: { - globals: true, - environment: "node", - }, - resolve: { - alias: { - "@tanstack/ai": resolve(__dirname, "../ai/src/index.ts"), - }, - }, -}); - diff --git a/packages/typescript/ai-gemini/README.md b/packages/typescript/ai-gemini/README.md new file mode 100644 index 000000000..7c4143074 --- /dev/null +++ b/packages/typescript/ai-gemini/README.md @@ -0,0 +1,104 @@ +
+ +### [Become a Sponsor!](https://github.com/sponsors/tannerlinsley/) +
+ +# TanStack AI + +A powerful, type-safe AI SDK for building AI-powered applications. + +- Provider-agnostic adapters (OpenAI, Anthropic, Gemini, Ollama, etc.) +- Chat completion, streaming, and agent loop strategies +- Headless chat state management with adapters (SSE, HTTP stream, custom) +- Type-safe tools with server/client execution + +### Read the docs → + +## Get Involved + +- We welcome issues and pull requests! +- Participate in [GitHub discussions](https://github.com/TanStack/ai/discussions) +- Chat with the community on [Discord](https://discord.com/invite/WrRKjPJ) +- See [CONTRIBUTING.md](./CONTRIBUTING.md) for setup instructions + +## Partners + + + + + + +
+<!-- partner logos: CodeRabbit, Cloudflare -->
+
+AI & you? +

+We're looking for TanStack AI Partners to join our mission! Partner with us to push the boundaries of TanStack AI and build amazing things together. +

+LET'S CHAT +
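The `ai-fallback` package deleted earlier in this diff implements sequential provider fallback: try each pre-bound adapter in order, surface per-adapter errors, and aggregate everything into an "All adapters failed" error when no adapter succeeds (see its tests above). A dependency-free sketch of that core loop, using hypothetical names (`Attempt`, `withFallback`) rather than the package's actual API:

```typescript
// One candidate in the fallback chain: a label plus an async factory.
interface Attempt<T> {
  name: string;
  run: () => Promise<T>;
}

// Try each attempt in order; the first one that resolves wins and later
// attempts are never started. If all fail, throw an aggregate error that
// lists every adapter's failure, mirroring the tests' assertions.
async function withFallback<T>(
  attempts: Array<Attempt<T>>,
  onError?: (name: string, error: Error) => void,
): Promise<T> {
  const failures: Array<string> = [];
  for (const attempt of attempts) {
    try {
      return await attempt.run();
    } catch (e) {
      const err = e instanceof Error ? e : new Error(String(e));
      onError?.(attempt.name, err); // matches the package's onError callback shape
      failures.push(`${attempt.name}: ${err.message}`);
    }
  }
  throw new Error(`All adapters failed: ${failures.join("; ")}`);
}

// Usage: primary fails, secondary answers.
(async () => {
  const result = await withFallback<string>([
    { name: "primary", run: async () => { throw new Error("rate limited"); } },
    { name: "secondary", run: async () => "ok from secondary" },
  ]);
  console.log(result);
})();
```

The design choice worth noting is that failures are collected rather than rethrown immediately, so the final error names every adapter that was tried; a `stopOnError`-style predicate (as in the deleted `FallbackConfig`) can short-circuit this loop for non-retryable errors like `401 Unauthorized`.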
+ +## Explore the TanStack Ecosystem + +- TanStack Config – Tooling for JS/TS packages +- TanStack DB – Reactive sync client store +- TanStack Devtools – Unified devtools panel +- TanStack Form – Type‑safe form state +- TanStack Pacer – Debouncing, throttling, batching +- TanStack Query – Async state & caching +- TanStack Ranger – Range & slider primitives +- TanStack Router – Type‑safe routing, caching & URL state +- TanStack Start – Full‑stack SSR & streaming +- TanStack Store – Reactive data store +- TanStack Table – Headless datagrids +- TanStack Virtual – Virtualized rendering + +… and more at TanStack.com Ā» + + diff --git a/packages/typescript/ai-gemini/package.json b/packages/typescript/ai-gemini/package.json index 28111ace4..5ec5c90f5 100644 --- a/packages/typescript/ai-gemini/package.json +++ b/packages/typescript/ai-gemini/package.json @@ -10,12 +10,12 @@ "directory": "packages/typescript/ai-gemini" }, "type": "module", - "module": "./dist/index.js", - "types": "./dist/index.d.ts", + "module": "./dist/esm/index.js", + "types": "./dist/esm/index.d.ts", "exports": { ".": { - "types": "./dist/index.d.ts", - "import": "./dist/index.js" + "types": "./dist/esm/index.d.ts", + "import": "./dist/esm/index.js" } }, "files": [ @@ -23,14 +23,14 @@ "src" ], "scripts": { - "build": "tsdown", - "dev": "tsdown --watch", - "test": "vitest run", - "test:watch": "vitest", - "test:coverage": "vitest run --coverage", - "clean": "rm -rf dist node_modules", - "typecheck": "tsc --noEmit", - "lint": "tsc --noEmit" + "build": "vite build", + "clean": "premove ./build ./dist", + "lint:fix": "eslint ./src --fix", + "test:build": "publint --strict", + "test:eslint": "eslint ./src", + "test:lib": "vitest", + "test:lib:dev": "pnpm test:lib --watch", + "test:types": "tsc" }, "keywords": [ "ai", @@ -44,13 +44,10 @@ "@tanstack/ai": "workspace:*" }, "devDependencies": { - "@types/node": "^22.10.2", - "@vitest/coverage-v8": "4.0.13", - "tsdown": "^0.15.9", - "typescript": "^5.7.2", - 
"vitest": "^4.0.13" + "@vitest/coverage-v8": "4.0.14", + "vite": "^7.2.4" }, "peerDependencies": { "@tanstack/ai": "workspace:*" } -} \ No newline at end of file +} diff --git a/packages/typescript/ai-gemini/src/gemini-adapter.ts b/packages/typescript/ai-gemini/src/gemini-adapter.ts index 123e76234..29ece9834 100644 --- a/packages/typescript/ai-gemini/src/gemini-adapter.ts +++ b/packages/typescript/ai-gemini/src/gemini-adapter.ts @@ -1,36 +1,41 @@ -import type { GenerateContentParameters } from "@google/genai"; -import { GoogleGenAI } from "@google/genai"; -import { - BaseAdapter, - type AIAdapterConfig, - type ChatCompletionOptions, - type ChatCompletionResult, - type SummarizationOptions, - type SummarizationResult, - type EmbeddingOptions, - type EmbeddingResult, - type ModelMessage, - type StreamChunk, -} from "@tanstack/ai"; -import { - GEMINI_MODELS, - GEMINI_EMBEDDING_MODELS, - type GeminiChatModelProviderOptionsByName, -} from "./model-meta"; -import { ExternalTextProviderOptions } from "./text/text-provider-options"; -import { convertToolsToProviderFormat } from "./tools/tool-converter"; +import { GoogleGenAI } from '@google/genai' +import { BaseAdapter } from '@tanstack/ai' +import { GEMINI_EMBEDDING_MODELS, GEMINI_MODELS } from './model-meta' +import { convertToolsToProviderFormat } from './tools/tool-converter' +import type { + AIAdapterConfig, + ChatStreamOptionsUnion, + EmbeddingOptions, + EmbeddingResult, + ModelMessage, + StreamChunk, + SummarizationOptions, + SummarizationResult, +} from '@tanstack/ai' +import type { GeminiChatModelProviderOptionsByName } from './model-meta' +import type { ExternalTextProviderOptions } from './text/text-provider-options' +import type { GenerateContentParameters } from '@google/genai' export interface GeminiAdapterConfig extends AIAdapterConfig { - apiKey: string; + apiKey: string } -export type GeminiModel = (typeof GEMINI_MODELS)[number]; /** * Gemini-specific provider options * Based on Google Generative AI SDK * 
@see https://ai.google.dev/api/rest/v1/GenerationConfig
  */
-export type GeminiProviderOptions = ExternalTextProviderOptions;
+export type GeminiProviderOptions = ExternalTextProviderOptions
+
+type ChatOptions = ChatStreamOptionsUnion<
+  BaseAdapter<
+    typeof GEMINI_MODELS,
+    typeof GEMINI_EMBEDDING_MODELS,
+    GeminiProviderOptions,
+    Record<string, unknown>,
+    GeminiChatModelProviderOptionsByName
+  >
+>
 
 export class GeminiAdapter extends BaseAdapter<
   typeof GEMINI_MODELS,
@@ -39,208 +44,190 @@ export class GeminiAdapter extends BaseAdapter<
   Record<string, unknown>,
   GeminiChatModelProviderOptionsByName
 > {
-  name = "gemini";
-  models = GEMINI_MODELS;
-  embeddingModels = GEMINI_EMBEDDING_MODELS;
-  declare _modelProviderOptionsByName: GeminiChatModelProviderOptionsByName;
-  private client: GoogleGenAI;
+  name = 'gemini'
+  models = GEMINI_MODELS
+  embeddingModels = GEMINI_EMBEDDING_MODELS
+  declare _modelProviderOptionsByName: GeminiChatModelProviderOptionsByName
+  private client: GoogleGenAI
 
   constructor(config: GeminiAdapterConfig) {
-    super(config);
+    super(config)
     this.client = new GoogleGenAI({
       apiKey: config.apiKey,
-    });
+    })
   }
 
-  async *chatStream(
-    options: ChatCompletionOptions
-  ): AsyncIterable<StreamChunk> {
+  async *chatStream(options: ChatOptions): AsyncIterable<StreamChunk> {
     // Map common options to Gemini format
-    const mappedOptions = this.mapCommonOptionsToGemini(options);
+    const mappedOptions = this.mapCommonOptionsToGemini(options)
 
-    const result = await this.client.models.generateContentStream(
-      mappedOptions
-    );
+    const result = await this.client.models.generateContentStream(mappedOptions)
 
-    yield* this.processStreamChunks(result, options.model);
+    yield* this.processStreamChunks(result, options.model)
   }
 
   async summarize(options: SummarizationOptions): Promise<SummarizationResult> {
-    const prompt = this.buildSummarizationPrompt(options, options.text);
+    const prompt = this.buildSummarizationPrompt(options, options.text)
 
     // Use models API like chatCompletion
     const result = await this.client.models.generateContent({
-      model:
options.model || "gemini-pro",
-      contents: [{ role: "user", parts: [{ text: prompt }] }],
+      model: options.model,
+      contents: [{ role: 'user', parts: [{ text: prompt }] }],
       config: {
         temperature: 0.3,
         maxOutputTokens: options.maxLength || 500,
       },
-    });
-
-    // Handle response structure (might have .response property or be direct)
-    let response: any;
-    if (result.response && typeof result.response.then === "function") {
-      response = await result.response;
-    } else if (result.candidates) {
-      response = result;
-    } else {
-      response = (result as any).response || result;
-    }
+    })
 
     // Extract text from candidates or use .text() method
-    let summary = "";
-    if (response.candidates?.[0]?.content?.parts) {
-      const parts = response.candidates[0].content.parts;
+    let summary = ''
+    if (result.candidates?.[0]?.content?.parts) {
+      const parts = result.candidates[0].content.parts
       for (const part of parts) {
         if (part.text) {
-          summary += part.text;
+          summary += part.text
         }
       }
     }
-    if (!summary && typeof response.text === "function") {
-      try {
-        summary = response.text() || "";
-      } catch {
-        // If .text() fails, summary remains empty
-      }
+    if (!summary && typeof result.text === 'string') {
+      summary = result.text
     }
 
-    const promptTokens = this.estimateTokens(prompt);
-    const completionTokens = this.estimateTokens(summary);
+    const promptTokens = this.estimateTokens(prompt)
+    const completionTokens = this.estimateTokens(summary)
 
     return {
       id: this.generateId(),
-      model: options.model || "gemini-pro",
+      model: options.model || 'gemini-pro',
       summary,
       usage: {
         promptTokens,
         completionTokens,
         totalTokens: promptTokens + completionTokens,
       },
-    };
+    }
   }
 
   async createEmbeddings(options: EmbeddingOptions): Promise<EmbeddingResult> {
     const inputs = Array.isArray(options.input)
       ?
options.input
-      : [options.input];
+      : [options.input]
 
     // According to docs: contents can be a string or array of strings
     // Response has embeddings (plural) array with values property
     const result = await this.client.models.embedContent({
       model: options.model,
       contents: inputs,
-    });
+    })
 
     // Extract embeddings from result.embeddings array
-    const embeddings: number[][] = [];
+    const embeddings: Array<Array<number>> = []
     if (result.embeddings && Array.isArray(result.embeddings)) {
       for (const embedding of result.embeddings) {
         if (embedding.values && Array.isArray(embedding.values)) {
-          embeddings.push(embedding.values);
+          embeddings.push(embedding.values)
        } else if (Array.isArray(embedding)) {
-          embeddings.push(embedding);
+          embeddings.push(embedding)
        }
      }
    }
 
     const promptTokens = inputs.reduce(
       (sum, input) => sum + this.estimateTokens(input),
-      0
-    );
+      0,
+    )
 
     return {
       id: this.generateId(),
-      model: options.model || "gemini-embedding-001",
+      model: options.model || 'gemini-embedding-001',
       embeddings,
       usage: {
         promptTokens,
         totalTokens: promptTokens,
       },
-    };
+    }
   }
 
   private buildSummarizationPrompt(
     options: SummarizationOptions,
-    text: string
+    text: string,
   ): string {
-    let prompt = "You are a professional summarizer. ";
+    let prompt = 'You are a professional summarizer. '
 
     switch (options.style) {
-      case "bullet-points":
-        prompt += "Provide a summary in bullet point format. ";
-        break;
-      case "paragraph":
-        prompt += "Provide a summary in paragraph format. ";
-        break;
-      case "concise":
-        prompt += "Provide a very concise summary in 1-2 sentences. ";
-        break;
+      case 'bullet-points':
+        prompt += 'Provide a summary in bullet point format. '
+        break
+      case 'paragraph':
+        prompt += 'Provide a summary in paragraph format. '
+        break
+      case 'concise':
+        prompt += 'Provide a very concise summary in 1-2 sentences. '
+        break
       default:
-        prompt += "Provide a clear and concise summary. ";
+        prompt += 'Provide a clear and concise summary. 
' } if (options.focus && options.focus.length > 0) { - prompt += `Focus on the following aspects: ${options.focus.join(", ")}. `; + prompt += `Focus on the following aspects: ${options.focus.join(', ')}. ` } - prompt += `\n\nText to summarize:\n${text}\n\nSummary:`; + prompt += `\n\nText to summarize:\n${text}\n\nSummary:` - return prompt; + return prompt } private estimateTokens(text: string): number { // Rough approximation: 1 token ā‰ˆ 4 characters - return Math.ceil(text.length / 4); + return Math.ceil(text.length / 4) } // TODO the proper type here is AsyncGenerator private async *processStreamChunks( result: AsyncIterable, - model: string + model: string, ): AsyncIterable { - const timestamp = Date.now(); - let accumulatedContent = ""; + const timestamp = Date.now() + let accumulatedContent = '' const toolCallMap = new Map< string, { name: string; args: string; index: number } - >(); - let nextToolIndex = 0; + >() + let nextToolIndex = 0 // Iterate over the stream result (it's already an AsyncGenerator) for await (const chunk of result) { // Check for errors in the chunk if (chunk.error) { - console.log("[GeminiAdapter] Error in chunk:", chunk.error); + console.log('[GeminiAdapter] Error in chunk:', chunk.error) yield { - type: "error", + type: 'error', id: this.generateId(), model, timestamp, error: { - message: chunk.error.message || "Unknown error", + message: chunk.error.message || 'Unknown error', code: chunk.error.code, }, - }; - return; + } + return } // Check if candidates array exists and has entries if (!chunk.candidates || chunk.candidates.length === 0) { // Skip empty chunks or check for finish reason in other places if (chunk.finishReason) { - const finishReason = chunk.finishReason as string; - let mappedFinishReason = finishReason; + const finishReason = chunk.finishReason as string + let mappedFinishReason = finishReason if ( - finishReason === "UNEXPECTED_TOOL_CALL" || - finishReason === "STOP" + finishReason === 'UNEXPECTED_TOOL_CALL' || + 
finishReason === 'STOP' ) { - mappedFinishReason = toolCallMap.size > 0 ? "tool_calls" : "stop"; + mappedFinishReason = toolCallMap.size > 0 ? 'tool_calls' : 'stop' } yield { - type: "done", + type: 'done', id: this.generateId(), model, timestamp, @@ -252,160 +239,160 @@ export class GeminiAdapter extends BaseAdapter< totalTokens: chunk.usageMetadata.totalTokenCount ?? 0, } : undefined, - }; + } } - continue; + continue } // Extract content from candidates[0].content.parts // Parts can contain text or functionCall if (chunk.candidates?.[0]?.content?.parts) { - const parts = chunk.candidates[0].content.parts; + const parts = chunk.candidates[0].content.parts for (const part of parts) { // Handle text content if (part.text) { - accumulatedContent += part.text; + accumulatedContent += part.text yield { - type: "content", + type: 'content', id: this.generateId(), model, timestamp, delta: part.text, content: accumulatedContent, - role: "assistant", - }; + role: 'assistant', + } } // Handle function calls (tool calls) // Check both camelCase (SDK) and snake_case (direct API) formats - const functionCall = part.functionCall || part.function_call; + const functionCall = part.functionCall || part.function_call if (functionCall) { const toolCallId = - functionCall.name || `call_${Date.now()}_${nextToolIndex}`; + functionCall.name || `call_${Date.now()}_${nextToolIndex}` const functionArgs = - functionCall.args || functionCall.arguments || {}; + functionCall.args || functionCall.arguments || {} // Check if we've seen this tool call before (for streaming args) - let toolCallData = toolCallMap.get(toolCallId); + let toolCallData = toolCallMap.get(toolCallId) if (!toolCallData) { toolCallData = { - name: functionCall.name || "", + name: functionCall.name || '', args: - typeof functionArgs === "string" + typeof functionArgs === 'string' ? 
              functionArgs
            : JSON.stringify(functionArgs || {}),
          index: nextToolIndex++,
-        };
-        toolCallMap.set(toolCallId, toolCallData);
+        }
+        toolCallMap.set(toolCallId, toolCallData)
       } else {
         // Merge arguments if streaming
         if (functionArgs) {
           try {
-            const existingArgs = JSON.parse(toolCallData.args);
+            const existingArgs = JSON.parse(toolCallData.args)
             const newArgs =
-              typeof functionArgs === "string"
+              typeof functionArgs === 'string'
                 ? JSON.parse(functionArgs)
-                : functionArgs;
-            const mergedArgs = { ...existingArgs, ...newArgs };
-            toolCallData.args = JSON.stringify(mergedArgs);
+                : functionArgs
+            const mergedArgs = { ...existingArgs, ...newArgs }
+            toolCallData.args = JSON.stringify(mergedArgs)
           } catch {
             // If parsing fails, use new args
             toolCallData.args =
-              typeof functionArgs === "string"
+              typeof functionArgs === 'string'
                 ? functionArgs
-                : JSON.stringify(functionArgs);
+                : JSON.stringify(functionArgs)
           }
         }
       }

       yield {
-        type: "tool_call",
+        type: 'tool_call',
         id: this.generateId(),
         model,
         timestamp,
         toolCall: {
           id: toolCallId,
-          type: "function",
+          type: 'function',
           function: {
             name: toolCallData.name,
             arguments: toolCallData.args,
           },
         },
         index: toolCallData.index,
-      };
+      }
       }
     }
   } else if (chunk.data) {
     // Fallback to chunk.data if available
-    accumulatedContent += chunk.data;
+    accumulatedContent += chunk.data

     yield {
-      type: "content",
+      type: 'content',
       id: this.generateId(),
       model,
       timestamp,
       delta: chunk.data,
       content: accumulatedContent,
-      role: "assistant",
-    };
+      role: 'assistant',
+    }
   }

   // Check for finish reason
   if (chunk.candidates?.[0]?.finishReason) {
-    const finishReason = chunk.candidates[0].finishReason as string;
+    const finishReason = chunk.candidates[0].finishReason as string
     // UNEXPECTED_TOOL_CALL means Gemini tried to call a function but it wasn't properly declared
     // This typically means there's an issue with the tool declaration format
     // We should map it to tool_calls to try to process it anyway
-    let mappedFinishReason = finishReason;
-    if (finishReason === "UNEXPECTED_TOOL_CALL") {
+    let mappedFinishReason = finishReason
+    if (finishReason === 'UNEXPECTED_TOOL_CALL') {
       // Try to extract function call from content.parts if available
       if (chunk.candidates[0].content?.parts) {
         for (const part of chunk.candidates[0].content.parts) {
-          const functionCall = part.functionCall || part.function_call;
+          const functionCall = part.functionCall || part.function_call
           if (functionCall) {
             // We found a function call - process it
             const toolCallId =
-              functionCall.name || `call_${Date.now()}_${nextToolIndex}`;
+              functionCall.name || `call_${Date.now()}_${nextToolIndex}`
             const functionArgs =
-              functionCall.args || functionCall.arguments || {};
+              functionCall.args || functionCall.arguments || {}
             toolCallMap.set(toolCallId, {
-              name: functionCall.name || "",
+              name: functionCall.name || '',
               args:
-                typeof functionArgs === "string"
+                typeof functionArgs === 'string'
                   ? functionArgs
                   : JSON.stringify(functionArgs || {}),
               index: nextToolIndex++,
-            });
+            })

             yield {
-              type: "tool_call",
+              type: 'tool_call',
               id: this.generateId(),
               model,
               timestamp,
               toolCall: {
                 id: toolCallId,
-                type: "function",
+                type: 'function',
                 function: {
-                  name: functionCall.name || "",
+                  name: functionCall.name || '',
                   arguments:
-                    typeof functionArgs === "string"
+                    typeof functionArgs === 'string'
                       ? functionArgs
                       : JSON.stringify(functionArgs || {}),
                 },
               },
               index: nextToolIndex - 1,
-            };
+            }
           }
         }
       }
-      mappedFinishReason = toolCallMap.size > 0 ? "tool_calls" : "stop";
-    } else if (finishReason === "STOP") {
-      mappedFinishReason = toolCallMap.size > 0 ? "tool_calls" : "stop";
+      mappedFinishReason = toolCallMap.size > 0 ? 'tool_calls' : 'stop'
+    } else if (finishReason === 'STOP') {
+      mappedFinishReason = toolCallMap.size > 0 ? 'tool_calls' : 'stop'
     }

     yield {
-      type: "done",
+      type: 'done',
       id: this.generateId(),
       model,
       timestamp,
@@ -417,45 +404,45 @@ export class GeminiAdapter extends BaseAdapter<
                 totalTokens: chunk.usageMetadata.totalTokenCount ?? 0,
               }
             : undefined,
-      };
+      }
     }
   }
   }

-  private formatMessages(messages: ModelMessage[]): Array<{
-    role: "user" | "model";
+  private formatMessages(messages: Array<ModelMessage>): Array<{
+    role: 'user' | 'model'
     parts: Array<{
-      text?: string;
-      functionCall?: { name: string; args: Record<string, unknown> };
-      functionResponse?: { name: string; response: Record<string, unknown> };
-    }>;
+      text?: string
+      functionCall?: { name: string; args: Record<string, unknown> }
+      functionResponse?: { name: string; response: Record<string, unknown> }
+    }>
   }> {
     return messages
-      .filter((m) => m.role !== "system") // Skip system messages
+      .filter((m) => m.role !== 'system') // Skip system messages
       .map((msg) => {
-        const role: "user" | "model" =
-          msg.role === "assistant" ? "model" : "user";
+        const role: 'user' | 'model' =
+          msg.role === 'assistant' ? 'model' : 'user'
         const parts: Array<{
-          text?: string;
-          functionCall?: { name: string; args: Record<string, unknown> };
-          functionResponse?: { name: string; response: Record<string, unknown> };
-        }> = [];
+          text?: string
+          functionCall?: { name: string; args: Record<string, unknown> }
+          functionResponse?: { name: string; response: Record<string, unknown> }
+        }> = []

         // Add text content if present
         if (msg.content) {
-          parts.push({ text: msg.content });
+          parts.push({ text: msg.content })
         }

         // Handle tool calls (from assistant)
-        if (msg.role === "assistant" && msg.toolCalls?.length) {
+        if (msg.role === 'assistant' && msg.toolCalls?.length) {
           for (const toolCall of msg.toolCalls) {
-            let parsedArgs: Record<string, unknown> = {};
+            let parsedArgs: Record<string, unknown> = {}
             try {
               parsedArgs = toolCall.function.arguments
                 ? JSON.parse(toolCall.function.arguments)
-                : {};
+                : {}
             } catch {
-              parsedArgs = toolCall.function.arguments as any;
+              parsedArgs = toolCall.function.arguments as any
             }

             parts.push({
@@ -463,37 +450,35 @@ export class GeminiAdapter extends BaseAdapter<
                 name: toolCall.function.name,
                 args: parsedArgs,
               },
-            });
+            })
           }
         }

         // Handle tool results (from tool role)
-        if (msg.role === "tool" && msg.toolCallId) {
+        if (msg.role === 'tool' && msg.toolCallId) {
           parts.push({
             functionResponse: {
               name: msg.toolCallId, // Gemini uses function name here
               response: {
-                content: msg.content || "",
+                content: msg.content || '',
               },
             },
-          });
+          })
         }

         return {
           role,
-          parts: parts.length > 0 ? parts : [{ text: "" }],
-        };
-      });
+          parts: parts.length > 0 ? parts : [{ text: '' }],
+        }
+      })
   }

   /**
    * Maps common options to Gemini-specific format
    * Handles translation of normalized options to Gemini's API format
    */
-  private mapCommonOptionsToGemini(
-    options: ChatCompletionOptions
-  ) {
-    const providerOpts = options.providerOptions;
+  private mapCommonOptionsToGemini(options: ChatOptions) {
+    const providerOpts = options.providerOptions
     const requestOptions: GenerateContentParameters = {
       model: options.model,
       contents: this.formatMessages(options.messages),
@@ -502,13 +487,13 @@ export class GeminiAdapter extends BaseAdapter<
         temperature: options.options?.temperature,
         topP: options.options?.topP,
         maxOutputTokens: options.options?.maxTokens,
-        systemInstruction: options.systemPrompts?.join("\n"),
+        systemInstruction: options.systemPrompts?.join('\n'),
         ...providerOpts?.generationConfig,
         tools: convertToolsToProviderFormat(options.tools),
       },
-    };
+    }

-    return requestOptions;
+    return requestOptions
   }
 }

@@ -530,9 +515,9 @@
  */
 export function createGemini(
   apiKey: string,
-  config?: Omit<GeminiAdapterConfig, "apiKey">
+  config?: Omit<GeminiAdapterConfig, 'apiKey'>,
 ): GeminiAdapter {
-  return new GeminiAdapter({ apiKey, ...config });
+  return new GeminiAdapter({ apiKey, ...config })
 }

 /**
@@ -553,21 +538,21 @@ export function createGemini(
  * ```
  */
 export function gemini(
-  config?: Omit<GeminiAdapterConfig, "apiKey">
+  config?: Omit<GeminiAdapterConfig, 'apiKey'>,
 ): GeminiAdapter {
   const env =
-    typeof globalThis !== "undefined" && (globalThis as any).window?.env
+    typeof globalThis !== 'undefined' && (globalThis as any).window?.env
       ? (globalThis as any).window.env
-      : typeof process !== "undefined"
-        ? process.env
-        : undefined;
-  const key = env?.GOOGLE_API_KEY || env?.GEMINI_API_KEY;
+      : typeof process !== 'undefined'
+        ? process.env
+        : undefined
+  const key = env?.GOOGLE_API_KEY || env?.GEMINI_API_KEY
   if (!key) {
     throw new Error(
-      "GOOGLE_API_KEY or GEMINI_API_KEY is required. Please set it in your environment variables or use createGemini(apiKey, config) instead."
-    );
+      'GOOGLE_API_KEY or GEMINI_API_KEY is required. Please set it in your environment variables or use createGemini(apiKey, config) instead.',
+    )
   }
-  return createGemini(key, config);
+  return createGemini(key, config)
 }
diff --git a/packages/typescript/ai-gemini/src/index.ts b/packages/typescript/ai-gemini/src/index.ts
index 70aadec25..38b8359e4 100644
--- a/packages/typescript/ai-gemini/src/index.ts
+++ b/packages/typescript/ai-gemini/src/index.ts
@@ -1,4 +1,8 @@
-export { GeminiAdapter, createGemini, gemini } from "./gemini-adapter";
-export type { GeminiAdapterConfig } from "./gemini-adapter";
-export type { GeminiChatModelProviderOptionsByName } from "./model-meta";
-export type { GeminiStructuredOutputOptions, GeminiThinkingOptions } from "./text/text-provider-options";
+export { GeminiAdapter, createGemini, gemini } from './gemini-adapter'
+export type { GeminiAdapterConfig } from './gemini-adapter'
+export type { GeminiChatModelProviderOptionsByName } from './model-meta'
+export type {
+  GeminiStructuredOutputOptions,
+  GeminiThinkingOptions,
+} from './text/text-provider-options'
+export type { GoogleGeminiTool } from './tools/index'
diff --git a/packages/typescript/ai-gemini/src/model-meta.ts b/packages/typescript/ai-gemini/src/model-meta.ts
index
954d6c7be..86017cfd9 100644 --- a/packages/typescript/ai-gemini/src/model-meta.ts +++ b/packages/typescript/ai-gemini/src/model-meta.ts @@ -1,764 +1,860 @@ -import type { - GeminiToolConfigOptions, - GeminiSafetyOptions, - GeminiGenerationConfigOptions, - GeminiCachedContentOptions, - GeminiStructuredOutputOptions, - GeminiThinkingOptions, -} from "./text/text-provider-options"; - -interface ModelMeta { - name: string; - supports: { - input: ("text" | "image" | "audio" | "video" | "pdf")[]; - output: ("text" | "image" | "audio" | "video")[]; - capabilities?: ("audio_generation" | "batch_api" | "caching" | "code_execution" | "file_search" | "function_calling" | "grounding_with_gmaps" | "image_generation" | "live_api" | "search_grounding" | "structured_output" | "thinking" | "url_context")[] - }; - max_input_tokens?: number; - max_output_tokens?: number; - knowledge_cutoff?: string; - pricing?: { - input: { - normal: number; - cached?: number; - }; - output: { - normal: number; - }; - }; - /** - * Type-level description of which provider options this model supports. 
- */ - providerOptions?: TProviderOptions; -} - - -const GEMINI_3_PRO = { - name: "gemini-3-pro-preview", - max_input_tokens: 1_048_576, - max_output_tokens: 65_536, - knowledge_cutoff: "2025-01-01", - supports: { - input: ["text", "image", "audio", "video", "pdf"], - output: ["text"], - capabilities: [ - "batch_api", - "caching", - "code_execution", - "file_search", - "function_calling", - "search_grounding", - "structured_output", - "thinking", - "url_context" - ] - }, - pricing: { - input: { - normal: 2.5, - }, - output: { - normal: 15 - } - } -} as const satisfies ModelMeta< - GeminiToolConfigOptions & - GeminiSafetyOptions & - GeminiGenerationConfigOptions & - GeminiCachedContentOptions & - GeminiStructuredOutputOptions & - GeminiThinkingOptions -> - - -const GEMINI_2_5_PRO = { - name: "gemini-2.5-pro", - max_input_tokens: 1_048_576, - max_output_tokens: 65_536, - knowledge_cutoff: "2025-01-01", - supports: { - input: ["text", "image", "audio", "video", "pdf"], - output: ["text"], - capabilities: [ - "batch_api", - "caching", - "code_execution", - "file_search", - "function_calling", - "grounding_with_gmaps", - "search_grounding", - "structured_output", - "thinking", - "url_context" - ] - }, - pricing: { - input: { - normal: 2.5, - }, - output: { - normal: 15 - } - } -} as const satisfies ModelMeta< - GeminiToolConfigOptions & - GeminiSafetyOptions & - GeminiGenerationConfigOptions & - GeminiCachedContentOptions & - GeminiStructuredOutputOptions & - GeminiThinkingOptions -> - -const GEMINI_2_5_PRO_TTS = { - name: "gemini-2.5-pro-preview-tts", - max_input_tokens: 8_192, - max_output_tokens: 16_384, - knowledge_cutoff: "2025-05-01", - supports: { - input: ["text",], - output: ["audio"], - capabilities: [ - "audio_generation", - "file_search" - ] - }, - pricing: { - input: { - normal: 2.5, - }, - output: { - normal: 15 - } - } -} as const satisfies ModelMeta - -const GEMINI_2_5_FLASH = { - name: "gemini-2.5-flash", - max_input_tokens: 1_048_576, - 
max_output_tokens: 65_536, - knowledge_cutoff: "2025-01-01", - supports: { - input: ["text", "image", "audio", "video"], - output: ["text"], - capabilities: [ - "batch_api", - "caching", - "code_execution", - "file_search", - "function_calling", - "grounding_with_gmaps", - "search_grounding", - "structured_output", - "thinking", - "url_context" - ] - }, - pricing: { - input: { - normal: 1, - }, - output: { - normal: 2.5 - } - } -} as const satisfies ModelMeta< - GeminiToolConfigOptions & - GeminiSafetyOptions & - GeminiGenerationConfigOptions & - GeminiCachedContentOptions & - GeminiStructuredOutputOptions & - GeminiThinkingOptions -> - -const GEMINI_2_5_FLASH_PREVIEW = { - name: "gemini-2.5-flash-preview-09-2025", - max_input_tokens: 1_048_576, - max_output_tokens: 65_536, - knowledge_cutoff: "2025-01-01", - supports: { - input: ["text", "image", "audio", "video"], - output: ["text"], - capabilities: [ - "batch_api", - "caching", - "code_execution", - "file_search", - "function_calling", - "search_grounding", - "structured_output", - "thinking", - "url_context" - ] - }, - pricing: { - input: { - normal: 1, - }, - output: { - normal: 2.5 - } - } -} as const satisfies ModelMeta< - GeminiToolConfigOptions & - GeminiSafetyOptions & - GeminiGenerationConfigOptions & - GeminiCachedContentOptions & - GeminiStructuredOutputOptions & - GeminiThinkingOptions -> - - -const GEMINI_2_5_FLASH_IMAGE = { - name: "gemini-2.5-flash-image", - max_input_tokens: 1_048_576, - max_output_tokens: 65_536, - knowledge_cutoff: "2025-06-01", - supports: { - input: ["text", "image"], - output: ["text", "image"], - capabilities: [ - "batch_api", - "caching", - "file_search", - "image_generation", - "structured_output", - ] - }, - pricing: { - input: { - normal: 0.3, - }, - output: { - normal: 0.4 - } - } -} as const satisfies ModelMeta - - -const GEMINI_2_5_FLASH_LIVE = { - name: "gemini-2.5-flash-native-audio-preview-09-2025", - max_input_tokens: 141_072, - max_output_tokens: 8_192, - 
knowledge_cutoff: "2025-01-01", - supports: { - input: ["text", "audio", "video"], - output: ["text", "audio"], - capabilities: [ - "audio_generation", - "file_search", - "function_calling", - "live_api", - "search_grounding", - "thinking" - ] - }, - pricing: { - // todo find this info - input: { - normal: 0, - }, - output: { - normal: 0 - } - } -} as const satisfies ModelMeta - - -const GEMINI_2_5_FLASH_TTS = { - name: "gemini-2.5-flash-preview-tts", - max_input_tokens: 8_192, - max_output_tokens: 16_384, - knowledge_cutoff: "2025-05-01", - supports: { - input: ["text",], - output: ["audio"], - capabilities: [ - "audio_generation", - "batch_api", - "file_search" - ] - }, - pricing: { - input: { - normal: 1, - }, - output: { - normal: 2.5 - } - } -} as const satisfies ModelMeta - - -const GEMINI_2_5_FLASH_LITE = { - name: "gemini-2.5-flash-lite", - max_input_tokens: 1_048_576, - max_output_tokens: 65_536, - knowledge_cutoff: "2025-01-01", - supports: { - input: ["text", "image", "audio", "video", "pdf"], - output: ["text"], - capabilities: [ - "batch_api", - "caching", - "code_execution", - "function_calling", - "grounding_with_gmaps", - "search_grounding", - "structured_output", - "thinking", - "url_context" - ] - }, - pricing: { - input: { - normal: 0.1, - }, - output: { - normal: 0.4 - } - } -} as const satisfies ModelMeta< - GeminiToolConfigOptions & - GeminiSafetyOptions & - GeminiGenerationConfigOptions & - GeminiCachedContentOptions & - GeminiStructuredOutputOptions & - GeminiThinkingOptions -> - -const GEMINI_2_5_FLASH_LITE_PREVIEW = { - name: "gemini-2.5-flash-lite-preview-09-2025", - max_input_tokens: 1_048_576, - max_output_tokens: 65_536, - knowledge_cutoff: "2025-01-01", - supports: { - input: ["text", "image", "audio", "video", "pdf"], - output: ["text"], - capabilities: [ - "batch_api", - "caching", - "code_execution", - "function_calling", - "search_grounding", - "structured_output", - "thinking", - "url_context" - ] - }, - pricing: { - input: { - 
normal: 0.1, - }, - output: { - normal: 0.4 - } - } -} as const satisfies ModelMeta< - GeminiToolConfigOptions & - GeminiSafetyOptions & - GeminiGenerationConfigOptions & - GeminiCachedContentOptions & - GeminiStructuredOutputOptions & - GeminiThinkingOptions -> - - -const GEMINI_2_FLASH = { - name: "gemini-2.0-flash", - max_input_tokens: 1_048_576, - max_output_tokens: 8_192, - knowledge_cutoff: "2024-08-01", - supports: { - input: ["text", "image", "audio", "video"], - output: ["text"], - capabilities: [ - "batch_api", - "caching", - "code_execution", - "function_calling", - "grounding_with_gmaps", - "live_api", - "search_grounding", - "structured_output" - ] - }, - pricing: { - input: { - normal: 0.1, - }, - output: { - normal: 0.4 - } - } -} as const satisfies ModelMeta< - GeminiToolConfigOptions & - GeminiSafetyOptions & - GeminiGenerationConfigOptions & - GeminiCachedContentOptions & - GeminiStructuredOutputOptions -> - - -const GEMINI_2_FLASH_IMAGE = { - name: "gemini-2.0-flash-preview-image-generation", - max_input_tokens: 32_768, - max_output_tokens: 8_192, - knowledge_cutoff: "2024-08-01", - supports: { - input: ["text", "image", "audio", "video"], - output: ["text"], - capabilities: [ - "batch_api", - "caching", - "image_generation", - "structured_output" - ] - }, - pricing: { - input: { - normal: 0.1, - }, - output: { - normal: 0.039 - } - } -} as const satisfies ModelMeta - - -const GEMINI_2_FLASH_LIVE = { - name: "gemini-2.0-flash-live-001", - max_input_tokens: 1_048_576, - max_output_tokens: 8_192, - knowledge_cutoff: "2024-08-01", - supports: { - input: ["text", "audio", "video"], - output: ["text", "audio"], - capabilities: [ - "audio_generation", - "code_execution", - "function_calling", - "live_api", - "search_grounding", - "structured_output", - "url_context" - ] - }, - pricing: { - // todo find this info - input: { - normal: 0, - }, - output: { - normal: 0 - } - } -} as const satisfies ModelMeta - - -const GEMINI_2_FLASH_LITE = { - name: 
"gemini-2.0-flash-lite", - max_input_tokens: 1_048_576, - max_output_tokens: 8_192, - knowledge_cutoff: "2024-08-01", - supports: { - input: ["text", "audio", "video", "image"], - output: ["text"], - capabilities: [ - "batch_api", - "caching", - "function_calling", - "structured_output" - ] - }, - pricing: { - input: { - normal: 0.075, - }, - output: { - normal: 0.3 - } - } -} as const satisfies ModelMeta< - GeminiToolConfigOptions & - GeminiSafetyOptions & - GeminiGenerationConfigOptions & - GeminiCachedContentOptions & - GeminiStructuredOutputOptions -> - -const IMAGEN_4_GENERATE = { - name: "imagen-4.0-generate-001", - max_input_tokens: 480, - max_output_tokens: 4, - supports: { - input: ["text"], - output: ["image"], - - }, - pricing: { - input: { - normal: 0, - }, - output: { - normal: 0.4 - } - } -} as const satisfies ModelMeta - -const IMAGEN_4_GENERATE_ULTRA = { - name: "imagen-4.0-ultra-generate-001", - max_input_tokens: 480, - max_output_tokens: 4, - supports: { - input: ["text"], - output: ["image"], - - }, - pricing: { - input: { - normal: 0, - }, - output: { - normal: 0.6 - } - } -} as const satisfies ModelMeta - - -const IMAGEN_4_GENERATE_FAST = { - name: "imagen-4.0-fast-generate-001", - max_input_tokens: 480, - max_output_tokens: 4, - supports: { - input: ["text"], - output: ["image"], - - }, - pricing: { - input: { - normal: 0, - }, - output: { - normal: 0.2 - } - } -} as const satisfies ModelMeta - - -const IMAGEN_3 = { - name: "imagen-3.0-generate-002", - max_output_tokens: 4, - supports: { - input: ["text"], - output: ["image"], - }, - pricing: { - input: { - normal: 0, - }, - output: { - normal: 0.03 - } - } -} as const satisfies ModelMeta - -const VEO_3_1_PREVIEW = { - name: "veo-3.1-generate-preview", - max_input_tokens: 1024, - max_output_tokens: 1, - supports: { - input: ["text", "image"], - output: ["video", "audio"], - }, - pricing: { - input: { - normal: 0, - }, - output: { - normal: 0.4 - } - } -} as const satisfies ModelMeta - - -const 
VEO_3_1_FAST_PREVIEW = { - name: "veo-3.1-fast-generate-preview", - max_input_tokens: 1024, - max_output_tokens: 1, - supports: { - input: ["text", "image"], - output: ["video", "audio"], - }, - pricing: { - input: { - normal: 0, - }, - output: { - normal: 0.15 - } - } -} as const satisfies ModelMeta - -const VEO_3 = { - name: "veo-3.0-generate-001", - max_input_tokens: 1024, - max_output_tokens: 1, - supports: { - input: ["text", "image"], - output: ["video", "audio"], - }, - pricing: { - input: { - normal: 0, - }, - output: { - normal: 0.4 - } - } -} as const satisfies ModelMeta - - -const VEO_3_FAST = { - name: "veo-3.0-fast-generate-001", - max_input_tokens: 1024, - max_output_tokens: 1, - supports: { - input: ["text", "image"], - output: ["video", "audio"], - }, - pricing: { - input: { - normal: 0, - }, - output: { - normal: 0.15 - } - } -} as const satisfies ModelMeta - - -const VEO_2 = { - name: "veo-2.0-generate-001", - max_output_tokens: 2, - supports: { - input: ["text", "image"], - output: ["video",], - }, - pricing: { - input: { - normal: 0, - }, - output: { - normal: 0.35 - } - } -} as const satisfies ModelMeta - -const GEMINI_EMBEDDING = { - name: "gemini-embedding-001", - max_input_tokens: 2048, - supports: { - input: ["text"], - output: ["text"], - }, - pricing: { - input: { - normal: 0, - }, - output: { - normal: 0.15 - } - } -} as const satisfies ModelMeta; - -export const GEMINI_MODEL_META = { - [GEMINI_3_PRO.name]: GEMINI_3_PRO, - [GEMINI_2_5_PRO.name]: GEMINI_2_5_PRO, - [GEMINI_2_5_PRO_TTS.name]: GEMINI_2_5_PRO_TTS, - [GEMINI_2_5_FLASH.name]: GEMINI_2_5_FLASH, - [GEMINI_2_5_FLASH_PREVIEW.name]: GEMINI_2_5_FLASH_PREVIEW, - [GEMINI_2_5_FLASH_IMAGE.name]: GEMINI_2_5_FLASH_IMAGE, - [GEMINI_2_5_FLASH_LIVE.name]: GEMINI_2_5_FLASH_LIVE, - [GEMINI_2_5_FLASH_TTS.name]: GEMINI_2_5_FLASH_TTS, - [GEMINI_2_5_FLASH_LITE.name]: GEMINI_2_5_FLASH_LITE, - [GEMINI_2_5_FLASH_LITE_PREVIEW.name]: GEMINI_2_5_FLASH_LITE_PREVIEW, - [GEMINI_2_FLASH.name]: 
GEMINI_2_FLASH, - [GEMINI_2_FLASH_IMAGE.name]: GEMINI_2_FLASH_IMAGE, - [GEMINI_2_FLASH_LIVE.name]: GEMINI_2_FLASH_LIVE, - [GEMINI_2_FLASH_LITE.name]: GEMINI_2_FLASH_LITE, - [IMAGEN_4_GENERATE.name]: IMAGEN_4_GENERATE, - [IMAGEN_4_GENERATE_ULTRA.name]: IMAGEN_4_GENERATE_ULTRA, - [IMAGEN_4_GENERATE_FAST.name]: IMAGEN_4_GENERATE_FAST, - [IMAGEN_3.name]: IMAGEN_3, - [VEO_3_1_PREVIEW.name]: VEO_3_1_PREVIEW, - [VEO_3_1_FAST_PREVIEW.name]: VEO_3_1_FAST_PREVIEW, - [VEO_3.name]: VEO_3, - [VEO_3_FAST.name]: VEO_3_FAST, - [VEO_2.name]: VEO_2, - [GEMINI_EMBEDDING.name]: GEMINI_EMBEDDING, -} as const; - -export type GeminiModelMetaMap = typeof GEMINI_MODEL_META; - -export type GeminiModelProviderOptions< - TModel extends keyof GeminiModelMetaMap -> = GeminiModelMetaMap[TModel] extends ModelMeta - ? TProviderOptions - : unknown; - -export const GEMINI_MODELS = [ - GEMINI_3_PRO.name, - GEMINI_2_5_PRO.name, - GEMINI_2_5_FLASH.name, - GEMINI_2_5_FLASH_PREVIEW.name, - GEMINI_2_5_FLASH_LITE.name, - GEMINI_2_5_FLASH_LITE_PREVIEW.name, - GEMINI_2_FLASH.name, - GEMINI_2_FLASH_LITE.name, -] as const - - -export const GEMINI_IMAGE_MODELS = [ - GEMINI_2_5_FLASH_IMAGE.name, - GEMINI_2_FLASH_IMAGE.name, - IMAGEN_3.name, - IMAGEN_4_GENERATE.name, - IMAGEN_4_GENERATE_FAST.name, - IMAGEN_4_GENERATE_ULTRA.name - -] as const; - -export const GEMINI_EMBEDDING_MODELS = [ - GEMINI_EMBEDDING.name -] as const; - -export const GEMINI_AUDIO_MODELS = [ - GEMINI_2_5_PRO_TTS.name, - GEMINI_2_5_FLASH_TTS.name, - GEMINI_2_5_FLASH_LIVE.name, - GEMINI_2_FLASH_LIVE.name -] as const; - -export const GEMINI_VIDEO_MODELS = [ - VEO_3_1_PREVIEW.name, - VEO_3_1_FAST_PREVIEW.name, - VEO_3.name, - VEO_3_FAST.name, - VEO_2.name -] as const; - -export type GeminiChatModels = typeof GEMINI_MODELS[number]; - -// Manual type map for per-model provider options -export type GeminiChatModelProviderOptionsByName = { - // Models with thinking and structured output support - [GEMINI_3_PRO.name]: GeminiToolConfigOptions & 
GeminiSafetyOptions & GeminiGenerationConfigOptions & GeminiCachedContentOptions & GeminiStructuredOutputOptions & GeminiThinkingOptions; - [GEMINI_2_5_PRO.name]: GeminiToolConfigOptions & GeminiSafetyOptions & GeminiGenerationConfigOptions & GeminiCachedContentOptions & GeminiStructuredOutputOptions & GeminiThinkingOptions; - [GEMINI_2_5_FLASH.name]: GeminiToolConfigOptions & GeminiSafetyOptions & GeminiGenerationConfigOptions & GeminiCachedContentOptions & GeminiStructuredOutputOptions & GeminiThinkingOptions; - [GEMINI_2_5_FLASH_PREVIEW.name]: GeminiToolConfigOptions & GeminiSafetyOptions & GeminiGenerationConfigOptions & GeminiCachedContentOptions & GeminiStructuredOutputOptions & GeminiThinkingOptions; - [GEMINI_2_5_FLASH_LITE.name]: GeminiToolConfigOptions & GeminiSafetyOptions & GeminiGenerationConfigOptions & GeminiCachedContentOptions & GeminiStructuredOutputOptions & GeminiThinkingOptions; - [GEMINI_2_5_FLASH_LITE_PREVIEW.name]: GeminiToolConfigOptions & GeminiSafetyOptions & GeminiGenerationConfigOptions & GeminiCachedContentOptions & GeminiStructuredOutputOptions & GeminiThinkingOptions; - // Models with structured output but no thinking support - [GEMINI_2_FLASH.name]: GeminiToolConfigOptions & GeminiSafetyOptions & GeminiGenerationConfigOptions & GeminiCachedContentOptions & GeminiStructuredOutputOptions; - [GEMINI_2_FLASH_LITE.name]: GeminiToolConfigOptions & GeminiSafetyOptions & GeminiGenerationConfigOptions & GeminiCachedContentOptions & GeminiStructuredOutputOptions; -}; \ No newline at end of file +import type { + GeminiCachedContentOptions, + GeminiGenerationConfigOptions, + GeminiSafetyOptions, + GeminiStructuredOutputOptions, + GeminiThinkingOptions, + GeminiToolConfigOptions, +} from './text/text-provider-options' + +interface ModelMeta { + name: string + supports: { + input: Array<'text' | 'image' | 'audio' | 'video' | 'pdf'> + output: Array<'text' | 'image' | 'audio' | 'video'> + capabilities?: Array< + | 'audio_generation' + | 'batch_api' 
+ | 'caching' + | 'code_execution' + | 'file_search' + | 'function_calling' + | 'grounding_with_gmaps' + | 'image_generation' + | 'live_api' + | 'search_grounding' + | 'structured_output' + | 'thinking' + | 'url_context' + > + } + max_input_tokens?: number + max_output_tokens?: number + knowledge_cutoff?: string + pricing?: { + input: { + normal: number + cached?: number + } + output: { + normal: number + } + } + /** + * Type-level description of which provider options this model supports. + */ + providerOptions?: TProviderOptions +} + +const GEMINI_3_PRO = { + name: 'gemini-3-pro-preview', + max_input_tokens: 1_048_576, + max_output_tokens: 65_536, + knowledge_cutoff: '2025-01-01', + supports: { + input: ['text', 'image', 'audio', 'video', 'pdf'], + output: ['text'], + capabilities: [ + 'batch_api', + 'caching', + 'code_execution', + 'file_search', + 'function_calling', + 'search_grounding', + 'structured_output', + 'thinking', + 'url_context', + ], + }, + pricing: { + input: { + normal: 2.5, + }, + output: { + normal: 15, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions & + GeminiStructuredOutputOptions & + GeminiThinkingOptions +> + +const GEMINI_2_5_PRO = { + name: 'gemini-2.5-pro', + max_input_tokens: 1_048_576, + max_output_tokens: 65_536, + knowledge_cutoff: '2025-01-01', + supports: { + input: ['text', 'image', 'audio', 'video', 'pdf'], + output: ['text'], + capabilities: [ + 'batch_api', + 'caching', + 'code_execution', + 'file_search', + 'function_calling', + 'grounding_with_gmaps', + 'search_grounding', + 'structured_output', + 'thinking', + 'url_context', + ], + }, + pricing: { + input: { + normal: 2.5, + }, + output: { + normal: 15, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions & + GeminiStructuredOutputOptions & + 
GeminiThinkingOptions +> + +/* const GEMINI_2_5_PRO_TTS = { + name: 'gemini-2.5-pro-preview-tts', + max_input_tokens: 8_192, + max_output_tokens: 16_384, + knowledge_cutoff: '2025-05-01', + supports: { + input: ['text'], + output: ['audio'], + capabilities: ['audio_generation', 'file_search'], + }, + pricing: { + input: { + normal: 2.5, + }, + output: { + normal: 15, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions +> */ + +const GEMINI_2_5_FLASH = { + name: 'gemini-2.5-flash', + max_input_tokens: 1_048_576, + max_output_tokens: 65_536, + knowledge_cutoff: '2025-01-01', + supports: { + input: ['text', 'image', 'audio', 'video'], + output: ['text'], + capabilities: [ + 'batch_api', + 'caching', + 'code_execution', + 'file_search', + 'function_calling', + 'grounding_with_gmaps', + 'search_grounding', + 'structured_output', + 'thinking', + 'url_context', + ], + }, + pricing: { + input: { + normal: 1, + }, + output: { + normal: 2.5, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions & + GeminiStructuredOutputOptions & + GeminiThinkingOptions +> + +const GEMINI_2_5_FLASH_PREVIEW = { + name: 'gemini-2.5-flash-preview-09-2025', + max_input_tokens: 1_048_576, + max_output_tokens: 65_536, + knowledge_cutoff: '2025-01-01', + supports: { + input: ['text', 'image', 'audio', 'video'], + output: ['text'], + capabilities: [ + 'batch_api', + 'caching', + 'code_execution', + 'file_search', + 'function_calling', + 'search_grounding', + 'structured_output', + 'thinking', + 'url_context', + ], + }, + pricing: { + input: { + normal: 1, + }, + output: { + normal: 2.5, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions & + GeminiStructuredOutputOptions & + 
GeminiThinkingOptions +> +/* +const GEMINI_2_5_FLASH_IMAGE = { + name: 'gemini-2.5-flash-image', + max_input_tokens: 1_048_576, + max_output_tokens: 65_536, + knowledge_cutoff: '2025-06-01', + supports: { + input: ['text', 'image'], + output: ['text', 'image'], + capabilities: [ + 'batch_api', + 'caching', + 'file_search', + 'image_generation', + 'structured_output', + ], + }, + pricing: { + input: { + normal: 0.3, + }, + output: { + normal: 0.4, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions +> + +const GEMINI_2_5_FLASH_LIVE = { + name: 'gemini-2.5-flash-native-audio-preview-09-2025', + max_input_tokens: 141_072, + max_output_tokens: 8_192, + knowledge_cutoff: '2025-01-01', + supports: { + input: ['text', 'audio', 'video'], + output: ['text', 'audio'], + capabilities: [ + 'audio_generation', + 'file_search', + 'function_calling', + 'live_api', + 'search_grounding', + 'thinking', + ], + }, + pricing: { + // todo find this info + input: { + normal: 0, + }, + output: { + normal: 0, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions & + GeminiThinkingOptions +> + +const GEMINI_2_5_FLASH_TTS = { + name: 'gemini-2.5-flash-preview-tts', + max_input_tokens: 8_192, + max_output_tokens: 16_384, + knowledge_cutoff: '2025-05-01', + supports: { + input: ['text'], + output: ['audio'], + capabilities: ['audio_generation', 'batch_api', 'file_search'], + }, + pricing: { + input: { + normal: 1, + }, + output: { + normal: 2.5, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions +> */ + +const GEMINI_2_5_FLASH_LITE = { + name: 'gemini-2.5-flash-lite', + max_input_tokens: 1_048_576, + max_output_tokens: 65_536, + knowledge_cutoff: '2025-01-01', + supports: { + input: 
['text', 'image', 'audio', 'video', 'pdf'], + output: ['text'], + capabilities: [ + 'batch_api', + 'caching', + 'code_execution', + 'function_calling', + 'grounding_with_gmaps', + 'search_grounding', + 'structured_output', + 'thinking', + 'url_context', + ], + }, + pricing: { + input: { + normal: 0.1, + }, + output: { + normal: 0.4, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions & + GeminiStructuredOutputOptions & + GeminiThinkingOptions +> + +const GEMINI_2_5_FLASH_LITE_PREVIEW = { + name: 'gemini-2.5-flash-lite-preview-09-2025', + max_input_tokens: 1_048_576, + max_output_tokens: 65_536, + knowledge_cutoff: '2025-01-01', + supports: { + input: ['text', 'image', 'audio', 'video', 'pdf'], + output: ['text'], + capabilities: [ + 'batch_api', + 'caching', + 'code_execution', + 'function_calling', + 'search_grounding', + 'structured_output', + 'thinking', + 'url_context', + ], + }, + pricing: { + input: { + normal: 0.1, + }, + output: { + normal: 0.4, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions & + GeminiStructuredOutputOptions & + GeminiThinkingOptions +> + +const GEMINI_2_FLASH = { + name: 'gemini-2.0-flash', + max_input_tokens: 1_048_576, + max_output_tokens: 8_192, + knowledge_cutoff: '2024-08-01', + supports: { + input: ['text', 'image', 'audio', 'video'], + output: ['text'], + capabilities: [ + 'batch_api', + 'caching', + 'code_execution', + 'function_calling', + 'grounding_with_gmaps', + 'live_api', + 'search_grounding', + 'structured_output', + ], + }, + pricing: { + input: { + normal: 0.1, + }, + output: { + normal: 0.4, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions & + GeminiStructuredOutputOptions +> +/* +const 
GEMINI_2_FLASH_IMAGE = { + name: 'gemini-2.0-flash-preview-image-generation', + max_input_tokens: 32_768, + max_output_tokens: 8_192, + knowledge_cutoff: '2024-08-01', + supports: { + input: ['text', 'image', 'audio', 'video'], + output: ['text'], + capabilities: [ + 'batch_api', + 'caching', + 'image_generation', + 'structured_output', + ], + }, + pricing: { + input: { + normal: 0.1, + }, + output: { + normal: 0.039, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions +> */ +/* +const GEMINI_2_FLASH_LIVE = { + name: 'gemini-2.0-flash-live-001', + max_input_tokens: 1_048_576, + max_output_tokens: 8_192, + knowledge_cutoff: '2024-08-01', + supports: { + input: ['text', 'audio', 'video'], + output: ['text', 'audio'], + capabilities: [ + 'audio_generation', + 'code_execution', + 'function_calling', + 'live_api', + 'search_grounding', + 'structured_output', + 'url_context', + ], + }, + pricing: { + // todo find this info + input: { + normal: 0, + }, + output: { + normal: 0, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions +> */ + +const GEMINI_2_FLASH_LITE = { + name: 'gemini-2.0-flash-lite', + max_input_tokens: 1_048_576, + max_output_tokens: 8_192, + knowledge_cutoff: '2024-08-01', + supports: { + input: ['text', 'audio', 'video', 'image'], + output: ['text'], + capabilities: [ + 'batch_api', + 'caching', + 'function_calling', + 'structured_output', + ], + }, + pricing: { + input: { + normal: 0.075, + }, + output: { + normal: 0.3, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions & + GeminiStructuredOutputOptions +> + +/* const IMAGEN_4_GENERATE = { + name: 'imagen-4.0-generate-001', + max_input_tokens: 480, + max_output_tokens: 4, + supports: { + 
input: ['text'], + output: ['image'], + }, + pricing: { + input: { + normal: 0, + }, + output: { + normal: 0.4, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions +> + +const IMAGEN_4_GENERATE_ULTRA = { + name: 'imagen-4.0-ultra-generate-001', + max_input_tokens: 480, + max_output_tokens: 4, + supports: { + input: ['text'], + output: ['image'], + }, + pricing: { + input: { + normal: 0, + }, + output: { + normal: 0.6, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions +> + +const IMAGEN_4_GENERATE_FAST = { + name: 'imagen-4.0-fast-generate-001', + max_input_tokens: 480, + max_output_tokens: 4, + supports: { + input: ['text'], + output: ['image'], + }, + pricing: { + input: { + normal: 0, + }, + output: { + normal: 0.2, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions +> + +const IMAGEN_3 = { + name: 'imagen-3.0-generate-002', + max_output_tokens: 4, + supports: { + input: ['text'], + output: ['image'], + }, + pricing: { + input: { + normal: 0, + }, + output: { + normal: 0.03, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions +> + +const VEO_3_1_PREVIEW = { + name: 'veo-3.1-generate-preview', + max_input_tokens: 1024, + max_output_tokens: 1, + supports: { + input: ['text', 'image'], + output: ['video', 'audio'], + }, + pricing: { + input: { + normal: 0, + }, + output: { + normal: 0.4, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions +> + +const VEO_3_1_FAST_PREVIEW = { + name: 'veo-3.1-fast-generate-preview', + max_input_tokens: 1024, + 
max_output_tokens: 1, + supports: { + input: ['text', 'image'], + output: ['video', 'audio'], + }, + pricing: { + input: { + normal: 0, + }, + output: { + normal: 0.15, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions +> + +const VEO_3 = { + name: 'veo-3.0-generate-001', + max_input_tokens: 1024, + max_output_tokens: 1, + supports: { + input: ['text', 'image'], + output: ['video', 'audio'], + }, + pricing: { + input: { + normal: 0, + }, + output: { + normal: 0.4, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions +> + +const VEO_3_FAST = { + name: 'veo-3.0-fast-generate-001', + max_input_tokens: 1024, + max_output_tokens: 1, + supports: { + input: ['text', 'image'], + output: ['video', 'audio'], + }, + pricing: { + input: { + normal: 0, + }, + output: { + normal: 0.15, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions +> + +const VEO_2 = { + name: 'veo-2.0-generate-001', + max_output_tokens: 2, + supports: { + input: ['text', 'image'], + output: ['video'], + }, + pricing: { + input: { + normal: 0, + }, + output: { + normal: 0.35, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions +> */ + +const GEMINI_EMBEDDING = { + name: 'gemini-embedding-001', + max_input_tokens: 2048, + supports: { + input: ['text'], + output: ['text'], + }, + pricing: { + input: { + normal: 0, + }, + output: { + normal: 0.15, + }, + }, +} as const satisfies ModelMeta< + GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions +> + +/* const GEMINI_MODEL_META = { + [GEMINI_3_PRO.name]: GEMINI_3_PRO, + 
[GEMINI_2_5_PRO.name]: GEMINI_2_5_PRO, + [GEMINI_2_5_PRO_TTS.name]: GEMINI_2_5_PRO_TTS, + [GEMINI_2_5_FLASH.name]: GEMINI_2_5_FLASH, + [GEMINI_2_5_FLASH_PREVIEW.name]: GEMINI_2_5_FLASH_PREVIEW, + [GEMINI_2_5_FLASH_IMAGE.name]: GEMINI_2_5_FLASH_IMAGE, + [GEMINI_2_5_FLASH_LIVE.name]: GEMINI_2_5_FLASH_LIVE, + [GEMINI_2_5_FLASH_TTS.name]: GEMINI_2_5_FLASH_TTS, + [GEMINI_2_5_FLASH_LITE.name]: GEMINI_2_5_FLASH_LITE, + [GEMINI_2_5_FLASH_LITE_PREVIEW.name]: GEMINI_2_5_FLASH_LITE_PREVIEW, + [GEMINI_2_FLASH.name]: GEMINI_2_FLASH, + [GEMINI_2_FLASH_IMAGE.name]: GEMINI_2_FLASH_IMAGE, + [GEMINI_2_FLASH_LIVE.name]: GEMINI_2_FLASH_LIVE, + [GEMINI_2_FLASH_LITE.name]: GEMINI_2_FLASH_LITE, + [IMAGEN_4_GENERATE.name]: IMAGEN_4_GENERATE, + [IMAGEN_4_GENERATE_ULTRA.name]: IMAGEN_4_GENERATE_ULTRA, + [IMAGEN_4_GENERATE_FAST.name]: IMAGEN_4_GENERATE_FAST, + [IMAGEN_3.name]: IMAGEN_3, + [VEO_3_1_PREVIEW.name]: VEO_3_1_PREVIEW, + [VEO_3_1_FAST_PREVIEW.name]: VEO_3_1_FAST_PREVIEW, + [VEO_3.name]: VEO_3, + [VEO_3_FAST.name]: VEO_3_FAST, + [VEO_2.name]: VEO_2, + [GEMINI_EMBEDDING.name]: GEMINI_EMBEDDING, +} as const */ + +export const GEMINI_MODELS = [ + GEMINI_3_PRO.name, + GEMINI_2_5_PRO.name, + GEMINI_2_5_FLASH.name, + GEMINI_2_5_FLASH_PREVIEW.name, + GEMINI_2_5_FLASH_LITE.name, + GEMINI_2_5_FLASH_LITE_PREVIEW.name, + GEMINI_2_FLASH.name, + GEMINI_2_FLASH_LITE.name, +] as const + +/* const GEMINI_IMAGE_MODELS = [ + GEMINI_2_5_FLASH_IMAGE.name, + GEMINI_2_FLASH_IMAGE.name, + IMAGEN_3.name, + IMAGEN_4_GENERATE.name, + IMAGEN_4_GENERATE_FAST.name, + IMAGEN_4_GENERATE_ULTRA.name, +] as const */ + +export const GEMINI_EMBEDDING_MODELS = [GEMINI_EMBEDDING.name] as const + +/* const GEMINI_AUDIO_MODELS = [ + GEMINI_2_5_PRO_TTS.name, + GEMINI_2_5_FLASH_TTS.name, + GEMINI_2_5_FLASH_LIVE.name, + GEMINI_2_FLASH_LIVE.name, +] as const + + const GEMINI_VIDEO_MODELS = [ + VEO_3_1_PREVIEW.name, + VEO_3_1_FAST_PREVIEW.name, + VEO_3.name, + VEO_3_FAST.name, + VEO_2.name, +] as const */ + +// export type 
GeminiChatModels = (typeof GEMINI_MODELS)[number] + +// Manual type map for per-model provider options +export type GeminiChatModelProviderOptionsByName = { + // Models with thinking and structured output support + [GEMINI_3_PRO.name]: GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions & + GeminiStructuredOutputOptions & + GeminiThinkingOptions + [GEMINI_2_5_PRO.name]: GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions & + GeminiStructuredOutputOptions & + GeminiThinkingOptions + [GEMINI_2_5_FLASH.name]: GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions & + GeminiStructuredOutputOptions & + GeminiThinkingOptions + [GEMINI_2_5_FLASH_PREVIEW.name]: GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions & + GeminiStructuredOutputOptions & + GeminiThinkingOptions + [GEMINI_2_5_FLASH_LITE.name]: GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions & + GeminiStructuredOutputOptions & + GeminiThinkingOptions + [GEMINI_2_5_FLASH_LITE_PREVIEW.name]: GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions & + GeminiStructuredOutputOptions & + GeminiThinkingOptions + // Models with structured output but no thinking support + [GEMINI_2_FLASH.name]: GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions & + GeminiStructuredOutputOptions + [GEMINI_2_FLASH_LITE.name]: GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions & + GeminiStructuredOutputOptions +} diff --git a/packages/typescript/ai-gemini/src/text/text-provider-options.ts b/packages/typescript/ai-gemini/src/text/text-provider-options.ts index 
6b58800a8..fe9373b8a 100644 --- a/packages/typescript/ai-gemini/src/text/text-provider-options.ts +++ b/packages/typescript/ai-gemini/src/text/text-provider-options.ts @@ -1,413 +1,246 @@ -import { GeminiChatModels } from "../model-meta"; -import { Schema } from "../tools/function-declaration-tool"; -import { GoogleGeminiTool } from "../tools"; -import { ContentListUnion, MediaResolution, SafetySetting, ThinkingLevel, ToolConfig } from "@google/genai"; - -export interface GeminiToolConfigOptions { - /** - * Tool configuration for any Tool specified in the request. - */ - toolConfig?: ToolConfig -} - -export interface GeminiSafetyOptions { - /** - * list of unique SafetySetting instances for blocking unsafe content. - -This will be enforced on the GenerateContentRequest.contents and GenerateContentResponse.candidates. There should not be more than one setting for each SafetyCategory type. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in the safetySettings. If there is no SafetySetting for a given SafetyCategory provided in the list, the API will use the default safety setting for that category. Harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT, HARM_CATEGORY_CIVIC_INTEGRITY are supported - */ - safetySettings?: SafetySetting[] -} - -export interface GeminiGenerationConfigOptions { - /** - * Configuration options for model generation and outputs. - */ - generationConfig?: { - /** - * The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop_sequence. The stop sequence will not be included as part of the response. - */ - stopSequences?: string[]; - /** - * The requested modalities of the response. 
Represents the set of modalities that the model can return, and should be expected in the response. This is an exact match to the modalities of the response. - -A model may have multiple combinations of supported modalities. If the requested modalities do not match any of the supported combinations, an error will be returned. - */ - responseModalities?: ("MODALITY_UNSPECIFIED" | "TEXT" | "IMAGE" | "AUDIO")[] - /** - * Number of generated responses to return. If unset, this will default to 1. Please note that this doesn't work for previous generation models (Gemini 1.0 family) - */ - candidateCount?: number; - /** - * The maximum number of tokens to consider when sampling. - -Gemini models use Top-p (nucleus) sampling or a combination of Top-k and nucleus sampling. Top-k sampling considers the set of topK most probable tokens. Models running with nucleus sampling don't allow topK setting. - -Note: The default value varies by Model and is specified by theModel.top_p attribute returned from the getModel function. An empty topK attribute indicates that the model doesn't apply top-k sampling and doesn't allow setting topK on requests. - */ - topK?: number; - /** - * Seed used in decoding. If not set, the request uses a randomly generated seed. - */ - seed?: number; - /** - * Presence penalty applied to the next token's logprobs if the token has already been seen in the response. - -This penalty is binary on/off and not dependant on the number of times the token is used (after the first). Use frequencyPenalty for a penalty that increases with each use. - -A positive penalty will discourage the use of tokens that have already been used in the response, increasing the vocabulary. - -A negative penalty will encourage the use of tokens that have already been used in the response, decreasing the vocabulary. 
- */ - presencePenalty?: number; - /** - * Frequency penalty applied to the next token's logprobs, multiplied by the number of times each token has been seen in the respponse so far. - -A positive penalty will discourage the use of tokens that have already been used, proportional to the number of times the token has been used: The more a token is used, the more difficult it is for the model to use that token again increasing the vocabulary of responses. - -Caution: A negative penalty will encourage the model to reuse tokens proportional to the number of times the token has been used. Small negative values will reduce the vocabulary of a response. Larger negative values will cause the model to start repeating a common token until it hits the maxOutputTokens limit. - */ - frequencyPenalty?: number; - /** - * If true, export the logprobs results in response. - */ - responseLogprobs?: boolean; - - /** - * Only valid if responseLogprobs=True. This sets the number of top logprobs to return at each decoding step in the Candidate.logprobs_result. The number must be in the range of [0, 20]. - */ - logprobs?: number; - - /** - * Enables enhanced civic answers. It may not be available for all models. - */ - enableEnhancedCivicAnswers?: boolean; - - /** - * The speech generation config. - */ - speechConfig?: { - voiceConfig: { - prebuiltVoiceConfig: { - voiceName: string - } - } - - multiSpeakerVoiceConfig?: { - speakerVoiceConfigs?: { - speaker: string; - voiceConfig: { - prebuiltVoiceConfig: { - voiceName: string - } - } - }[] - } - /** - * Language code (in BCP 47 format, e.g. "en-US") for speech synthesis. - -Valid values are: de-DE, en-AU, en-GB, en-IN, en-US, es-US, fr-FR, hi-IN, pt-BR, ar-XA, es-ES, fr-CA, id-ID, it-IT, ja-JP, tr-TR, vi-VN, bn-IN, gu-IN, kn-IN, ml-IN, mr-IN, ta-IN, te-IN, nl-NL, ko-KR, cmn-CN, pl-PL, ru-RU, and th-TH. 
- */ - languageCode?: "de-DE" | "en-AU" | "en-GB" | "en-IN" | "en-US" | "es-US" | "fr-FR" | "hi-IN" | "pt-BR" | "ar-XA" | "es-ES" | "fr-CA" | "id-ID" | "it-IT" | "ja-JP" | "tr-TR" | "vi-VN" | "bn-IN" | "gu-IN" | "kn-IN" | "ml-IN" | "mr-IN" | "ta-IN" | "te-IN" | "nl-NL" | "ko-KR" | "cmn-CN" | "pl-PL" | "ru-RU" | "th-TH"; - } - /** - * Config for image generation. An error will be returned if this field is set for models that don't support these config options. - */ - imageConfig?: { - aspectRatio?: "1:1" | "2:3" | "3:2" | "3:4" | "4:3" | "9:16" | "16:9" | "21:9" - } - /** - * If specified, the media resolution specified will be used. - */ - mediaResolution?: MediaResolution - } & GeminiThinkingOptions & GeminiStructuredOutputOptions -} - -export interface GeminiCachedContentOptions { - /** - * The name of the content cached to use as context to serve the prediction. Format: cachedContents/{cachedContent} - */ - cachedContent?: `cachedContents/${string}`; -} - -export interface GeminiStructuredOutputOptions { - /** - * MIME type of the generated candidate text. Supported MIME types are: text/plain: (default) Text output. application/json: JSON response in the response candidates. text/x.enum: ENUM as a string response in the response candidates. - */ - responseMimeType?: string; - /** - * Output schema of the generated candidate text. Schemas must be a subset of the OpenAPI schema and can be objects, primitives or arrays. - -If set, a compatible responseMimeType must also be set. Compatible MIME types: application/json: Schema for JSON response. - */ - responseSchema?: Schema - /** - * Output schema of the generated response. This is an alternative to responseSchema that accepts JSON Schema. - -If set, responseSchema must be omitted, but responseMimeType is required. - -While the full JSON Schema may be sent, not all features are supported. 
Specifically, only the following properties are supported: - -$id -$defs -$ref -$anchor -type -format -title -description -enum (for strings and numbers) -items -prefixItems -minItems -maxItems -minimum -maximum -anyOf -oneOf (interpreted the same as anyOf) -properties -additionalProperties -required -The non-standard propertyOrdering property may also be set. - -Cyclic references are unrolled to a limited degree and, as such, may only be used within non-required properties. (Nullable properties are not sufficient.) If $ref is set on a sub-schema, no other properties, except for than those starting as a $, may be set. - */ - responseJsonSchema?: Schema -} - -export interface GeminiThinkingOptions { - /** - * Config for thinking features. An error will be returned if this field is set for models that don't support thinking. - */ - thinkingConfig?: { - /** - * Indicates whether to include thoughts in the response. If true, thoughts are returned only when available. - */ - includeThoughts: boolean; - - /** - * The number of thoughts tokens that the model should generate. - */ - thinkingBudget: number; - /** - * The level of thoughts tokens that the model should generate. - */ - thinkingLevel?: ThinkingLevel - } -} - - - -export type ExternalTextProviderOptions = GeminiToolConfigOptions & - GeminiSafetyOptions & - GeminiGenerationConfigOptions & - GeminiCachedContentOptions; -export interface InternalTextProviderOptions extends ExternalTextProviderOptions { - // path parameter - model: GeminiChatModels; - /** - * Developer set system instruction(s). - */ - systemInstruction?: string; - /** - * The content of the current conversation with the model. - -For single-turn queries, this is a single instance. For multi-turn queries like chat, this is a repeated field that contains the conversation history and the latest request. - */ - contents: string | ContentListUnion; - /** - * A list of Tools the Model may use to generate the next response. 
- * A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of knowledge and scope of the Model. Supported Tools are Function and codeExecution. - */ - tools?: GoogleGeminiTool[]; - - - /** - * Configuration options for model generation and outputs. - */ - generationConfig?: { - /** - * The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop_sequence. The stop sequence will not be included as part of the response. - */ - stopSequences?: string[]; - /** - * MIME type of the generated candidate text. Supported MIME types are: text/plain: (default) Text output. application/json: JSON response in the response candidates. text/x.enum: ENUM as a string response in the response candidates. - */ - responseMimeType?: string; - /** - * Output schema of the generated candidate text. Schemas must be a subset of the OpenAPI schema and can be objects, primitives or arrays. - -If set, a compatible responseMimeType must also be set. Compatible MIME types: application/json: Schema for JSON response. - */ - responseSchema?: Schema - /** - * Output schema of the generated response. This is an alternative to responseSchema that accepts JSON Schema. - -If set, responseSchema must be omitted, but responseMimeType is required. - -While the full JSON Schema may be sent, not all features are supported. Specifically, only the following properties are supported: - -$id -$defs -$ref -$anchor -type -format -title -description -enum (for strings and numbers) -items -prefixItems -minItems -maxItems -minimum -maximum -anyOf -oneOf (interpreted the same as anyOf) -properties -additionalProperties -required -The non-standard propertyOrdering property may also be set. - -Cyclic references are unrolled to a limited degree and, as such, may only be used within non-required properties. (Nullable properties are not sufficient.) 
If $ref is set on a sub-schema, no other properties, except for than those starting as a $, may be set. - */ - responseJsonSchema?: Schema - /** - * The requested modalities of the response. Represents the set of modalities that the model can return, and should be expected in the response. This is an exact match to the modalities of the response. - -A model may have multiple combinations of supported modalities. If the requested modalities do not match any of the supported combinations, an error will be returned. - */ - responseModalities?: ("MODALITY_UNSPECIFIED" | "TEXT" | "IMAGE" | "AUDIO")[] - /** - * Number of generated responses to return. If unset, this will default to 1. Please note that this doesn't work for previous generation models (Gemini 1.0 family) - */ - candidateCount?: number; - /** - * The maximum number of tokens to include in a response candidate. - -Note: The default value varies by model, see the Model.output_token_limit attribute of the Model returned from the getModel function. - */ - maxOutputTokens?: number; - /** - * Controls the randomness of the output. - -Note: The default value varies by model, see the Model.temperature attribute of the Model returned from the getModel function. - -Values can range from [0.0, 2.0]. - */ - temperature?: number; - /** - * The maximum cumulative probability of tokens to consider when sampling. - -The model uses combined Top-k and Top-p (nucleus) sampling. - -Tokens are sorted based on their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while Nucleus sampling limits the number of tokens based on the cumulative probability. - -Note: The default value varies by Model and is specified by theModel.top_p attribute returned from the getModel function. An empty topK attribute indicates that the model doesn't apply top-k sampling and doesn't allow setting topK on requests. 
- */ - topP?: number; - /** - * The maximum number of tokens to consider when sampling. - -Gemini models use Top-p (nucleus) sampling or a combination of Top-k and nucleus sampling. Top-k sampling considers the set of topK most probable tokens. Models running with nucleus sampling don't allow topK setting. - -Note: The default value varies by Model and is specified by theModel.top_p attribute returned from the getModel function. An empty topK attribute indicates that the model doesn't apply top-k sampling and doesn't allow setting topK on requests. - */ - topK?: number; - /** - * Seed used in decoding. If not set, the request uses a randomly generated seed. - */ - seed?: number; - /** - * Presence penalty applied to the next token's logprobs if the token has already been seen in the response. - -This penalty is binary on/off and not dependant on the number of times the token is used (after the first). Use frequencyPenalty for a penalty that increases with each use. - -A positive penalty will discourage the use of tokens that have already been used in the response, increasing the vocabulary. - -A negative penalty will encourage the use of tokens that have already been used in the response, decreasing the vocabulary. - */ - presencePenalty?: number; - /** - * Frequency penalty applied to the next token's logprobs, multiplied by the number of times each token has been seen in the respponse so far. - -A positive penalty will discourage the use of tokens that have already been used, proportional to the number of times the token has been used: The more a token is used, the more difficult it is for the model to use that token again increasing the vocabulary of responses. - -Caution: A negative penalty will encourage the model to reuse tokens proportional to the number of times the token has been used. Small negative values will reduce the vocabulary of a response. 
Larger negative values will cause the model to start repeating a common token until it hits the maxOutputTokens limit. - */ - frequencyPenalty?: number; - /** - * If true, export the logprobs results in response. - */ - responseLogprobs?: boolean; - - /** - * Only valid if responseLogprobs=True. This sets the number of top logprobs to return at each decoding step in the Candidate.logprobs_result. The number must be in the range of [0, 20]. - */ - logprobs?: number; - - /** - * Enables enhanced civic answers. It may not be available for all models. - */ - enableEnhancedCivicAnswers?: boolean; - - /** - * The speech generation config. - */ - speechConfig?: { - voiceConfig: { - prebuiltVoiceConfig: { - voiceName: string - } - } - - multiSpeakerVoiceConfig?: { - speakerVoiceConfigs?: { - speaker: string; - voiceConfig: { - prebuiltVoiceConfig: { - voiceName: string - } - } - }[] - } - /** - * Language code (in BCP 47 format, e.g. "en-US") for speech synthesis. - -Valid values are: de-DE, en-AU, en-GB, en-IN, en-US, es-US, fr-FR, hi-IN, pt-BR, ar-XA, es-ES, fr-CA, id-ID, it-IT, ja-JP, tr-TR, vi-VN, bn-IN, gu-IN, kn-IN, ml-IN, mr-IN, ta-IN, te-IN, nl-NL, ko-KR, cmn-CN, pl-PL, ru-RU, and th-TH. - */ - languageCode?: "de-DE" | "en-AU" | "en-GB" | "en-IN" | "en-US" | "es-US" | "fr-FR" | "hi-IN" | "pt-BR" | "ar-XA" | "es-ES" | "fr-CA" | "id-ID" | "it-IT" | "ja-JP" | "tr-TR" | "vi-VN" | "bn-IN" | "gu-IN" | "kn-IN" | "ml-IN" | "mr-IN" | "ta-IN" | "te-IN" | "nl-NL" | "ko-KR" | "cmn-CN" | "pl-PL" | "ru-RU" | "th-TH"; - } - /** - * Config for thinking features. An error will be returned if this field is set for models that don't support thinking. - */ - thinkingConfig?: { - /** - * Indicates whether to include thoughts in the response. If true, thoughts are returned only when available. - */ - includeThoughts: boolean; - - /** - * The number of thoughts tokens that the model should generate. 
- */ - thinkingBudget: number; - /** - * The level of thoughts tokens that the model should generate. - */ - thinkingLevel?: ThinkingLevel - } - /** - * Config for image generation. An error will be returned if this field is set for models that don't support these config options. - */ - imageConfig?: { - aspectRatio?: "1:1" | "2:3" | "3:2" | "3:4" | "4:3" | "9:16" | "16:9" | "21:9" - } - /** - * If specified, the media resolution specified will be used. - */ - mediaResolution?: MediaResolution - } - -} - - +import type { + MediaResolution, + SafetySetting, + Schema, + ThinkingLevel, + ToolConfig, +} from '@google/genai' + +export interface GeminiToolConfigOptions { + /** + * Tool configuration for any Tool specified in the request. + */ + toolConfig?: ToolConfig +} + +export interface GeminiSafetyOptions { + /** + * List of unique SafetySetting instances for blocking unsafe content. + +This will be enforced on the GenerateContentRequest.contents and GenerateContentResponse.candidates. There should not be more than one setting for each SafetyCategory type. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in the safetySettings. If there is no SafetySetting for a given SafetyCategory provided in the list, the API will use the default safety setting for that category. Harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT, HARM_CATEGORY_CIVIC_INTEGRITY are supported. + */ + safetySettings?: Array<SafetySetting> +} + +export interface GeminiGenerationConfigOptions { + /** + * Configuration options for model generation and outputs. + */ + generationConfig?: { + /** + * The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop_sequence. The stop sequence will not be included as part of the response. 
+ */ + stopSequences?: Array<string> + /** + * The requested modalities of the response. Represents the set of modalities that the model can return, and should be expected in the response. This is an exact match to the modalities of the response. + +A model may have multiple combinations of supported modalities. If the requested modalities do not match any of the supported combinations, an error will be returned. + */ + responseModalities?: Array< + 'MODALITY_UNSPECIFIED' | 'TEXT' | 'IMAGE' | 'AUDIO' + > + /** + * Number of generated responses to return. If unset, this will default to 1. Please note that this doesn't work for previous generation models (Gemini 1.0 family). + */ + candidateCount?: number + /** + * The maximum number of tokens to consider when sampling. + +Gemini models use Top-p (nucleus) sampling or a combination of Top-k and nucleus sampling. Top-k sampling considers the set of topK most probable tokens. Models running with nucleus sampling don't allow topK setting. + +Note: The default value varies by Model and is specified by the Model.top_p attribute returned from the getModel function. An empty topK attribute indicates that the model doesn't apply top-k sampling and doesn't allow setting topK on requests. + */ + topK?: number + /** + * Seed used in decoding. If not set, the request uses a randomly generated seed. + */ + seed?: number + /** + * Presence penalty applied to the next token's logprobs if the token has already been seen in the response. + +This penalty is binary on/off and not dependent on the number of times the token is used (after the first). Use frequencyPenalty for a penalty that increases with each use. + +A positive penalty will discourage the use of tokens that have already been used in the response, increasing the vocabulary. + +A negative penalty will encourage the use of tokens that have already been used in the response, decreasing the vocabulary. 
+ */ + presencePenalty?: number + /** + * Frequency penalty applied to the next token's logprobs, multiplied by the number of times each token has been seen in the response so far. + +A positive penalty will discourage the use of tokens that have already been used, proportional to the number of times the token has been used: The more a token is used, the more difficult it is for the model to use that token again, increasing the vocabulary of responses. + +Caution: A negative penalty will encourage the model to reuse tokens proportional to the number of times the token has been used. Small negative values will reduce the vocabulary of a response. Larger negative values will cause the model to start repeating a common token until it hits the maxOutputTokens limit. + */ + frequencyPenalty?: number + /** + * If true, export the logprobs results in response. + */ + responseLogprobs?: boolean + + /** + * Only valid if responseLogprobs=True. This sets the number of top logprobs to return at each decoding step in the Candidate.logprobs_result. The number must be in the range of [0, 20]. + */ + logprobs?: number + + /** + * Enables enhanced civic answers. It may not be available for all models. + */ + enableEnhancedCivicAnswers?: boolean + + /** + * The speech generation config. + */ + speechConfig?: { + voiceConfig: { + prebuiltVoiceConfig: { + voiceName: string + } + } + + multiSpeakerVoiceConfig?: { + speakerVoiceConfigs?: Array<{ + speaker: string + voiceConfig: { + prebuiltVoiceConfig: { + voiceName: string + } + } + }> + } + /** + * Language code (in BCP 47 format, e.g. "en-US") for speech synthesis. + +Valid values are: de-DE, en-AU, en-GB, en-IN, en-US, es-US, fr-FR, hi-IN, pt-BR, ar-XA, es-ES, fr-CA, id-ID, it-IT, ja-JP, tr-TR, vi-VN, bn-IN, gu-IN, kn-IN, ml-IN, mr-IN, ta-IN, te-IN, nl-NL, ko-KR, cmn-CN, pl-PL, ru-RU, and th-TH. 
+ */ + languageCode?: + | 'de-DE' + | 'en-AU' + | 'en-GB' + | 'en-IN' + | 'en-US' + | 'es-US' + | 'fr-FR' + | 'hi-IN' + | 'pt-BR' + | 'ar-XA' + | 'es-ES' + | 'fr-CA' + | 'id-ID' + | 'it-IT' + | 'ja-JP' + | 'tr-TR' + | 'vi-VN' + | 'bn-IN' + | 'gu-IN' + | 'kn-IN' + | 'ml-IN' + | 'mr-IN' + | 'ta-IN' + | 'te-IN' + | 'nl-NL' + | 'ko-KR' + | 'cmn-CN' + | 'pl-PL' + | 'ru-RU' + | 'th-TH' + } + /** + * Config for image generation. An error will be returned if this field is set for models that don't support these config options. + */ + imageConfig?: { + aspectRatio?: + | '1:1' + | '2:3' + | '3:2' + | '3:4' + | '4:3' + | '9:16' + | '16:9' + | '21:9' + } + /** + * If specified, the media resolution specified will be used. + */ + mediaResolution?: MediaResolution + } & GeminiThinkingOptions & + GeminiStructuredOutputOptions +} + +export interface GeminiCachedContentOptions { + /** + * The name of the content cached to use as context to serve the prediction. Format: cachedContents/{cachedContent} + */ + cachedContent?: `cachedContents/${string}` +} + +export interface GeminiStructuredOutputOptions { + /** + * MIME type of the generated candidate text. Supported MIME types are: text/plain: (default) Text output. application/json: JSON response in the response candidates. text/x.enum: ENUM as a string response in the response candidates. + */ + responseMimeType?: string + /** + * Output schema of the generated candidate text. Schemas must be a subset of the OpenAPI schema and can be objects, primitives or arrays. + +If set, a compatible responseMimeType must also be set. Compatible MIME types: application/json: Schema for JSON response. + */ + responseSchema?: Schema + /** + * Output schema of the generated response. This is an alternative to responseSchema that accepts JSON Schema. + +If set, responseSchema must be omitted, but responseMimeType is required. + +While the full JSON Schema may be sent, not all features are supported. 
Specifically, only the following properties are supported: + +$id +$defs +$ref +$anchor +type +format +title +description +enum (for strings and numbers) +items +prefixItems +minItems +maxItems +minimum +maximum +anyOf +oneOf (interpreted the same as anyOf) +properties +additionalProperties +required +The non-standard propertyOrdering property may also be set. + +Cyclic references are unrolled to a limited degree and, as such, may only be used within non-required properties. (Nullable properties are not sufficient.) If $ref is set on a sub-schema, no other properties, except for those starting with a $, may be set. + */ + responseJsonSchema?: Schema +} + +export interface GeminiThinkingOptions { + /** + * Config for thinking features. An error will be returned if this field is set for models that don't support thinking. + */ + thinkingConfig?: { + /** + * Indicates whether to include thoughts in the response. If true, thoughts are returned only when available. + */ + includeThoughts: boolean + + /** + * The number of thoughts tokens that the model should generate. + */ + thinkingBudget: number + /** + * The level of thoughts tokens that the model should generate. 
+ */ + thinkingLevel?: ThinkingLevel + } +} + +export type ExternalTextProviderOptions = GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions diff --git a/packages/typescript/ai-gemini/src/tools/code-execution-tool.ts b/packages/typescript/ai-gemini/src/tools/code-execution-tool.ts index c249dcfac..f7eadd93d 100644 --- a/packages/typescript/ai-gemini/src/tools/code-execution-tool.ts +++ b/packages/typescript/ai-gemini/src/tools/code-execution-tool.ts @@ -1,23 +1,21 @@ -import type { Tool } from "@tanstack/ai"; - -export interface CodeExecutionTool { - -} - -export function convertCodeExecutionToolToAdapterFormat(_tool: Tool) { - return { - codeExecution: {} - }; -} - -export function codeExecutionTool(): Tool { - return { - type: "function", - function: { - name: "code_execution", - description: "", - parameters: {} - }, - metadata: {} - } -} \ No newline at end of file +import type { Tool } from '@tanstack/ai' + +export interface CodeExecutionTool {} + +export function convertCodeExecutionToolToAdapterFormat(_tool: Tool) { + return { + codeExecution: {}, + } +} + +export function codeExecutionTool(): Tool { + return { + type: 'function', + function: { + name: 'code_execution', + description: '', + parameters: {}, + }, + metadata: {}, + } +} diff --git a/packages/typescript/ai-gemini/src/tools/computer-use-tool.ts b/packages/typescript/ai-gemini/src/tools/computer-use-tool.ts index 542e128e8..9a1b65f6d 100644 --- a/packages/typescript/ai-gemini/src/tools/computer-use-tool.ts +++ b/packages/typescript/ai-gemini/src/tools/computer-use-tool.ts @@ -1,29 +1,29 @@ -import { ComputerUse } from "@google/genai"; -import type { Tool } from "@tanstack/ai"; - -export type ComputerUseTool = ComputerUse - -export function convertComputerUseToolToAdapterFormat(tool: Tool) { - const metadata = tool.metadata as ComputerUseTool; - return { - computerUse: { - environment: metadata.environment, - excludedPredefinedFunctions: 
metadata.excludedPredefinedFunctions - } - }; -} - -export function computerUseTool(config: ComputerUseTool): Tool { - return { - type: "function", - function: { - name: "computer_use", - description: "", - parameters: {} - }, - metadata: { - environment: config.environment, - excludedPredefinedFunctions: config.excludedPredefinedFunctions - } - } -} \ No newline at end of file +import type { ComputerUse } from '@google/genai' +import type { Tool } from '@tanstack/ai' + +export type ComputerUseTool = ComputerUse + +export function convertComputerUseToolToAdapterFormat(tool: Tool) { + const metadata = tool.metadata as ComputerUseTool + return { + computerUse: { + environment: metadata.environment, + excludedPredefinedFunctions: metadata.excludedPredefinedFunctions, + }, + } +} + +export function computerUseTool(config: ComputerUseTool): Tool { + return { + type: 'function', + function: { + name: 'computer_use', + description: '', + parameters: {}, + }, + metadata: { + environment: config.environment, + excludedPredefinedFunctions: config.excludedPredefinedFunctions, + }, + } +} diff --git a/packages/typescript/ai-gemini/src/tools/file-search-tool.ts b/packages/typescript/ai-gemini/src/tools/file-search-tool.ts index 5f249aba6..2c3816a10 100644 --- a/packages/typescript/ai-gemini/src/tools/file-search-tool.ts +++ b/packages/typescript/ai-gemini/src/tools/file-search-tool.ts @@ -1,43 +1,23 @@ -import type { Tool } from "@tanstack/ai"; - -export interface FileSearchTool { - /** - * The names of the fileSearchStores to retrieve from. Example: fileSearchStores/my-file-search-store-123 - */ - fileSearchStoreNames: string[]; - /** - * Metadata filter to apply to the semantic retrieval documents and chunks. - */ - metadataFilter?: string; - /** - * The number of semantic retrieval chunks to retrieve. 
- */ - topK?: number; -} - -export function convertFileSearchToolToAdapterFormat(tool: Tool) { - const metadata = tool.metadata as { fileSearchStoreNames: string[]; metadataFilter?: string; topK?: number }; - return { - fileSearch: { - fileSearchStoreNames: metadata.fileSearchStoreNames, - metadataFilter: metadata.metadataFilter, - topK: metadata.topK - } - }; -} - -export function fileSearchTool(config: { fileSearchStoreNames: string[]; metadataFilter?: string; topK?: number }): Tool { - return { - type: "function", - function: { - name: "file_search", - description: "", - parameters: {} - }, - metadata: { - fileSearchStoreNames: config.fileSearchStoreNames, - metadataFilter: config.metadataFilter, - topK: config.topK - } - } -} \ No newline at end of file +import type { Tool } from '@tanstack/ai' +import type { FileSearch } from '@google/genai' + +export type FileSearchTool = FileSearch + +export function convertFileSearchToolToAdapterFormat(tool: Tool) { + const metadata = tool.metadata as FileSearchTool + return { + fileSearch: metadata, + } +} + +export function fileSearchTool(config: FileSearchTool): Tool { + return { + type: 'function', + function: { + name: 'file_search', + description: '', + parameters: {}, + }, + metadata: config, + } +} diff --git a/packages/typescript/ai-gemini/src/tools/function-declaration-tool.ts b/packages/typescript/ai-gemini/src/tools/function-declaration-tool.ts index 88d3170ef..5840c94da 100644 --- a/packages/typescript/ai-gemini/src/tools/function-declaration-tool.ts +++ b/packages/typescript/ai-gemini/src/tools/function-declaration-tool.ts @@ -1,150 +1,34 @@ -export interface FunctionDeclarationTool { - /** - * The name of the function. Must be a-z, A-Z, 0-9, or contain underscores, colons, dots, and dashes, with a maximum length of 64. - */ - name: string; - /** - * A brief description of the function. - */ - description: string; - /** - * Defines the function behavior. - * - UNSPECIFIED: This value is unused. 
- * - BLOCKING: If set, the system will wait to receive the function response before continuing the conversation. - * - NON_BLOCKING: If set, the system will not wait to receive the function response. Instead, it will attempt to handle function responses as they become available while maintaining the conversation between the user and the model. - - */ - behavior?: "UNSPECIFIED" | "BLOCKING" | "NON_BLOCKING"; - - parameters?: Schema; - /** - * JSON Schema representation of the parameters. Mutually exclusive with 'parameters' field. - */ - parametersJsonSchema?: Schema; - /** - * Describes the output from this function in JSON Schema format. Reflects the Open API 3.03 Response Object. The Schema defines the type used for the response value of the function. - */ - response?: Schema; - /** - * Describes the output from this function in JSON Schema format. The value specified by the schema is the response value of the function. - -This field is mutually exclusive with response. - */ - responseJsonSchema?: Schema; -} - - -type Value = null | string | number | boolean | object | any[]; - -export interface Schema { - type: "TYPE_UNSPECIFIED" | "OBJECT" | "ARRAY" | "STRING" | "NUMBER" | "INTEGER" | "BOOLEAN" | "NULL"; - /** - * The format of the data. Any value is allowed, but most do not trigger any special functionality. - */ - format?: string; - /** - * Title of the schema - */ - title?: string; - /** - * A brief description of the parameter. This could contain examples of use. Parameter description may be formatted as Markdown. - */ - description?: string; - /** - * Indicates if the value may be null. - */ - nullable?: boolean; - /** - * Possible values of the element of Type.STRING with enum format. For example we can define an Enum Direction as : {type:STRING, format:enum, enum:["EAST", NORTH", "SOUTH", "WEST"]} - */ - enum?: any[]; - /** - * Maximum number of the elements for Type.ARRAY. 
- */ - maxItems?: number; - /** - * Minimum number of the elements for Type.ARRAY. - */ - minItems?: number; - /** - * An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. - */ - properties?: Record; - /** - * Required properties of Type.OBJECT. - */ - required?: string[]; - /** - * Minimum number of the properties for Type.OBJECT. - */ - minProperties?: string; - /** - * Maximum number of the properties for Type.OBJECT. - */ - maxProperties?: string; - /** - * SCHEMA FIELDS FOR TYPE STRING Minimum length of the Type.STRING - */ - minLength?: number; - /** - * Maximum length of the Type.STRING - */ - maxLength?: number; - /** - * Pattern of the Type.STRING to restrict a string to a regular expression. - */ - pattern?: string; - /** - * Example of the object. Will only populated when the object is the root. - */ - example?: Value; - /** - * The value should be validated against any (one or more) of the subschemas in the list. - */ - anyOf?: [Schema, ...Schema[]]; - - /** - * The order of the properties. Not a standard field in open api spec. Used to determine the order of the properties in the response. - */ - propertyOrdering?: string[]; - /** - * Default value of the field. Per JSON Schema, this field is intended for documentation generators and doesn't affect validation. Thus it's included here and ignored so that developers who send schemas with a default field don't get unknown-field errors. - */ - default?: Value; - /** - * Schema of the elements of Type.ARRAY. 
- */ - items?: Schema; - /** - * SCHEMA FIELDS FOR TYPE INTEGER and NUMBER Minimum value of the Type.INTEGER and Type.NUMBER - */ - minimum?: number; - /** - * Maximum value of the Type.INTEGER and Type.NUMBER - */ - maximum?: number; -} - -export const validateFunctionDeclarationTool = (tool: FunctionDeclarationTool) => { - const nameRegex = /^[a-zA-Z0-9_:.-]{1,64}$/; - const valid = nameRegex.test(tool.name); - if (!valid) { - throw new Error(`Invalid function name: ${tool.name}. Must be 1-64 characters long and contain only a-z, A-Z, 0-9, underscores, colons, dots, and dashes.`); - } - - if (tool.parameters && tool.parametersJsonSchema) { - throw new Error(`FunctionDeclarationTool cannot have both 'parameters' and 'parametersJsonSchema' defined. Please use only one.`); - } - - if (tool.response && tool.responseJsonSchema) { - throw new Error(`FunctionDeclarationTool cannot have both 'response' and 'responseJsonSchema' defined. Please use only one.`); - } - -} - -export const functionDeclarationTools = (tools: FunctionDeclarationTool[]) => { - tools.forEach(validateFunctionDeclarationTool); - return { - functionDeclarations: tools - } -} \ No newline at end of file +import type { FunctionDeclaration } from '@google/genai' + +export type FunctionDeclarationTool = FunctionDeclaration + +const validateFunctionDeclarationTool = (tool: FunctionDeclarationTool) => { + const nameRegex = /^[a-zA-Z0-9_:.-]{1,64}$/ + const valid = nameRegex.test(tool.name!) + if (!valid) { + throw new Error( + `Invalid function name: ${tool.name}. Must be 1-64 characters long and contain only a-z, A-Z, 0-9, underscores, colons, dots, and dashes.`, + ) + } + + if (tool.parameters && tool.parametersJsonSchema) { + throw new Error( + `FunctionDeclarationTool cannot have both 'parameters' and 'parametersJsonSchema' defined. 
Please use only one.`, + ) + } + + if (tool.response && tool.responseJsonSchema) { + throw new Error( + `FunctionDeclarationTool cannot have both 'response' and 'responseJsonSchema' defined. Please use only one.`, + ) + } +} + +export function functionDeclarationTools( + tools: Array, +) { + tools.forEach(validateFunctionDeclarationTool) + return { + functionDeclarations: tools, + } +} diff --git a/packages/typescript/ai-gemini/src/tools/google-maps-tool.ts b/packages/typescript/ai-gemini/src/tools/google-maps-tool.ts index 64df9fb98..00305f6dd 100644 --- a/packages/typescript/ai-gemini/src/tools/google-maps-tool.ts +++ b/packages/typescript/ai-gemini/src/tools/google-maps-tool.ts @@ -1,30 +1,23 @@ -import type { Tool } from "@tanstack/ai"; - -export interface GoogleMapsTool { - /** - * Whether to return a widget context token in the GroundingMetadata of the response. Developers can use the widget context token to render a Google Maps widget with geospatial context related to the places that the model references in the response. - */ - enableWidget?: boolean; - -} - -export function convertGoogleMapsToolToAdapterFormat(tool: Tool) { - const metadata = tool.metadata as { enableWidget?: boolean }; - return { - googleMaps: metadata.enableWidget !== undefined ? 
{ enableWidget: metadata.enableWidget } : {} - }; -} - -export function googleMapsTool(config?: { enableWidget?: boolean }): Tool { - return { - type: "function", - function: { - name: "google_maps", - description: "", - parameters: {} - }, - metadata: { - enableWidget: config?.enableWidget - } - } -} \ No newline at end of file +import type { GoogleMaps } from '@google/genai' +import type { Tool } from '@tanstack/ai' + +export type GoogleMapsTool = GoogleMaps + +export function convertGoogleMapsToolToAdapterFormat(tool: Tool) { + const metadata = tool.metadata as GoogleMapsTool + return { + googleMaps: metadata, + } +} + +export function googleMapsTool(config?: GoogleMapsTool): Tool { + return { + type: 'function', + function: { + name: 'google_maps', + description: '', + parameters: {}, + }, + metadata: config, + } +} diff --git a/packages/typescript/ai-gemini/src/tools/google-search-retriveal-tool.ts b/packages/typescript/ai-gemini/src/tools/google-search-retriveal-tool.ts index 81de0c6a6..4f235d70c 100644 --- a/packages/typescript/ai-gemini/src/tools/google-search-retriveal-tool.ts +++ b/packages/typescript/ai-gemini/src/tools/google-search-retriveal-tool.ts @@ -1,26 +1,25 @@ -import { GoogleSearchRetrieval } from "@google/genai"; -import type { Tool } from "@tanstack/ai"; - -export type GoogleSearchRetrievalTool = GoogleSearchRetrieval - - -export function convertGoogleSearchRetrievalToolToAdapterFormat(tool: Tool) { - const metadata = tool.metadata as GoogleSearchRetrievalTool; - return { - googleSearchRetrieval: metadata.dynamicRetrievalConfig ? 
{ dynamicRetrievalConfig: metadata.dynamicRetrievalConfig } : {} - }; -} - -export function googleSearchRetrievalTool(config?: GoogleSearchRetrievalTool): Tool { - return { - type: "function", - function: { - name: "google_search_retrieval", - description: "", - parameters: {} - }, - metadata: { - dynamicRetrievalConfig: config?.dynamicRetrievalConfig - } - } -} \ No newline at end of file +import type { GoogleSearchRetrieval } from '@google/genai' +import type { Tool } from '@tanstack/ai' + +export type GoogleSearchRetrievalTool = GoogleSearchRetrieval + +export function convertGoogleSearchRetrievalToolToAdapterFormat(tool: Tool) { + const metadata = tool.metadata as GoogleSearchRetrievalTool + return { + googleSearchRetrieval: metadata, + } +} + +export function googleSearchRetrievalTool( + config?: GoogleSearchRetrievalTool, +): Tool { + return { + type: 'function', + function: { + name: 'google_search_retrieval', + description: '', + parameters: {}, + }, + metadata: config, + } +} diff --git a/packages/typescript/ai-gemini/src/tools/google-search-tool.ts b/packages/typescript/ai-gemini/src/tools/google-search-tool.ts index 83debbe08..bc41b22ec 100644 --- a/packages/typescript/ai-gemini/src/tools/google-search-tool.ts +++ b/packages/typescript/ai-gemini/src/tools/google-search-tool.ts @@ -1,29 +1,23 @@ -import type { Tool } from "@tanstack/ai"; - -export interface GoogleSearchTool { - timeRangeFilter?: { - startTime?: string; // ISO 8601 format - endTime?: string; // ISO 8601 format - } -} - -export function convertGoogleSearchToolToAdapterFormat(tool: Tool) { - const metadata = tool.metadata as { timeRangeFilter?: { startTime?: string; endTime?: string } }; - return { - googleSearch: metadata.timeRangeFilter ? 
{ timeRangeFilter: metadata.timeRangeFilter } : {} - }; -} - -export function googleSearchTool(config?: { timeRangeFilter?: { startTime?: string; endTime?: string } }): Tool { - return { - type: "function", - function: { - name: "google_search", - description: "", - parameters: {} - }, - metadata: { - timeRangeFilter: config?.timeRangeFilter - } - } -} \ No newline at end of file +import type { GoogleSearch } from '@google/genai' +import type { Tool } from '@tanstack/ai' + +export type GoogleSearchTool = GoogleSearch + +export function convertGoogleSearchToolToAdapterFormat(tool: Tool) { + const metadata = tool.metadata as GoogleSearchTool + return { + googleSearch: metadata, + } +} + +export function googleSearchTool(config?: GoogleSearchTool): Tool { + return { + type: 'function', + function: { + name: 'google_search', + description: '', + parameters: {}, + }, + metadata: config, + } +} diff --git a/packages/typescript/ai-gemini/src/tools/index.ts b/packages/typescript/ai-gemini/src/tools/index.ts index d5aa0b93e..442a9bd70 100644 --- a/packages/typescript/ai-gemini/src/tools/index.ts +++ b/packages/typescript/ai-gemini/src/tools/index.ts @@ -1,10 +1,18 @@ -import { CodeExecutionTool } from "./code-execution-tool"; -import { ComputerUseTool } from "./computer-use-tool"; -import { FileSearchTool } from "./file-search-tool"; -import { FunctionDeclarationTool } from "./function-declaration-tool"; -import { GoogleMapsTool } from "./google-maps-tool"; -import { GoogleSearchRetrievalTool } from "./google-search-retriveal-tool"; -import { GoogleSearchTool } from "./google-search-tool"; -import { UrlContextTool } from "./url-context-tool"; - -export type GoogleGeminiTool = CodeExecutionTool | ComputerUseTool | FileSearchTool | FunctionDeclarationTool | GoogleMapsTool | GoogleSearchRetrievalTool | GoogleSearchTool | UrlContextTool; \ No newline at end of file +import type { CodeExecutionTool } from './code-execution-tool' +import type { ComputerUseTool } from 
'./computer-use-tool' +import type { FileSearchTool } from './file-search-tool' +import type { FunctionDeclarationTool } from './function-declaration-tool' +import type { GoogleMapsTool } from './google-maps-tool' +import type { GoogleSearchRetrievalTool } from './google-search-retriveal-tool' +import type { GoogleSearchTool } from './google-search-tool' +import type { UrlContextTool } from './url-context-tool' + +export type GoogleGeminiTool = + | CodeExecutionTool + | ComputerUseTool + | FileSearchTool + | FunctionDeclarationTool + | GoogleMapsTool + | GoogleSearchRetrievalTool + | GoogleSearchTool + | UrlContextTool diff --git a/packages/typescript/ai-gemini/src/tools/tool-converter.ts b/packages/typescript/ai-gemini/src/tools/tool-converter.ts index 0622b1a69..6f34b0cdc 100644 --- a/packages/typescript/ai-gemini/src/tools/tool-converter.ts +++ b/packages/typescript/ai-gemini/src/tools/tool-converter.ts @@ -1,101 +1,99 @@ -import type { Tool } from "@tanstack/ai"; -import { convertCodeExecutionToolToAdapterFormat } from "./code-execution-tool"; -import { convertComputerUseToolToAdapterFormat } from "./computer-use-tool"; -import { convertFileSearchToolToAdapterFormat } from "./file-search-tool"; -import { convertGoogleMapsToolToAdapterFormat } from "./google-maps-tool"; -import { convertGoogleSearchRetrievalToolToAdapterFormat } from "./google-search-retriveal-tool"; -import { convertGoogleSearchToolToAdapterFormat } from "./google-search-tool"; -import { convertUrlContextToolToAdapterFormat } from "./url-context-tool"; -import { ToolUnion } from "@google/genai"; - -/** - * Converts standard Tool format to Gemini-specific tool format - * - * @param tools - Array of standard Tool objects - * @returns Array of Gemini-specific tool definitions - * - * @example - * ```typescript - * const tools: Tool[] = [{ - * type: "function", - * function: { - * name: "get_weather", - * description: "Get weather for a location", - * parameters: { - * type: "object", - * 
properties: { location: { type: "string" } }, - * required: ["location"] - * } - * } - * }]; - * - * const geminiTools = convertToolsToProviderFormat(tools); - * ``` - */ -export function convertToolsToProviderFormat( - tools: TTool[] | undefined, -): ToolUnion[] { - if (!tools || tools.length === 0) { - return []; - } - const result: ToolUnion[] = []; - const functionDeclarations: Array<{ - name: string; - description?: string; - parameters?: any; - }> = []; - - // Process each tool and group function declarations together - for (const tool of tools) { - const name = tool.function.name; - - switch (name) { - case "code_execution": - result.push(convertCodeExecutionToolToAdapterFormat(tool)); - break; - case "computer_use": - result.push(convertComputerUseToolToAdapterFormat(tool)); - break; - case "file_search": - result.push(convertFileSearchToolToAdapterFormat(tool)); - break; - case "google_maps": - result.push(convertGoogleMapsToolToAdapterFormat(tool)); - break; - case "google_search_retrieval": - result.push(convertGoogleSearchRetrievalToolToAdapterFormat(tool)); - break; - case "google_search": - result.push(convertGoogleSearchToolToAdapterFormat(tool)); - break; - case "url_context": - result.push(convertUrlContextToolToAdapterFormat(tool)); - break; - default: - // Collect function declarations to group together - // Description is required for Gemini function declarations - if (!tool.function.description) { - throw new Error(`Tool ${tool.function.name} requires a description for Gemini adapter`); - } - functionDeclarations.push({ - name: tool.function.name, - description: tool.function.description, - parameters: tool.function.parameters || { - type: "object", - properties: {}, - required: [] - } - }); - break; - } - } - - // If we have function declarations, add them as a single tool - if (functionDeclarations.length > 0) { - result.push({ - functionDeclarations: functionDeclarations - }); - } - - return result; -} +import { 
convertCodeExecutionToolToAdapterFormat } from './code-execution-tool' +import { convertComputerUseToolToAdapterFormat } from './computer-use-tool' +import { convertFileSearchToolToAdapterFormat } from './file-search-tool' +import { convertGoogleMapsToolToAdapterFormat } from './google-maps-tool' +import { convertGoogleSearchRetrievalToolToAdapterFormat } from './google-search-retriveal-tool' +import { convertGoogleSearchToolToAdapterFormat } from './google-search-tool' +import { convertUrlContextToolToAdapterFormat } from './url-context-tool' +import type { Tool } from '@tanstack/ai' +import type { ToolUnion } from '@google/genai' + +/** + * Converts standard Tool format to Gemini-specific tool format + * + * @param tools - Array of standard Tool objects + * @returns Array of Gemini-specific tool definitions + * + * @example + * ```typescript + * const tools: Tool[] = [{ + * type: "function", + * function: { + * name: "get_weather", + * description: "Get weather for a location", + * parameters: { + * type: "object", + * properties: { location: { type: "string" } }, + * required: ["location"] + * } + * } + * }]; + * + * const geminiTools = convertToolsToProviderFormat(tools); + * ``` + */ +export function convertToolsToProviderFormat( + tools: Array | undefined, +): Array { + if (!tools || tools.length === 0) { + return [] + } + const result: Array = [] + const functionDeclarations: Array<{ + name: string + description?: string + parameters?: any + }> = [] + + // Process each tool and group function declarations together + for (const tool of tools) { + const name = tool.function.name + + switch (name) { + case 'code_execution': + result.push(convertCodeExecutionToolToAdapterFormat(tool)) + break + case 'computer_use': + result.push(convertComputerUseToolToAdapterFormat(tool)) + break + case 'file_search': + result.push(convertFileSearchToolToAdapterFormat(tool)) + break + case 'google_maps': + result.push(convertGoogleMapsToolToAdapterFormat(tool)) + break + case 
'google_search_retrieval': + result.push(convertGoogleSearchRetrievalToolToAdapterFormat(tool)) + break + case 'google_search': + result.push(convertGoogleSearchToolToAdapterFormat(tool)) + break + case 'url_context': + result.push(convertUrlContextToolToAdapterFormat(tool)) + break + default: + // Collect function declarations to group together + // Description is required for Gemini function declarations + if (!tool.function.description) { + throw new Error( + `Tool ${tool.function.name} requires a description for Gemini adapter`, + ) + } + functionDeclarations.push({ + name: tool.function.name, + description: tool.function.description, + parameters: tool.function.parameters, + }) + break + } + } + + // If we have function declarations, add them as a single tool + if (functionDeclarations.length > 0) { + result.push({ + functionDeclarations: functionDeclarations, + }) + } + + return result +} diff --git a/packages/typescript/ai-gemini/src/tools/url-context-tool.ts b/packages/typescript/ai-gemini/src/tools/url-context-tool.ts index a71ee9fff..dc3d69b7f 100644 --- a/packages/typescript/ai-gemini/src/tools/url-context-tool.ts +++ b/packages/typescript/ai-gemini/src/tools/url-context-tool.ts @@ -1,23 +1,21 @@ -import type { Tool } from "@tanstack/ai"; - -export interface UrlContextTool { - -} - -export function convertUrlContextToolToAdapterFormat(_tool: Tool) { - return { - urlContext: {} - }; -} - -export function urlContextTool(): Tool { - return { - type: "function", - function: { - name: "url_context", - description: "", - parameters: {} - }, - metadata: {} - } -} \ No newline at end of file +import type { Tool } from '@tanstack/ai' + +export interface UrlContextTool {} + +export function convertUrlContextToolToAdapterFormat(_tool: Tool) { + return { + urlContext: {}, + } +} + +export function urlContextTool(): Tool { + return { + type: 'function', + function: { + name: 'url_context', + description: '', + parameters: {}, + }, + metadata: {}, + } +} diff --git 
a/packages/typescript/ai-gemini/tests/gemini-adapter.test.ts b/packages/typescript/ai-gemini/tests/gemini-adapter.test.ts index 75b988241..0516401cc 100644 --- a/packages/typescript/ai-gemini/tests/gemini-adapter.test.ts +++ b/packages/typescript/ai-gemini/tests/gemini-adapter.test.ts @@ -1,368 +1,406 @@ -import { describe, it, expect, beforeEach, vi } from "vitest"; -import { chat, summarize, embedding } from "@tanstack/ai"; -import type { Tool, StreamChunk } from "@tanstack/ai"; -import type { HarmBlockThreshold, HarmCategory, SafetySetting } from "@google/genai"; -import type { Schema } from "../src/tools/function-declaration-tool"; -import { GeminiAdapter, type GeminiProviderOptions } from "../src/gemini-adapter"; - -const mocks = vi.hoisted(() => { - return { - constructorSpy: vi.fn<(options: { apiKey: string }) => void>(), - generateContentSpy: vi.fn(), - generateContentStreamSpy: vi.fn(), - embedContentSpy: vi.fn(), - getGenerativeModelSpy: vi.fn(), - }; -}); - -vi.mock("@google/genai", () => { - const { constructorSpy, generateContentSpy, generateContentStreamSpy, embedContentSpy, getGenerativeModelSpy } = mocks; - - class MockGoogleGenAI { - public models = { - generateContent: generateContentSpy, - generateContentStream: generateContentStreamSpy, - embedContent: embedContentSpy, - }; - - public getGenerativeModel = getGenerativeModelSpy; - - constructor(options: { apiKey: string }) { - constructorSpy(options); - } - } - - return { GoogleGenAI: MockGoogleGenAI }; -}); - -const createAdapter = () => new GeminiAdapter({ apiKey: "test-key" }); - -const weatherTool: Tool = { - type: "function", - function: { - name: "lookup_weather", - description: "Return the weather for a location", - parameters: { - type: "object", - properties: { - location: { type: "string" }, - }, - required: ["location"], - }, - }, -}; - -const createStream = (chunks: Array>) => { - return (async function* () { - for (const chunk of chunks) { - yield chunk; - } - })(); -}; - 
-describe("GeminiAdapter through AI", () => { - beforeEach(() => { - vi.clearAllMocks(); - }); - - it("maps provider options for chat streaming", async () => { - const streamChunks = [ - { - candidates: [ - { - content: { - parts: [{ text: "Sunny skies ahead" }], - }, - finishReason: "STOP", - }, - ], - usageMetadata: { - promptTokenCount: 3, - thoughtsTokenCount: 1, - totalTokenCount: 4, - }, - }, - ]; - - mocks.generateContentStreamSpy.mockResolvedValue(createStream(streamChunks)); - - const adapter = createAdapter(); - - // Consume the stream to trigger the API call - for await (const _ of chat({ - adapter, - model: "gemini-2.5-pro", - messages: [{ role: "user", content: "How is the weather in Madrid?" }], - providerOptions: { - generationConfig: { topK: 9 }, - }, - options: { - temperature: 0.4, - topP: 0.8, - maxTokens: 256, - }, - tools: [weatherTool], - })) { /* consume stream */ } - - expect(mocks.generateContentStreamSpy).toHaveBeenCalledTimes(1); - const [payload] = mocks.generateContentStreamSpy.mock.calls[0]; - expect(payload.model).toBe("gemini-2.5-pro"); - expect(payload.config).toMatchObject({ - temperature: 0.4, - topP: 0.8, - maxOutputTokens: 256, - topK: 9, - }); - expect(payload.config?.tools?.[0]?.functionDeclarations?.[0]?.name).toBe( - "lookup_weather", - ); - expect(payload.contents).toEqual([ - { - role: "user", - parts: [{ text: "How is the weather in Madrid?" 
}], - }, - ]); - }); - - it("maps every common and provider option into the Gemini payload", async () => { - const streamChunks = [ - { - candidates: [ - { - content: { - parts: [{ text: "" }], - }, - finishReason: "STOP", - }, - ], - usageMetadata: undefined, - }, - ]; - - mocks.generateContentStreamSpy.mockResolvedValue(createStream(streamChunks)); - - const safetySettings: SafetySetting[] = [ - { - category: "HARM_CATEGORY_HATE_SPEECH" as HarmCategory, - threshold: "BLOCK_LOW_AND_ABOVE" as HarmBlockThreshold, - }, - ]; - - const responseSchema: Schema = { - type: "OBJECT", - properties: { - summary: { type: "STRING" }, - }, - }; - - const responseJsonSchema: Schema = { - type: "OBJECT", - properties: { - ok: { type: "BOOLEAN" }, - }, - }; - - const providerOptions: GeminiProviderOptions = { - safetySettings, - generationConfig: { - stopSequences: ["", "###"], - responseMimeType: "application/json", - responseSchema, - responseJsonSchema, - responseModalities: ["TEXT"], - candidateCount: 2, - topK: 6, - seed: 7, - presencePenalty: 0.2, - frequencyPenalty: 0.4, - responseLogprobs: true, - logprobs: 3, - enableEnhancedCivicAnswers: true, - speechConfig: { - voiceConfig: { - prebuiltVoiceConfig: { - voiceName: "Studio", - }, - }, - }, - thinkingConfig: { - includeThoughts: true, - thinkingBudget: 128, - }, - imageConfig: { - aspectRatio: "1:1", - }, - }, - cachedContent: "cachedContents/weather-context", - } as const; - - const adapter = createAdapter(); - - // Consume the stream to trigger the API call - for await (const _ of chat({ - adapter, - model: "gemini-2.5-pro", - messages: [{ role: "user", content: "Provide structured response" }], - options: { - temperature: 0.61, - topP: 0.37, - maxTokens: 512, - }, - systemPrompts: ["Stay concise", "Return JSON"], - providerOptions, - })) { /* consume stream */ } - - expect(mocks.generateContentStreamSpy).toHaveBeenCalledTimes(1); - const [payload] = mocks.generateContentStreamSpy.mock.calls[0]; - const config = 
payload.config; - - expect(config.temperature).toBe(0.61); - expect(config.topP).toBe(0.37); - expect(config.maxOutputTokens).toBe(512); - expect(config.cachedContent).toBe(providerOptions.cachedContent); - expect(config.safetySettings).toEqual(providerOptions.safetySettings); - expect(config.generationConfig).toEqual(providerOptions.generationConfig); - expect(config.stopSequences).toEqual(providerOptions.generationConfig?.stopSequences); - expect(config.responseMimeType).toBe(providerOptions.generationConfig?.responseMimeType); - expect(config.responseSchema).toEqual(providerOptions.generationConfig?.responseSchema); - expect(config.responseJsonSchema).toEqual(providerOptions.generationConfig?.responseJsonSchema); - expect(config.responseModalities).toEqual(providerOptions.generationConfig?.responseModalities); - expect(config.candidateCount).toBe(providerOptions.generationConfig?.candidateCount); - expect(config.topK).toBe(providerOptions.generationConfig?.topK); - expect(config.seed).toBe(providerOptions.generationConfig?.seed); - expect(config.presencePenalty).toBe(providerOptions.generationConfig?.presencePenalty); - expect(config.frequencyPenalty).toBe(providerOptions.generationConfig?.frequencyPenalty); - expect(config.responseLogprobs).toBe(providerOptions.generationConfig?.responseLogprobs); - expect(config.logprobs).toBe(providerOptions.generationConfig?.logprobs); - expect(config.enableEnhancedCivicAnswers).toBe( - providerOptions.generationConfig?.enableEnhancedCivicAnswers, - ); - expect(config.speechConfig).toEqual(providerOptions.generationConfig?.speechConfig); - expect(config.thinkingConfig).toEqual(providerOptions.generationConfig?.thinkingConfig); - expect(config.imageConfig).toEqual(providerOptions.generationConfig?.imageConfig); - }); - - it("streams chat chunks using mapped provider config", async () => { - const streamChunks = [ - { - candidates: [ - { - content: { - parts: [{ text: "Partly " }], - }, - }, - ], - }, - { - candidates: [ - { - 
content: { - parts: [{ text: "cloudy" }], - }, - finishReason: "STOP", - }, - ], - usageMetadata: { - promptTokenCount: 4, - thoughtsTokenCount: 2, - totalTokenCount: 6, - }, - }, - ]; - - mocks.generateContentStreamSpy.mockResolvedValue(createStream(streamChunks)); - - const adapter = createAdapter(); - const received: StreamChunk[] = []; - for await (const chunk of chat({ - adapter, - model: "gemini-2.5-pro", - messages: [{ role: "user", content: "Tell me a joke" }], - providerOptions: { - generationConfig: { topK: 3 }, - }, - options: { temperature: 0.2 }, - })) { - received.push(chunk); - } - - expect(mocks.generateContentStreamSpy).toHaveBeenCalledTimes(1); - const [streamPayload] = mocks.generateContentStreamSpy.mock.calls[0]; - expect(streamPayload.config?.topK).toBe(3); - expect(received[0]).toMatchObject({ - type: "content", - delta: "Partly ", - content: "Partly ", - }); - expect(received[1]).toMatchObject({ - type: "content", - delta: "cloudy", - content: "Partly cloudy", - }); - expect(received.at(-1)).toMatchObject({ - type: "done", - finishReason: "stop", - usage: { - promptTokens: 4, - completionTokens: 2, - totalTokens: 6, - }, - }); - }); - - it("uses summarize function with models API", async () => { - const summaryText = "Short and sweet."; - mocks.generateContentSpy.mockResolvedValueOnce({ - candidates: [ - { - content: { - parts: [{ text: summaryText }], - }, - }, - ], - }); - - const adapter = createAdapter(); - const result = await summarize({ - adapter, - model: "gemini-2.5-flash", - text: "A very long passage that needs to be shortened", - maxLength: 123, - style: "paragraph", - }); - - expect(mocks.generateContentSpy).toHaveBeenCalledTimes(1); - const [payload] = mocks.generateContentSpy.mock.calls[0]; - expect(payload.model).toBe("gemini-2.5-flash"); - expect(payload.config).toMatchObject({ - temperature: 0.3, - maxOutputTokens: 123, - }); - expect(result.summary).toBe(summaryText); - }); - - it("creates embeddings via embedding 
function", async () => { - mocks.embedContentSpy.mockResolvedValueOnce({ - embeddings: [ - { values: [0.1, 0.2] }, - { values: [0.3, 0.4] }, - ], - }); - - const adapter = createAdapter(); - const result = await embedding({ - adapter, - model: "gemini-embedding-001" as "gemini-2.5-pro", // type workaround for embedding model - input: ["doc one", "doc two"], - }); - - expect(mocks.embedContentSpy).toHaveBeenCalledTimes(1); - const [payload] = mocks.embedContentSpy.mock.calls[0]; - expect(payload.model).toBe("gemini-embedding-001"); - expect(payload.contents).toEqual(["doc one", "doc two"]); - expect(result.embeddings).toEqual([ - [0.1, 0.2], - [0.3, 0.4], - ]); - }); -}); +import { describe, it, expect, beforeEach, vi } from 'vitest' +import { chat, summarize, embedding } from '@tanstack/ai' +import type { Tool, StreamChunk } from '@tanstack/ai' +import type { + HarmBlockThreshold, + HarmCategory, + SafetySetting, +} from '@google/genai' +import type { Schema } from '../src/tools/function-declaration-tool' +import { + GeminiAdapter, + type GeminiProviderOptions, +} from '../src/gemini-adapter' + +const mocks = vi.hoisted(() => { + return { + constructorSpy: vi.fn<(options: { apiKey: string }) => void>(), + generateContentSpy: vi.fn(), + generateContentStreamSpy: vi.fn(), + embedContentSpy: vi.fn(), + getGenerativeModelSpy: vi.fn(), + } +}) + +vi.mock('@google/genai', () => { + const { + constructorSpy, + generateContentSpy, + generateContentStreamSpy, + embedContentSpy, + getGenerativeModelSpy, + } = mocks + + class MockGoogleGenAI { + public models = { + generateContent: generateContentSpy, + generateContentStream: generateContentStreamSpy, + embedContent: embedContentSpy, + } + + public getGenerativeModel = getGenerativeModelSpy + + constructor(options: { apiKey: string }) { + constructorSpy(options) + } + } + + return { GoogleGenAI: MockGoogleGenAI } +}) + +const createAdapter = () => new GeminiAdapter({ apiKey: 'test-key' }) + +const weatherTool: Tool = { + 
type: 'function', + function: { + name: 'lookup_weather', + description: 'Return the weather for a location', + parameters: { + type: 'object', + properties: { + location: { type: 'string' }, + }, + required: ['location'], + }, + }, +} + +const createStream = (chunks: Array<Record<string, unknown>>) => { + return (async function* () { + for (const chunk of chunks) { + yield chunk + } + })() +} + +describe('GeminiAdapter through AI', () => { + beforeEach(() => { + vi.clearAllMocks() + }) + + it('maps provider options for chat streaming', async () => { + const streamChunks = [ + { + candidates: [ + { + content: { + parts: [{ text: 'Sunny skies ahead' }], + }, + finishReason: 'STOP', + }, + ], + usageMetadata: { + promptTokenCount: 3, + thoughtsTokenCount: 1, + totalTokenCount: 4, + }, + }, + ] + + mocks.generateContentStreamSpy.mockResolvedValue(createStream(streamChunks)) + + const adapter = createAdapter() + + // Consume the stream to trigger the API call + for await (const _ of chat({ + adapter, + model: 'gemini-2.5-pro', + messages: [{ role: 'user', content: 'How is the weather in Madrid?' }], + providerOptions: { + generationConfig: { topK: 9 }, + }, + options: { + temperature: 0.4, + topP: 0.8, + maxTokens: 256, + }, + tools: [weatherTool], + })) { + /* consume stream */ + } + + expect(mocks.generateContentStreamSpy).toHaveBeenCalledTimes(1) + const [payload] = mocks.generateContentStreamSpy.mock.calls[0] + expect(payload.model).toBe('gemini-2.5-pro') + expect(payload.config).toMatchObject({ + temperature: 0.4, + topP: 0.8, + maxOutputTokens: 256, + topK: 9, + }) + expect(payload.config?.tools?.[0]?.functionDeclarations?.[0]?.name).toBe( + 'lookup_weather', + ) + expect(payload.contents).toEqual([ + { + role: 'user', + parts: [{ text: 'How is the weather in Madrid?'
}], + }, + ]) + }) + + it('maps every common and provider option into the Gemini payload', async () => { + const streamChunks = [ + { + candidates: [ + { + content: { + parts: [{ text: '' }], + }, + finishReason: 'STOP', + }, + ], + usageMetadata: undefined, + }, + ] + + mocks.generateContentStreamSpy.mockResolvedValue(createStream(streamChunks)) + + const safetySettings: SafetySetting[] = [ + { + category: 'HARM_CATEGORY_HATE_SPEECH' as HarmCategory, + threshold: 'BLOCK_LOW_AND_ABOVE' as HarmBlockThreshold, + }, + ] + + const responseSchema: Schema = { + type: 'OBJECT', + properties: { + summary: { type: 'STRING' }, + }, + } + + const responseJsonSchema: Schema = { + type: 'OBJECT', + properties: { + ok: { type: 'BOOLEAN' }, + }, + } + + const providerOptions: GeminiProviderOptions = { + safetySettings, + generationConfig: { + stopSequences: ['', '###'], + responseMimeType: 'application/json', + responseSchema, + responseJsonSchema, + responseModalities: ['TEXT'], + candidateCount: 2, + topK: 6, + seed: 7, + presencePenalty: 0.2, + frequencyPenalty: 0.4, + responseLogprobs: true, + logprobs: 3, + enableEnhancedCivicAnswers: true, + speechConfig: { + voiceConfig: { + prebuiltVoiceConfig: { + voiceName: 'Studio', + }, + }, + }, + thinkingConfig: { + includeThoughts: true, + thinkingBudget: 128, + }, + imageConfig: { + aspectRatio: '1:1', + }, + }, + cachedContent: 'cachedContents/weather-context', + } as const + + const adapter = createAdapter() + + // Consume the stream to trigger the API call + for await (const _ of chat({ + adapter, + model: 'gemini-2.5-pro', + messages: [{ role: 'user', content: 'Provide structured response' }], + options: { + temperature: 0.61, + topP: 0.37, + maxTokens: 512, + }, + systemPrompts: ['Stay concise', 'Return JSON'], + providerOptions, + })) { + /* consume stream */ + } + + expect(mocks.generateContentStreamSpy).toHaveBeenCalledTimes(1) + const [payload] = mocks.generateContentStreamSpy.mock.calls[0] + const config = payload.config 
+ + expect(config.temperature).toBe(0.61) + expect(config.topP).toBe(0.37) + expect(config.maxOutputTokens).toBe(512) + expect(config.cachedContent).toBe(providerOptions.cachedContent) + expect(config.safetySettings).toEqual(providerOptions.safetySettings) + expect(config.generationConfig).toEqual(providerOptions.generationConfig) + expect(config.stopSequences).toEqual( + providerOptions.generationConfig?.stopSequences, + ) + expect(config.responseMimeType).toBe( + providerOptions.generationConfig?.responseMimeType, + ) + expect(config.responseSchema).toEqual( + providerOptions.generationConfig?.responseSchema, + ) + expect(config.responseJsonSchema).toEqual( + providerOptions.generationConfig?.responseJsonSchema, + ) + expect(config.responseModalities).toEqual( + providerOptions.generationConfig?.responseModalities, + ) + expect(config.candidateCount).toBe( + providerOptions.generationConfig?.candidateCount, + ) + expect(config.topK).toBe(providerOptions.generationConfig?.topK) + expect(config.seed).toBe(providerOptions.generationConfig?.seed) + expect(config.presencePenalty).toBe( + providerOptions.generationConfig?.presencePenalty, + ) + expect(config.frequencyPenalty).toBe( + providerOptions.generationConfig?.frequencyPenalty, + ) + expect(config.responseLogprobs).toBe( + providerOptions.generationConfig?.responseLogprobs, + ) + expect(config.logprobs).toBe(providerOptions.generationConfig?.logprobs) + expect(config.enableEnhancedCivicAnswers).toBe( + providerOptions.generationConfig?.enableEnhancedCivicAnswers, + ) + expect(config.speechConfig).toEqual( + providerOptions.generationConfig?.speechConfig, + ) + expect(config.thinkingConfig).toEqual( + providerOptions.generationConfig?.thinkingConfig, + ) + expect(config.imageConfig).toEqual( + providerOptions.generationConfig?.imageConfig, + ) + }) + + it('streams chat chunks using mapped provider config', async () => { + const streamChunks = [ + { + candidates: [ + { + content: { + parts: [{ text: 'Partly ' }], 
+ }, + }, + ], + }, + { + candidates: [ + { + content: { + parts: [{ text: 'cloudy' }], + }, + finishReason: 'STOP', + }, + ], + usageMetadata: { + promptTokenCount: 4, + thoughtsTokenCount: 2, + totalTokenCount: 6, + }, + }, + ] + + mocks.generateContentStreamSpy.mockResolvedValue(createStream(streamChunks)) + + const adapter = createAdapter() + const received: StreamChunk[] = [] + for await (const chunk of chat({ + adapter, + model: 'gemini-2.5-pro', + messages: [{ role: 'user', content: 'Tell me a joke' }], + providerOptions: { + generationConfig: { topK: 3 }, + }, + options: { temperature: 0.2 }, + })) { + received.push(chunk) + } + + expect(mocks.generateContentStreamSpy).toHaveBeenCalledTimes(1) + const [streamPayload] = mocks.generateContentStreamSpy.mock.calls[0] + expect(streamPayload.config?.topK).toBe(3) + expect(received[0]).toMatchObject({ + type: 'content', + delta: 'Partly ', + content: 'Partly ', + }) + expect(received[1]).toMatchObject({ + type: 'content', + delta: 'cloudy', + content: 'Partly cloudy', + }) + expect(received.at(-1)).toMatchObject({ + type: 'done', + finishReason: 'stop', + usage: { + promptTokens: 4, + completionTokens: 2, + totalTokens: 6, + }, + }) + }) + + it('uses summarize function with models API', async () => { + const summaryText = 'Short and sweet.' 
+ mocks.generateContentSpy.mockResolvedValueOnce({ + candidates: [ + { + content: { + parts: [{ text: summaryText }], + }, + }, + ], + }) + + const adapter = createAdapter() + const result = await summarize({ + adapter, + model: 'gemini-2.5-flash', + text: 'A very long passage that needs to be shortened', + maxLength: 123, + style: 'paragraph', + }) + + expect(mocks.generateContentSpy).toHaveBeenCalledTimes(1) + const [payload] = mocks.generateContentSpy.mock.calls[0] + expect(payload.model).toBe('gemini-2.5-flash') + expect(payload.config).toMatchObject({ + temperature: 0.3, + maxOutputTokens: 123, + }) + expect(result.summary).toBe(summaryText) + }) + + it('creates embeddings via embedding function', async () => { + mocks.embedContentSpy.mockResolvedValueOnce({ + embeddings: [{ values: [0.1, 0.2] }, { values: [0.3, 0.4] }], + }) + + const adapter = createAdapter() + const result = await embedding({ + adapter, + model: 'gemini-embedding-001' as 'gemini-2.5-pro', // type workaround for embedding model + input: ['doc one', 'doc two'], + }) + + expect(mocks.embedContentSpy).toHaveBeenCalledTimes(1) + const [payload] = mocks.embedContentSpy.mock.calls[0] + expect(payload.model).toBe('gemini-embedding-001') + expect(payload.contents).toEqual(['doc one', 'doc two']) + expect(result.embeddings).toEqual([ + [0.1, 0.2], + [0.3, 0.4], + ]) + }) +}) diff --git a/packages/typescript/ai-gemini/tests/model-meta.test.ts b/packages/typescript/ai-gemini/tests/model-meta.test.ts index 23d05144a..c9cdecd59 100644 --- a/packages/typescript/ai-gemini/tests/model-meta.test.ts +++ b/packages/typescript/ai-gemini/tests/model-meta.test.ts @@ -1,257 +1,331 @@ -import { describe, it, expectTypeOf } from "vitest"; -import type { - GeminiChatModelProviderOptionsByName, -} from "../src/model-meta"; -import type { - GeminiThinkingOptions, - GeminiStructuredOutputOptions, - GeminiToolConfigOptions, - GeminiSafetyOptions, - GeminiGenerationConfigOptions, - GeminiCachedContentOptions, -} from 
"../src/text/text-provider-options"; - -/** - * Type assertion tests for Gemini model provider options. - * - * These tests verify that: - * 1. Models with thinking support have GeminiThinkingOptions in their provider options - * 2. Models without thinking support do NOT have GeminiThinkingOptions - * 3. Models with structured output support have GeminiStructuredOutputOptions - * 4. All models have base options (tool config, safety, generation config, cached content) - */ - -// Base options that ALL chat models should have -type BaseOptions = GeminiToolConfigOptions & - GeminiSafetyOptions & - GeminiGenerationConfigOptions & - GeminiCachedContentOptions; - -describe("Gemini Model Provider Options Type Assertions", () => { - describe("Models WITH thinking support", () => { - it("gemini-3-pro-preview should support thinking options", () => { - type Model = "gemini-3-pro-preview"; - type Options = GeminiChatModelProviderOptionsByName[Model]; - - // Should have thinking options - expectTypeOf().toExtend(); - - // Should have structured output options - expectTypeOf().toExtend(); - - // Should have base options - expectTypeOf().toExtend(); - - // Verify specific properties exist - expectTypeOf().toHaveProperty("generationConfig"); - expectTypeOf().toHaveProperty("safetySettings"); - expectTypeOf().toHaveProperty("toolConfig"); - expectTypeOf().toHaveProperty("cachedContent"); - expectTypeOf().toHaveProperty("responseMimeType"); - expectTypeOf().toHaveProperty("responseSchema"); - }); - - it("gemini-2.5-pro should support thinking options", () => { - type Model = "gemini-2.5-pro"; - type Options = GeminiChatModelProviderOptionsByName[Model]; - - // Should have thinking options - expectTypeOf().toExtend(); - - // Should have structured output options - expectTypeOf().toExtend(); - - // Should have base options - expectTypeOf().toExtend(); - }); - - it("gemini-2.5-flash should support thinking options", () => { - type Model = "gemini-2.5-flash"; - type Options = 
GeminiChatModelProviderOptionsByName[Model]; - - // Should have thinking options - expectTypeOf().toExtend(); - - // Should have structured output options - expectTypeOf().toExtend(); - - // Should have base options - expectTypeOf().toExtend(); - }); - - it("gemini-2.5-flash-preview-09-2025 should support thinking options", () => { - type Model = "gemini-2.5-flash-preview-09-2025"; - type Options = GeminiChatModelProviderOptionsByName[Model]; - - // Should have thinking options - expectTypeOf().toExtend(); - - // Should have structured output options - expectTypeOf().toExtend(); - - // Should have base options - expectTypeOf().toExtend(); - }); - - it("gemini-2.5-flash-lite should support thinking options", () => { - type Model = "gemini-2.5-flash-lite"; - type Options = GeminiChatModelProviderOptionsByName[Model]; - - // Should have thinking options - expectTypeOf().toExtend(); - - // Should have structured output options - expectTypeOf().toExtend(); - - // Should have base options - expectTypeOf().toExtend(); - }); - - it("gemini-2.5-flash-lite-preview-09-2025 should support thinking options", () => { - type Model = "gemini-2.5-flash-lite-preview-09-2025"; - type Options = GeminiChatModelProviderOptionsByName[Model]; - - // Should have thinking options - expectTypeOf().toExtend(); - - // Should have structured output options - expectTypeOf().toExtend(); - - // Should have base options - expectTypeOf().toExtend(); - }); - }); - - describe("Models WITHOUT thinking support", () => { - it("gemini-2.0-flash should NOT have thinking options in type definition", () => { - type Model = "gemini-2.0-flash"; - type Options = GeminiChatModelProviderOptionsByName[Model]; - - // Should NOT have thinking options - verify it's not assignable - // GeminiThinkingOptions has generationConfig.thinkingConfig which should not exist - expectTypeOf().not.toExtend< - GeminiThinkingOptions - >(); - - // Should have structured output options - expectTypeOf().toExtend(); - - // Should have 
base options - expectTypeOf().toExtend(); - - // Verify specific properties exist for structured output - expectTypeOf().toHaveProperty("responseMimeType"); - expectTypeOf().toHaveProperty("responseSchema"); - }); - - it("gemini-2.0-flash-lite should NOT have thinking options in type definition", () => { - type Model = "gemini-2.0-flash-lite"; - type Options = GeminiChatModelProviderOptionsByName[Model]; - - // Should NOT have thinking options - expectTypeOf().not.toExtend< - GeminiThinkingOptions - >(); - - // Should have structured output options - expectTypeOf().toExtend(); - - // Should have base options - expectTypeOf().toExtend(); - }); - }); - - describe("Provider options type completeness", () => { - it("GeminiChatModelProviderOptionsByName should have entries for all chat models", () => { - // Verify the type map has all expected model keys - type Keys = keyof GeminiChatModelProviderOptionsByName; - - expectTypeOf<"gemini-3-pro-preview">().toExtend(); - expectTypeOf<"gemini-2.5-pro">().toExtend(); - expectTypeOf<"gemini-2.5-flash">().toExtend(); - expectTypeOf<"gemini-2.5-flash-preview-09-2025">().toExtend(); - expectTypeOf<"gemini-2.5-flash-lite">().toExtend(); - expectTypeOf<"gemini-2.5-flash-lite-preview-09-2025">().toExtend(); - expectTypeOf<"gemini-2.0-flash">().toExtend(); - expectTypeOf<"gemini-2.0-flash-lite">().toExtend(); - }); - }); - - describe("Detailed property type assertions", () => { - it("thinking models should allow thinkingConfig in generationConfig", () => { - type Options = GeminiChatModelProviderOptionsByName["gemini-2.5-pro"]; - - // The generationConfig should include thinkingConfig from GeminiGenerationConfigOptions - // which intersects with GeminiThinkingOptions - expectTypeOf().toHaveProperty("generationConfig"); - }); - - it("structured output options should have responseMimeType and responseSchema", () => { - type Options = GeminiChatModelProviderOptionsByName["gemini-2.0-flash"]; - - 
expectTypeOf().toHaveProperty("responseMimeType"); - expectTypeOf().toHaveProperty("responseSchema"); - expectTypeOf().toHaveProperty("responseJsonSchema"); - }); - - it("all models should have safety settings", () => { - expectTypeOf().toHaveProperty("safetySettings"); - expectTypeOf().toHaveProperty("safetySettings"); - expectTypeOf().toHaveProperty("safetySettings"); - expectTypeOf().toHaveProperty("safetySettings"); - expectTypeOf().toHaveProperty("safetySettings"); - expectTypeOf().toHaveProperty("safetySettings"); - expectTypeOf().toHaveProperty("safetySettings"); - expectTypeOf().toHaveProperty("safetySettings"); - }); - - it("all models should have tool config", () => { - expectTypeOf().toHaveProperty("toolConfig"); - expectTypeOf().toHaveProperty("toolConfig"); - expectTypeOf().toHaveProperty("toolConfig"); - expectTypeOf().toHaveProperty("toolConfig"); - expectTypeOf().toHaveProperty("toolConfig"); - expectTypeOf().toHaveProperty("toolConfig"); - expectTypeOf().toHaveProperty("toolConfig"); - expectTypeOf().toHaveProperty("toolConfig"); - }); - - it("all models should have cached content option", () => { - expectTypeOf().toHaveProperty("cachedContent"); - expectTypeOf().toHaveProperty("cachedContent"); - expectTypeOf().toHaveProperty("cachedContent"); - expectTypeOf().toHaveProperty("cachedContent"); - expectTypeOf().toHaveProperty("cachedContent"); - expectTypeOf().toHaveProperty("cachedContent"); - expectTypeOf().toHaveProperty("cachedContent"); - expectTypeOf().toHaveProperty("cachedContent"); - }); - }); - - describe("Type discrimination between model categories", () => { - it("models with thinking should extend GeminiThinkingOptions", () => { - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - }); - - it("models without thinking should NOT extend GeminiThinkingOptions", () => { - expectTypeOf().not.toExtend(); - 
expectTypeOf().not.toExtend(); - }); - - it("all models should extend GeminiStructuredOutputOptions", () => { - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - }); - }); -}); +import { describe, it, expectTypeOf } from 'vitest' +import type { GeminiChatModelProviderOptionsByName } from '../src/model-meta' +import type { + GeminiThinkingOptions, + GeminiStructuredOutputOptions, + GeminiToolConfigOptions, + GeminiSafetyOptions, + GeminiGenerationConfigOptions, + GeminiCachedContentOptions, +} from '../src/text/text-provider-options' + +/** + * Type assertion tests for Gemini model provider options. + * + * These tests verify that: + * 1. Models with thinking support have GeminiThinkingOptions in their provider options + * 2. Models without thinking support do NOT have GeminiThinkingOptions + * 3. Models with structured output support have GeminiStructuredOutputOptions + * 4. 
All models have base options (tool config, safety, generation config, cached content) + */ + +// Base options that ALL chat models should have +type BaseOptions = GeminiToolConfigOptions & + GeminiSafetyOptions & + GeminiGenerationConfigOptions & + GeminiCachedContentOptions + +describe('Gemini Model Provider Options Type Assertions', () => { + describe('Models WITH thinking support', () => { + it('gemini-3-pro-preview should support thinking options', () => { + type Model = 'gemini-3-pro-preview' + type Options = GeminiChatModelProviderOptionsByName[Model] + + // Should have thinking options + expectTypeOf<Options>().toExtend<GeminiThinkingOptions>() + + // Should have structured output options + expectTypeOf<Options>().toExtend<GeminiStructuredOutputOptions>() + + // Should have base options + expectTypeOf<Options>().toExtend<BaseOptions>() + + // Verify specific properties exist + expectTypeOf<Options>().toHaveProperty('generationConfig') + expectTypeOf<Options>().toHaveProperty('safetySettings') + expectTypeOf<Options>().toHaveProperty('toolConfig') + expectTypeOf<Options>().toHaveProperty('cachedContent') + expectTypeOf<Options>().toHaveProperty('responseMimeType') + expectTypeOf<Options>().toHaveProperty('responseSchema') + }) + + it('gemini-2.5-pro should support thinking options', () => { + type Model = 'gemini-2.5-pro' + type Options = GeminiChatModelProviderOptionsByName[Model] + + // Should have thinking options + expectTypeOf<Options>().toExtend<GeminiThinkingOptions>() + + // Should have structured output options + expectTypeOf<Options>().toExtend<GeminiStructuredOutputOptions>() + + // Should have base options + expectTypeOf<Options>().toExtend<BaseOptions>() + }) + + it('gemini-2.5-flash should support thinking options', () => { + type Model = 'gemini-2.5-flash' + type Options = GeminiChatModelProviderOptionsByName[Model] + + // Should have thinking options + expectTypeOf<Options>().toExtend<GeminiThinkingOptions>() + + // Should have structured output options + expectTypeOf<Options>().toExtend<GeminiStructuredOutputOptions>() + + // Should have base options + expectTypeOf<Options>().toExtend<BaseOptions>() + }) + + it('gemini-2.5-flash-preview-09-2025 should support thinking options', () => { + type Model = 'gemini-2.5-flash-preview-09-2025' + type Options =
GeminiChatModelProviderOptionsByName[Model] + + // Should have thinking options + expectTypeOf<Options>().toExtend<GeminiThinkingOptions>() + + // Should have structured output options + expectTypeOf<Options>().toExtend<GeminiStructuredOutputOptions>() + + // Should have base options + expectTypeOf<Options>().toExtend<BaseOptions>() + }) + + it('gemini-2.5-flash-lite should support thinking options', () => { + type Model = 'gemini-2.5-flash-lite' + type Options = GeminiChatModelProviderOptionsByName[Model] + + // Should have thinking options + expectTypeOf<Options>().toExtend<GeminiThinkingOptions>() + + // Should have structured output options + expectTypeOf<Options>().toExtend<GeminiStructuredOutputOptions>() + + // Should have base options + expectTypeOf<Options>().toExtend<BaseOptions>() + }) + + it('gemini-2.5-flash-lite-preview-09-2025 should support thinking options', () => { + type Model = 'gemini-2.5-flash-lite-preview-09-2025' + type Options = GeminiChatModelProviderOptionsByName[Model] + + // Should have thinking options + expectTypeOf<Options>().toExtend<GeminiThinkingOptions>() + + // Should have structured output options + expectTypeOf<Options>().toExtend<GeminiStructuredOutputOptions>() + + // Should have base options + expectTypeOf<Options>().toExtend<BaseOptions>() + }) + }) + + describe('Models WITHOUT thinking support', () => { + it('gemini-2.0-flash should NOT have thinking options in type definition', () => { + type Model = 'gemini-2.0-flash' + type Options = GeminiChatModelProviderOptionsByName[Model] + + // Should NOT have thinking options - verify it's not assignable + // GeminiThinkingOptions has generationConfig.thinkingConfig which should not exist + expectTypeOf<Options>().not.toExtend<GeminiThinkingOptions>() + + // Should have structured output options + expectTypeOf<Options>().toExtend<GeminiStructuredOutputOptions>() + + // Should have base options + expectTypeOf<Options>().toExtend<BaseOptions>() + + // Verify specific properties exist for structured output + expectTypeOf<Options>().toHaveProperty('responseMimeType') + expectTypeOf<Options>().toHaveProperty('responseSchema') + }) + + it('gemini-2.0-flash-lite should NOT have thinking options in type definition', () => { + type Model = 'gemini-2.0-flash-lite' + type Options = GeminiChatModelProviderOptionsByName[Model] + + // Should NOT have thinking options +
expectTypeOf<Options>().not.toExtend<GeminiThinkingOptions>() + + // Should have structured output options + expectTypeOf<Options>().toExtend<GeminiStructuredOutputOptions>() + + // Should have base options + expectTypeOf<Options>().toExtend<BaseOptions>() + }) + }) + + describe('Provider options type completeness', () => { + it('GeminiChatModelProviderOptionsByName should have entries for all chat models', () => { + // Verify the type map has all expected model keys + type Keys = keyof GeminiChatModelProviderOptionsByName + + expectTypeOf<'gemini-3-pro-preview'>().toExtend<Keys>() + expectTypeOf<'gemini-2.5-pro'>().toExtend<Keys>() + expectTypeOf<'gemini-2.5-flash'>().toExtend<Keys>() + expectTypeOf<'gemini-2.5-flash-preview-09-2025'>().toExtend<Keys>() + expectTypeOf<'gemini-2.5-flash-lite'>().toExtend<Keys>() + expectTypeOf<'gemini-2.5-flash-lite-preview-09-2025'>().toExtend<Keys>() + expectTypeOf<'gemini-2.0-flash'>().toExtend<Keys>() + expectTypeOf<'gemini-2.0-flash-lite'>().toExtend<Keys>() + }) + }) + + describe('Detailed property type assertions', () => { + it('thinking models should allow thinkingConfig in generationConfig', () => { + type Options = GeminiChatModelProviderOptionsByName['gemini-2.5-pro'] + + // The generationConfig should include thinkingConfig from GeminiGenerationConfigOptions + // which intersects with GeminiThinkingOptions + expectTypeOf<Options>().toHaveProperty('generationConfig') + }) + + it('structured output options should have responseMimeType and responseSchema', () => { + type Options = GeminiChatModelProviderOptionsByName['gemini-2.0-flash'] + + expectTypeOf<Options>().toHaveProperty('responseMimeType') + expectTypeOf<Options>().toHaveProperty('responseSchema') + expectTypeOf<Options>().toHaveProperty('responseJsonSchema') + }) + + it('all models should have safety settings', () => { + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-3-pro-preview'] + >().toHaveProperty('safetySettings') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-pro'] + >().toHaveProperty('safetySettings') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash'] +
>().toHaveProperty('safetySettings') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash-preview-09-2025'] + >().toHaveProperty('safetySettings') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash-lite'] + >().toHaveProperty('safetySettings') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash-lite-preview-09-2025'] + >().toHaveProperty('safetySettings') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.0-flash'] + >().toHaveProperty('safetySettings') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.0-flash-lite'] + >().toHaveProperty('safetySettings') + }) + + it('all models should have tool config', () => { + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-3-pro-preview'] + >().toHaveProperty('toolConfig') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-pro'] + >().toHaveProperty('toolConfig') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash'] + >().toHaveProperty('toolConfig') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash-preview-09-2025'] + >().toHaveProperty('toolConfig') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash-lite'] + >().toHaveProperty('toolConfig') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash-lite-preview-09-2025'] + >().toHaveProperty('toolConfig') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.0-flash'] + >().toHaveProperty('toolConfig') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.0-flash-lite'] + >().toHaveProperty('toolConfig') + }) + + it('all models should have cached content option', () => { + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-3-pro-preview'] + >().toHaveProperty('cachedContent') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-pro'] + >().toHaveProperty('cachedContent') + expectTypeOf< + 
GeminiChatModelProviderOptionsByName['gemini-2.5-flash'] + >().toHaveProperty('cachedContent') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash-preview-09-2025'] + >().toHaveProperty('cachedContent') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash-lite'] + >().toHaveProperty('cachedContent') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash-lite-preview-09-2025'] + >().toHaveProperty('cachedContent') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.0-flash'] + >().toHaveProperty('cachedContent') + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.0-flash-lite'] + >().toHaveProperty('cachedContent') + }) + }) + + describe('Type discrimination between model categories', () => { + it('models with thinking should extend GeminiThinkingOptions', () => { + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-3-pro-preview'] + >().toExtend() + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-pro'] + >().toExtend() + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash'] + >().toExtend() + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash-preview-09-2025'] + >().toExtend() + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash-lite'] + >().toExtend() + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash-lite-preview-09-2025'] + >().toExtend() + }) + + it('models without thinking should NOT extend GeminiThinkingOptions', () => { + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.0-flash'] + >().not.toExtend() + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.0-flash-lite'] + >().not.toExtend() + }) + + it('all models should extend GeminiStructuredOutputOptions', () => { + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-3-pro-preview'] + >().toExtend() + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-pro'] + 
>().toExtend<GeminiStructuredOutputOptions>() + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash'] + >().toExtend<GeminiStructuredOutputOptions>() + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash-preview-09-2025'] + >().toExtend<GeminiStructuredOutputOptions>() + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash-lite'] + >().toExtend<GeminiStructuredOutputOptions>() + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.5-flash-lite-preview-09-2025'] + >().toExtend<GeminiStructuredOutputOptions>() + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.0-flash'] + >().toExtend<GeminiStructuredOutputOptions>() + expectTypeOf< + GeminiChatModelProviderOptionsByName['gemini-2.0-flash-lite'] + >().toExtend<GeminiStructuredOutputOptions>() + }) + }) +}) diff --git a/packages/typescript/ai-gemini/tsconfig.json b/packages/typescript/ai-gemini/tsconfig.json index 204ca8d3f..ea11c1096 100644 --- a/packages/typescript/ai-gemini/tsconfig.json +++ b/packages/typescript/ai-gemini/tsconfig.json @@ -5,6 +5,5 @@ "rootDir": "src" }, "include": ["src/**/*.ts", "src/**/*.tsx"], - "exclude": ["node_modules", "dist", "**/*.config.ts"], - "references": [{ "path": "../ai" }] + "exclude": ["node_modules", "dist", "**/*.config.ts"] } diff --git a/packages/typescript/ai-gemini/tsdown.config.ts b/packages/typescript/ai-gemini/tsdown.config.ts deleted file mode 100644 index e5d9cb35a..000000000 --- a/packages/typescript/ai-gemini/tsdown.config.ts +++ /dev/null @@ -1,12 +0,0 @@ -import { defineConfig } from "tsdown"; - -export default defineConfig({ - entry: ["./src/index.ts"], - format: ["esm"], - unbundle: true, - dts: true, - sourcemap: true, - clean: true, - minify: false, - external: ["@google/generative-ai"], -}); diff --git a/packages/typescript/ai-gemini/vite.config.ts b/packages/typescript/ai-gemini/vite.config.ts new file mode 100644 index 000000000..e83c13eb9 --- /dev/null +++ b/packages/typescript/ai-gemini/vite.config.ts @@ -0,0 +1,35 @@ +import { defineConfig, mergeConfig } from 'vitest/config' +import { tanstackViteConfig } from '@tanstack/config/vite' +import packageJson from './package.json' + +const
config = defineConfig({ + test: { + name: packageJson.name, + dir: './', + watch: false, + globals: true, + environment: 'node', + include: ['tests/**/*.test.ts'], + coverage: { + provider: 'v8', + reporter: ['text', 'json', 'html', 'lcov'], + exclude: [ + 'node_modules/', + 'dist/', + 'tests/', + '**/*.test.ts', + '**/*.config.ts', + '**/types.ts', + ], + include: ['src/**/*.ts'], + }, + }, +}) + +export default mergeConfig( + config, + tanstackViteConfig({ + entry: ['./src/index.ts'], + srcDir: './src', + }), +) diff --git a/packages/typescript/ai-gemini/vitest.config.ts b/packages/typescript/ai-gemini/vitest.config.ts index 8fa8bfb9e..fa2531743 100644 --- a/packages/typescript/ai-gemini/vitest.config.ts +++ b/packages/typescript/ai-gemini/vitest.config.ts @@ -1,22 +1,22 @@ -import { defineConfig } from "vitest/config"; - -export default defineConfig({ - test: { - globals: true, - environment: "node", - include: ["tests/**/*.test.ts"], - coverage: { - provider: "v8", - reporter: ["text", "json", "html", "lcov"], - exclude: [ - "node_modules/", - "dist/", - "tests/", - "**/*.test.ts", - "**/*.config.ts", - "**/types.ts", - ], - include: ["src/**/*.ts"], - }, - }, -}); +import { defineConfig } from 'vitest/config' + +export default defineConfig({ + test: { + globals: true, + environment: 'node', + include: ['tests/**/*.test.ts'], + coverage: { + provider: 'v8', + reporter: ['text', 'json', 'html', 'lcov'], + exclude: [ + 'node_modules/', + 'dist/', + 'tests/', + '**/*.test.ts', + '**/*.config.ts', + '**/types.ts', + ], + include: ['src/**/*.ts'], + }, + }, +}) diff --git a/packages/typescript/ai-ollama/README.md b/packages/typescript/ai-ollama/README.md new file mode 100644 index 000000000..7c4143074 --- /dev/null +++ b/packages/typescript/ai-ollama/README.md @@ -0,0 +1,104 @@ +
+ +
+ +
+ +
+ + + + + + + + + +
+ + + +
+ +### [Become a Sponsor!](https://github.com/sponsors/tannerlinsley/) +
+ +# TanStack AI + +A powerful, type-safe AI SDK for building AI-powered applications. + +- Provider-agnostic adapters (OpenAI, Anthropic, Gemini, Ollama, etc.) +- Chat completion, streaming, and agent loop strategies +- Headless chat state management with adapters (SSE, HTTP stream, custom) +- Type-safe tools with server/client execution + +### Read the docs → + +## Get Involved + +- We welcome issues and pull requests! +- Participate in [GitHub discussions](https://github.com/TanStack/ai/discussions) +- Chat with the community on [Discord](https://discord.com/invite/WrRKjPJ) +- See [CONTRIBUTING.md](./CONTRIBUTING.md) for setup instructions + +## Partners + + + + + + +
+ + + + + CodeRabbit + + + + + + + + Cloudflare + + +
+ +
+AI & you? +

+We're looking for TanStack AI Partners to join our mission! Partner with us to push the boundaries of TanStack AI and build amazing things together. +

+LET'S CHAT +
+ +## Explore the TanStack Ecosystem + +- TanStack Config – Tooling for JS/TS packages +- TanStack DB – Reactive sync client store +- TanStack Devtools – Unified devtools panel +- TanStack Form – Type‑safe form state +- TanStack Pacer – Debouncing, throttling, batching +- TanStack Query – Async state & caching +- TanStack Ranger – Range & slider primitives +- TanStack Router – Type‑safe routing, caching & URL state +- TanStack Start – Full‑stack SSR & streaming +- TanStack Store – Reactive data store +- TanStack Table – Headless datagrids +- TanStack Virtual – Virtualized rendering + +… and more at TanStack.com Ā» + + diff --git a/packages/typescript/ai-ollama/package.json b/packages/typescript/ai-ollama/package.json index 8488f7532..64645f1ed 100644 --- a/packages/typescript/ai-ollama/package.json +++ b/packages/typescript/ai-ollama/package.json @@ -10,12 +10,12 @@ "directory": "packages/typescript/ai-ollama" }, "type": "module", - "module": "./dist/index.js", - "types": "./dist/index.d.ts", + "module": "./dist/esm/index.js", + "types": "./dist/esm/index.d.ts", "exports": { ".": { - "types": "./dist/index.d.ts", - "import": "./dist/index.js" + "types": "./dist/esm/index.d.ts", + "import": "./dist/esm/index.js" } }, "files": [ @@ -23,11 +23,14 @@ "src" ], "scripts": { - "build": "tsdown", - "dev": "tsdown --watch", - "clean": "rm -rf dist node_modules", - "typecheck": "tsc --noEmit", - "lint": "tsc --noEmit" + "build": "vite build", + "clean": "premove ./build ./dist", + "lint:fix": "eslint ./src --fix", + "test:build": "publint --strict", + "test:eslint": "eslint ./src", + "test:lib": "vitest --passWithNoTests", + "test:lib:dev": "pnpm test:lib --watch", + "test:types": "tsc" }, "keywords": [ "ai", @@ -39,12 +42,11 @@ ], "dependencies": { "@tanstack/ai": "workspace:*", - "ollama": "^0.5.0" + "ollama": "^0.6.3" }, "devDependencies": { - "@types/node": "^22.10.2", - "tsdown": "^0.15.9", - "typescript": "^5.7.2" + "@vitest/coverage-v8": "4.0.14", + "vite": "^7.2.4" 
}, "peerDependencies": { "@tanstack/ai": "workspace:*" diff --git a/packages/typescript/ai-ollama/src/index.ts b/packages/typescript/ai-ollama/src/index.ts index d301d2a76..47ec1a8ca 100644 --- a/packages/typescript/ai-ollama/src/index.ts +++ b/packages/typescript/ai-ollama/src/index.ts @@ -1,2 +1,6 @@ -export { Ollama, createOllama, ollama, type OllamaConfig } from "./ollama-adapter"; - +export { + Ollama, + createOllama, + ollama, + type OllamaConfig, +} from './ollama-adapter' diff --git a/packages/typescript/ai-ollama/src/ollama-adapter.ts b/packages/typescript/ai-ollama/src/ollama-adapter.ts index 0332999f8..a8f043970 100644 --- a/packages/typescript/ai-ollama/src/ollama-adapter.ts +++ b/packages/typescript/ai-ollama/src/ollama-adapter.ts @@ -1,129 +1,137 @@ -import { Ollama as OllamaSDK } from "ollama"; -import { - BaseAdapter, - type ChatOptions, - type SummarizationOptions, - type SummarizationResult, - type EmbeddingOptions, - type EmbeddingResult, - type StreamChunk, -} from "@tanstack/ai"; +import { Ollama as OllamaSDK } from 'ollama' +import { BaseAdapter } from '@tanstack/ai' +import type { + ChatOptions, + EmbeddingOptions, + EmbeddingResult, + StreamChunk, + SummarizationOptions, + SummarizationResult, +} from '@tanstack/ai' export interface OllamaConfig { - host?: string; + host?: string } const OLLAMA_MODELS = [ - "llama2", - "llama3", - "codellama", - "mistral", - "mixtral", - "phi", - "neural-chat", - "starling-lm", - "orca-mini", - "vicuna", - "nous-hermes", - "nomic-embed-text", - "gpt-oss:20b", -] as const; - -const OLLAMA_IMAGE_MODELS = [] as const; -const OLLAMA_EMBEDDING_MODELS = [] as const; -const OLLAMA_AUDIO_MODELS = [] as const; -const OLLAMA_VIDEO_MODELS = [] as const; - -export type OllamaModel = (typeof OLLAMA_MODELS)[number]; + 'llama2', + 'llama3', + 'codellama', + 'mistral', + 'mixtral', + 'phi', + 'neural-chat', + 'starling-lm', + 'orca-mini', + 'vicuna', + 'nous-hermes', + 'nomic-embed-text', + 'gpt-oss:20b', +] as const + 
+const OLLAMA_IMAGE_MODELS = [] as const +const OLLAMA_EMBEDDING_MODELS = [] as const +const OLLAMA_AUDIO_MODELS = [] as const +const OLLAMA_VIDEO_MODELS = [] as const + +// type OllamaModel = (typeof OLLAMA_MODELS)[number] /** * Ollama-specific provider options * Based on Ollama API options * @see https://github.com/ollama/ollama/blob/main/docs/api.md */ -export interface OllamaProviderOptions { +interface OllamaProviderOptions { /** Number of tokens to keep from the prompt */ - num_keep?: number; + num_keep?: number /** Number of tokens from context to consider for next token prediction */ - top_k?: number; + top_k?: number /** Minimum probability for nucleus sampling */ - min_p?: number; + min_p?: number /** Tail-free sampling parameter */ - tfs_z?: number; + tfs_z?: number /** Typical probability sampling parameter */ - typical_p?: number; + typical_p?: number /** Number of previous tokens to consider for repetition penalty */ - repeat_last_n?: number; + repeat_last_n?: number /** Penalty for repeating tokens */ - repeat_penalty?: number; + repeat_penalty?: number /** Enable Mirostat sampling (0=disabled, 1=Mirostat, 2=Mirostat 2.0) */ - mirostat?: number; + mirostat?: number /** Target entropy for Mirostat */ - mirostat_tau?: number; + mirostat_tau?: number /** Learning rate for Mirostat */ - mirostat_eta?: number; + mirostat_eta?: number /** Enable penalize_newline */ - penalize_newline?: boolean; + penalize_newline?: boolean /** Enable NUMA support */ - numa?: boolean; + numa?: boolean /** Context window size */ - num_ctx?: number; + num_ctx?: number /** Batch size for prompt processing */ - num_batch?: number; + num_batch?: number /** Number of GQA groups (for some models) */ - num_gqa?: number; + num_gqa?: number /** Number of GPU layers to use */ - num_gpu?: number; + num_gpu?: number /** GPU to use for inference */ - main_gpu?: number; + main_gpu?: number /** Use memory-mapped model */ - use_mmap?: boolean; + use_mmap?: boolean /** Use memory-locked 
model */ - use_mlock?: boolean; + use_mlock?: boolean /** Number of threads to use */ - num_thread?: number; + num_thread?: number } interface ChatCompletionChunk { - id: string; - model: string; - content: string; - role?: "assistant"; - finishReason?: "stop" | "length" | "content_filter" | null; + id: string + model: string + content: string + role?: 'assistant' + finishReason?: 'stop' | 'length' | 'content_filter' | 'tool_calls' | null + toolCalls?: Array<{ + id: string + type: 'function' + function: { + name: string + arguments: string + } + }> usage?: { - promptTokens: number; - completionTokens: number; - totalTokens: number; - }; + promptTokens: number + completionTokens: number + totalTokens: number + } } async function* convertChatCompletionStream( stream: AsyncIterable<ChatCompletionChunk>, - _model: string + _model: string, ): AsyncIterable<StreamChunk> { - let accumulatedContent = ""; - const timestamp = Date.now(); - let nextToolIndex = 0; + let accumulatedContent = '' + const timestamp = Date.now() + let nextToolIndex = 0 for await (const chunk of stream) { if (chunk.content) { - accumulatedContent += chunk.content; + accumulatedContent += chunk.content yield { - type: "content", + type: 'content', id: chunk.id, model: chunk.model, timestamp, delta: chunk.content, content: accumulatedContent, role: chunk.role, - }; + } } // Handle tool calls if present if (chunk.toolCalls && chunk.toolCalls.length > 0) { for (const toolCall of chunk.toolCalls) { yield { - type: "tool_call", + type: 'tool_call', id: chunk.id, model: chunk.model, timestamp, @@ -136,19 +144,19 @@ async function* convertChatCompletionStream( }, }, index: nextToolIndex++, - }; + } } } if (chunk.finishReason) { yield { - type: "done", + type: 'done', id: chunk.id, model: chunk.model, timestamp, finishReason: chunk.finishReason, usage: chunk.usage, - }; + } } } } @@ -157,19 +165,21 @@ async function* convertChatCompletionStream( * Converts standard Tool format to Ollama-specific tool format * Ollama uses OpenAI-compatible
tool format */ -function convertToolsToOllamaFormat(tools?: any[]): any[] | undefined { +function convertToolsToOllamaFormat( + tools?: Array<any>, +): Array<any> | undefined { if (!tools || tools.length === 0) { - return undefined; + return undefined } return tools.map((tool) => ({ - type: "function", + type: 'function', function: { name: tool.function.name, description: tool.function.description, parameters: tool.function.parameters, }, - })); + })) } /** @@ -178,24 +188,24 @@ function convertToolsToOllamaFormat(tools?: any[]): any[] | undefined { */ function mapCommonOptionsToOllama( options: ChatOptions, - providerOpts?: OllamaProviderOptions + providerOpts?: OllamaProviderOptions, ): any { const ollamaOptions = { temperature: options.options?.temperature, top_p: options.options?.topP, num_predict: options.options?.maxTokens, - }; + } // Apply Ollama-specific provider options if (providerOpts) { - Object.assign(ollamaOptions, providerOpts); + Object.assign(ollamaOptions, providerOpts) } return { model: options.model, options: ollamaOptions, tools: convertToolsToOllamaFormat(options.tools), - }; + } } export class Ollama extends BaseAdapter< @@ -204,56 +214,56 @@ export class Ollama extends BaseAdapter< OllamaProviderOptions, Record > { - name = "ollama"; - models = OLLAMA_MODELS; - imageModels = OLLAMA_IMAGE_MODELS; - embeddingModels = OLLAMA_EMBEDDING_MODELS; - audioModels = OLLAMA_AUDIO_MODELS; - videoModels = OLLAMA_VIDEO_MODELS; - private client: OllamaSDK; + name = 'ollama' + models = OLLAMA_MODELS + imageModels = OLLAMA_IMAGE_MODELS + embeddingModels = OLLAMA_EMBEDDING_MODELS + audioModels = OLLAMA_AUDIO_MODELS + videoModels = OLLAMA_VIDEO_MODELS + private client: OllamaSDK constructor(config: OllamaConfig = {}) { - super({}); + super({}) this.client = new OllamaSDK({ - host: config.host || "http://localhost:11434", - }); + host: config.host || 'http://localhost:11434', + }) } async *chatCompletionStream( - options: ChatOptions + options: ChatOptions, ): 
AsyncIterable<ChatCompletionChunk> { const providerOpts = options.providerOptions as | OllamaProviderOptions - | undefined; + | undefined // Map common options to Ollama format - const mappedOptions = mapCommonOptionsToOllama(options, providerOpts); + const mappedOptions = mapCommonOptionsToOllama(options, providerOpts) // Format messages for Ollama (handle tool calls and tool results) const formattedMessages = options.messages.map((msg) => { const baseMessage: any = { - role: msg.role as "user" | "assistant" | "system", - content: msg.content || "", - }; + role: msg.role as 'user' | 'assistant' | 'system', + content: msg.content || '', + } // Handle tool calls (assistant messages) // Ollama expects arguments as an object, not a JSON string if ( - msg.role === "assistant" && + msg.role === 'assistant' && msg.toolCalls && msg.toolCalls.length > 0 ) { baseMessage.tool_calls = msg.toolCalls.map((toolCall) => { // Parse string arguments to object for Ollama - let parsedArguments: any = {}; - if (typeof toolCall.function.arguments === "string") { + let parsedArguments: any = {} + if (typeof toolCall.function.arguments === 'string') { try { - parsedArguments = JSON.parse(toolCall.function.arguments); + parsedArguments = JSON.parse(toolCall.function.arguments) } catch { - parsedArguments = {}; + parsedArguments = {} } } else { - parsedArguments = toolCall.function.arguments || {}; + parsedArguments = toolCall.function.arguments } return { @@ -263,22 +273,22 @@ export class Ollama extends BaseAdapter< name: toolCall.function.name, arguments: parsedArguments, }, - }; - }); + } + }) } // Handle tool results (tool messages) - if (msg.role === "tool" && msg.toolCallId) { - baseMessage.role = "tool"; - baseMessage.tool_call_id = msg.toolCallId; + if (msg.role === 'tool' && msg.toolCallId) { + baseMessage.role = 'tool' + baseMessage.tool_call_id = msg.toolCallId baseMessage.content = - typeof msg.content === "string" + typeof msg.content === 'string' ? 
msg.content - : JSON.stringify(msg.content); + : JSON.stringify(msg.content) } - return baseMessage; - }); + return baseMessage + }) const response = await this.client.chat({ model: mappedOptions.model, @@ -286,45 +296,45 @@ export class Ollama extends BaseAdapter< options: mappedOptions.options, tools: mappedOptions.tools, stream: true, - }); + }) - let hasToolCalls = false; + let hasToolCalls = false for await (const chunk of response) { // Check if tool calls are present in this chunk - const toolCalls = chunk.message.tool_calls || []; + const toolCalls = chunk.message.tool_calls || [] if (toolCalls.length > 0) { - hasToolCalls = true; + hasToolCalls = true } const result: ChatCompletionChunk = { id: this.generateId(), - model: chunk.model || options.model || "llama2", - content: chunk.message.content || "", - role: "assistant", + model: chunk.model || options.model || 'llama2', + content: chunk.message.content || '', + role: 'assistant', finishReason: chunk.done ? hasToolCalls - ? "tool_calls" - : "stop" + ? 'tool_calls' + : 'stop' : null, - }; + } // Handle tool calls if present if (toolCalls.length > 0) { result.toolCalls = toolCalls.map((tc: any) => ({ id: tc.id || this.generateId(), - type: tc.type || "function", + type: tc.type || 'function', function: { - name: tc.function?.name || "", + name: tc.function?.name || '', arguments: - typeof tc.function?.arguments === "string" + typeof tc.function?.arguments === 'string' ? 
tc.function.arguments : JSON.stringify(tc.function?.arguments || {}), }, - })); + })) } - yield result; + yield result } } @@ -333,25 +343,25 @@ export class Ollama extends BaseAdapter< // TODO: Implement native structured streaming for Ollama yield* convertChatCompletionStream( this.chatCompletionStream(options), - options.model || "llama2" - ); + options.model || 'llama2', + ) } async summarize(options: SummarizationOptions): Promise<SummarizationResult> { - const prompt = this.buildSummarizationPrompt(options, options.text); + const prompt = this.buildSummarizationPrompt(options, options.text) const response = await this.client.generate({ - model: options.model || "llama2", + model: options.model || 'llama2', prompt, options: { temperature: 0.3, num_predict: options.maxLength || 500, }, stream: false, - }); + }) - const promptTokens = this.estimateTokens(prompt); - const completionTokens = this.estimateTokens(response.response); + const promptTokens = this.estimateTokens(prompt) + const completionTokens = this.estimateTokens(response.response) return { id: this.generateId(), @@ -362,71 +372,71 @@ export class Ollama extends BaseAdapter< completionTokens, totalTokens: promptTokens + completionTokens, }, - }; + } } async createEmbeddings(options: EmbeddingOptions): Promise<EmbeddingResult> { const inputs = Array.isArray(options.input) ? 
options.input - : [options.input]; - const embeddings: number[][] = []; + : [options.input] + const embeddings: Array<Array<number>> = [] for (const input of inputs) { const response = await this.client.embeddings({ - model: options.model || "nomic-embed-text", + model: options.model || 'nomic-embed-text', prompt: input, - }); - embeddings.push(response.embedding); + }) + embeddings.push(response.embedding) } const promptTokens = inputs.reduce( (sum, input) => sum + this.estimateTokens(input), - 0 - ); + 0, + ) return { id: this.generateId(), - model: options.model || "nomic-embed-text", + model: options.model || 'nomic-embed-text', embeddings, usage: { promptTokens, totalTokens: promptTokens, }, - }; + } } private buildSummarizationPrompt( options: SummarizationOptions, - text: string + text: string, ): string { - let prompt = "You are a professional summarizer. "; + let prompt = 'You are a professional summarizer. ' switch (options.style) { - case "bullet-points": - prompt += "Provide a summary in bullet point format. "; - break; - case "paragraph": - prompt += "Provide a summary in paragraph format. "; - break; - case "concise": - prompt += "Provide a very concise summary in 1-2 sentences. "; - break; + case 'bullet-points': + prompt += 'Provide a summary in bullet point format. ' + break + case 'paragraph': + prompt += 'Provide a summary in paragraph format. ' + break + case 'concise': + prompt += 'Provide a very concise summary in 1-2 sentences. ' + break default: - prompt += "Provide a clear and concise summary. "; + prompt += 'Provide a clear and concise summary. ' } if (options.focus && options.focus.length > 0) { - prompt += `Focus on the following aspects: ${options.focus.join(", ")}. `; + prompt += `Focus on the following aspects: ${options.focus.join(', ')}. 
` } - prompt += `\n\nText to summarize:\n${text}\n\nSummary:`; + prompt += `\n\nText to summarize:\n${text}\n\nSummary:` - return prompt; + return prompt } private estimateTokens(text: string): number { // Rough approximation: 1 token ā‰ˆ 4 characters - return Math.ceil(text.length / 4); + return Math.ceil(text.length / 4) } } @@ -450,9 +460,9 @@ export class Ollama extends BaseAdapter< */ export function createOllama( host?: string, - config?: Omit<OllamaConfig, "host"> + config?: Omit<OllamaConfig, 'host'>, ): Ollama { - return new Ollama({ host, ...config }); + return new Ollama({ host, ...config }) } /** @@ -473,14 +483,14 @@ export function createOllama( * const aiInstance = ai(ollama()); * ``` */ -export function ollama(config?: Omit<OllamaConfig, "host">): Ollama { +export function ollama(config?: Omit<OllamaConfig, 'host'>): Ollama { const env = - typeof globalThis !== "undefined" && (globalThis as any).window?.env + typeof globalThis !== 'undefined' && (globalThis as any).window?.env ? (globalThis as any).window.env - : typeof process !== "undefined" - ? process.env - : undefined; - const host = env?.OLLAMA_HOST; + : typeof process !== 'undefined' + ? 
process.env + : undefined + const host = env?.OLLAMA_HOST - return createOllama(host, config); + return createOllama(host, config) } diff --git a/packages/typescript/ai-ollama/tsconfig.json b/packages/typescript/ai-ollama/tsconfig.json index 204ca8d3f..ea11c1096 100644 --- a/packages/typescript/ai-ollama/tsconfig.json +++ b/packages/typescript/ai-ollama/tsconfig.json @@ -5,6 +5,5 @@ "rootDir": "src" }, "include": ["src/**/*.ts", "src/**/*.tsx"], - "exclude": ["node_modules", "dist", "**/*.config.ts"], - "references": [{ "path": "../ai" }] + "exclude": ["node_modules", "dist", "**/*.config.ts"] } diff --git a/packages/typescript/ai-ollama/tsdown.config.ts b/packages/typescript/ai-ollama/tsdown.config.ts deleted file mode 100644 index e3f56ca72..000000000 --- a/packages/typescript/ai-ollama/tsdown.config.ts +++ /dev/null @@ -1,12 +0,0 @@ -import { defineConfig } from "tsdown"; - -export default defineConfig({ - entry: ["./src/index.ts"], - format: ["esm"], - unbundle: true, - dts: true, - sourcemap: true, - clean: true, - minify: false, - external: ["ollama"], -}); diff --git a/packages/typescript/ai-ollama/vite.config.ts b/packages/typescript/ai-ollama/vite.config.ts new file mode 100644 index 000000000..e83c13eb9 --- /dev/null +++ b/packages/typescript/ai-ollama/vite.config.ts @@ -0,0 +1,35 @@ +import { defineConfig, mergeConfig } from 'vitest/config' +import { tanstackViteConfig } from '@tanstack/config/vite' +import packageJson from './package.json' + +const config = defineConfig({ + test: { + name: packageJson.name, + dir: './', + watch: false, + globals: true, + environment: 'node', + include: ['tests/**/*.test.ts'], + coverage: { + provider: 'v8', + reporter: ['text', 'json', 'html', 'lcov'], + exclude: [ + 'node_modules/', + 'dist/', + 'tests/', + '**/*.test.ts', + '**/*.config.ts', + '**/types.ts', + ], + include: ['src/**/*.ts'], + }, + }, +}) + +export default mergeConfig( + config, + tanstackViteConfig({ + entry: ['./src/index.ts'], + srcDir: './src', + 
}), +) diff --git a/packages/typescript/ai-openai/README.md b/packages/typescript/ai-openai/README.md new file mode 100644 index 000000000..7c4143074 --- /dev/null +++ b/packages/typescript/ai-openai/README.md @@ -0,0 +1,104 @@ +
+ +
+ +
+ +
+ + + + + + + + + +
+ + + +
+ +### [Become a Sponsor!](https://github.com/sponsors/tannerlinsley/) +
+ +# TanStack AI + +A powerful, type-safe AI SDK for building AI-powered applications. + +- Provider-agnostic adapters (OpenAI, Anthropic, Gemini, Ollama, etc.) +- Chat completion, streaming, and agent loop strategies +- Headless chat state management with adapters (SSE, HTTP stream, custom) +- Type-safe tools with server/client execution + +### Read the docs → + +## Get Involved + +- We welcome issues and pull requests! +- Participate in [GitHub discussions](https://github.com/TanStack/ai/discussions) +- Chat with the community on [Discord](https://discord.com/invite/WrRKjPJ) +- See [CONTRIBUTING.md](./CONTRIBUTING.md) for setup instructions + +## Partners + + + + + + +
+ + + + + CodeRabbit + + + + + + + + Cloudflare + + +
+ +
+AI & you? +

+We're looking for TanStack AI Partners to join our mission! Partner with us to push the boundaries of TanStack AI and build amazing things together. +

+LET'S CHAT +
+ +## Explore the TanStack Ecosystem + +- TanStack Config – Tooling for JS/TS packages +- TanStack DB – Reactive sync client store +- TanStack Devtools – Unified devtools panel +- TanStack Form – Type‑safe form state +- TanStack Pacer – Debouncing, throttling, batching +- TanStack Query – Async state & caching +- TanStack Ranger – Range & slider primitives +- TanStack Router – Type‑safe routing, caching & URL state +- TanStack Start – Full‑stack SSR & streaming +- TanStack Store – Reactive data store +- TanStack Table – Headless datagrids +- TanStack Virtual – Virtualized rendering + +… and more at TanStack.com Ā» + + diff --git a/packages/typescript/ai-openai/live-tests/example-guitars.ts b/packages/typescript/ai-openai/live-tests/example-guitars.ts deleted file mode 100644 index 16b6764f7..000000000 --- a/packages/typescript/ai-openai/live-tests/example-guitars.ts +++ /dev/null @@ -1,83 +0,0 @@ -export interface Guitar { - id: number - name: string - image: string - description: string - shortDescription: string - price: number -} - -const guitars: Array<Guitar> = [ - { - id: 1, - name: 'Video Game Guitar', - image: '/example-guitar-video-games.jpg', - description: - "The Video Game Guitar is a unique acoustic guitar that features a design inspired by video games. It has a sleek, high-gloss finish and a comfortable playability. The guitar's ergonomic body and fast neck profile ensure comfortable playability for hours on end.", - shortDescription: - 'A unique electric guitar with a video game design, high-gloss finish, and comfortable playability.', - price: 699, - }, - { - id: 2, - name: 'Superhero Guitar', - image: '/example-guitar-superhero.jpg', - description: - "The Superhero Guitar is a bold black electric guitar that stands out with its unique superhero logo design. Its sleek, high-gloss finish and powerful pickups make it perfect for high-energy performances. 
The guitar's ergonomic body and fast neck profile ensure comfortable playability for hours on end.", - shortDescription: - 'A bold black electric guitar with a unique superhero logo, high-gloss finish, and powerful pickups.', - price: 699, - }, - { - id: 3, - name: 'Motherboard Guitar', - image: '/example-guitar-motherboard.jpg', - description: - "This guitar is a tribute to the motherboard of a computer. It's a unique and stylish instrument that will make you feel like a hacker. The intricate circuit-inspired design features actual LED lights that pulse with your playing intensity, while the neck is inlaid with binary code patterns that glow under stage lights. Each pickup has been custom-wound to produce tones ranging from clean digital precision to glitched-out distortion, perfect for electronic music fusion. The Motherboard Guitar seamlessly bridges the gap between traditional craftsmanship and cutting-edge technology, making it the ultimate instrument for the digital age musician.", - shortDescription: - 'A tech-inspired electric guitar featuring LED lights and binary code inlays that glow under stage lights.', - price: 649, - }, - { - id: 4, - name: 'Racing Guitar', - image: '/example-guitar-racing.jpg', - description: - "Engineered for speed and precision, the Racing Guitar embodies the spirit of motorsport in every curve and contour. Its aerodynamic body, painted in classic racing stripes and high-gloss finish, is crafted from lightweight materials that allow for effortless play during extended performances. The custom low-action setup and streamlined neck profile enable lightning-fast fretwork, while specially designed pickups deliver a high-octane tone that cuts through any mix. 
Built with performance-grade hardware including racing-inspired control knobs and checkered flag inlays, this guitar isn't just played—it's driven to the limits of musical possibility.", - shortDescription: - 'A lightweight, aerodynamic guitar with racing stripes and a low-action setup designed for speed and precision.', - price: 679, - }, - { - id: 5, - name: 'Steamer Trunk Guitar', - image: '/example-guitar-steamer-trunk.jpg', - description: - 'The Steamer Trunk Guitar is a semi-hollow body instrument that exudes vintage charm and character. Crafted from reclaimed antique luggage wood, it features brass hardware that adds a touch of elegance and durability. The fretboard is adorned with a world map inlay, making it a unique piece that tells a story of travel and adventure.', - shortDescription: - 'A semi-hollow body guitar with brass hardware and a world map inlay, crafted from reclaimed antique luggage wood.', - price: 629, - }, - { - id: 6, - name: "Travelin' Man Guitar", - image: '/example-guitar-traveling.jpg', - description: - "The Travelin' Man Guitar is an acoustic masterpiece adorned with vintage postcards from around the world. Each postcard tells a story of adventure and wanderlust, making this guitar a unique piece of art. Its rich, resonant tones and comfortable playability make it perfect for musicians who love to travel and perform.", - shortDescription: - 'An acoustic guitar with vintage postcards, rich tones, and comfortable playability.', - price: 499, - }, - { - id: 7, - name: 'Flowerly Love Guitar', - image: '/example-guitar-flowers.jpg', - description: - "The Flowerly Love Guitar is an acoustic masterpiece adorned with intricate floral designs on its body. Each flower is hand-painted, adding a touch of nature's beauty to the instrument. 
Its warm, resonant tones make it perfect for both intimate performances and larger gatherings.", - shortDescription: - 'An acoustic guitar with hand-painted floral designs and warm, resonant tones.', - price: 599, - }, -] - -export default guitars diff --git a/packages/typescript/ai-openai/live-tests/test-tool-arguments.ts b/packages/typescript/ai-openai/live-tests/test-tool-arguments.ts deleted file mode 100644 index eba25f042..000000000 --- a/packages/typescript/ai-openai/live-tests/test-tool-arguments.ts +++ /dev/null @@ -1,159 +0,0 @@ -import { chat, tool } from "@tanstack/ai"; -import { createOpenAI } from "../dist/openai-adapter.js"; -import guitars from "./example-guitars.js"; -import * as path from "path"; -import * as fs from "fs"; -import { fileURLToPath } from "url"; - -// Load .env.local file -const __filename = fileURLToPath(import.meta.url); -const __dirname = path.dirname(__filename); -const envPath = path.join(__dirname, ".env.local"); - -let apiKey = process.env.OPENAI_API_KEY; - -if (!apiKey && fs.existsSync(envPath)) { - const envContent = fs.readFileSync(envPath, "utf-8"); - const match = envContent.match(/^OPENAI_API_KEY=(.+)$/m); - if (match) { - apiKey = match[1]?.trim(); - } -} - -if (!apiKey) { - throw new Error("OPENAI_API_KEY is required in .env.local or environment"); -} - -const getGuitars = tool({ - type: "function", - function: { - name: "getGuitars", - description: "Get all products from the database", - parameters: { - type: "object", - properties: {}, - required: [], - }, - }, - execute: async () => { - return JSON.stringify(guitars); - }, -}); - -const recommendGuitar = tool({ - type: "function", - function: { - name: "recommendGuitar", - description: "Use this tool to recommend a guitar to the user", - parameters: { - type: "object", - properties: { - id: { - type: "string", - description: "The id of the guitar to recommend", - }, - }, - required: ["id"], - }, - }, - execute: async ({ id }) => { - return JSON.stringify({ id }); - 
}, -}); - -async function testToolArguments() { - console.log("🧪 Testing tool argument parsing...\n"); - - const openai = createOpenAI(apiKey); - const tools = [getGuitars, recommendGuitar]; - - const messages = [ - { - role: "user" as const, - content: - "please search your product catalog and recommend a good acoustic guitar", - }, - ]; - - console.log("šŸ“¤ Sending request to OpenAI...\n"); - - const toolCalls: Array<{ - id: string; - name: string; - arguments: string; - }> = []; - - try { - for await (const chunk of chat({ - adapter: openai, - model: "gpt-4o", - messages, - tools, - })) { - if (chunk.type === "tool_call") { - const toolCall = chunk.toolCall; - const args = toolCall.function.arguments; - toolCalls.push({ - id: toolCall.id, - name: toolCall.function.name, - arguments: args, - }); - console.log(`šŸ”§ Tool call: ${toolCall.function.name}`); - console.log(` ID: ${toolCall.id}`); - console.log(` Arguments: ${args}`); - console.log(` Arguments length: ${args?.length || 0}`); - console.log(); - } - } - - console.log("\nšŸ“Š Results:\n"); - console.log(`Total tool calls: ${toolCalls.length}\n`); - - // Find the recommendGuitar call - const recommendCall = toolCalls.find((tc) => tc.name === "recommendGuitar"); - - if (!recommendCall) { - console.error("āŒ ERROR: recommendGuitar tool was not called"); - process.exit(1); - } - - console.log("āœ… recommendGuitar was called"); - console.log(` Arguments: ${recommendCall.arguments}`); - - // Parse and verify arguments - let parsedArgs: any; - try { - parsedArgs = JSON.parse(recommendCall.arguments); - } catch (e) { - console.error(`āŒ ERROR: Failed to parse arguments as JSON: ${e}`); - console.error(` Raw arguments: ${recommendCall.arguments}`); - process.exit(1); - } - - console.log(` Parsed: ${JSON.stringify(parsedArgs, null, 2)}`); - - // Verify the arguments contain an id - if (!parsedArgs.id) { - console.error("āŒ ERROR: Arguments do not contain 'id' field"); - console.error(` Parsed args: 
${JSON.stringify(parsedArgs)}`); - process.exit(1); - } - - if (parsedArgs.id === "" || parsedArgs.id === undefined) { - console.error("āŒ ERROR: 'id' field is empty or undefined"); - console.error(` Parsed args: ${JSON.stringify(parsedArgs)}`); - process.exit(1); - } - - console.log( - `\nāœ… SUCCESS: recommendGuitar received correct arguments with id: "${parsedArgs.id}"` - ); - console.log("\nšŸŽ‰ Test passed!"); - } catch (error: any) { - console.error("\nāŒ ERROR:", error.message); - console.error(error.stack); - process.exit(1); - } -} - -testToolArguments(); diff --git a/packages/typescript/ai-openai/package.json b/packages/typescript/ai-openai/package.json index 312256651..164c34577 100644 --- a/packages/typescript/ai-openai/package.json +++ b/packages/typescript/ai-openai/package.json @@ -10,12 +10,12 @@ "directory": "packages/typescript/ai-openai" }, "type": "module", - "module": "./dist/index.js", - "types": "./dist/index.d.ts", + "module": "./dist/esm/index.js", + "types": "./dist/esm/index.d.ts", "exports": { ".": { - "types": "./dist/index.d.ts", - "import": "./dist/index.js" + "types": "./dist/esm/index.d.ts", + "import": "./dist/esm/index.js" } }, "files": [ @@ -23,14 +23,14 @@ "src" ], "scripts": { - "build": "tsdown", - "dev": "tsdown --watch", - "test": "vitest run", - "test:watch": "vitest", - "test:coverage": "vitest run --coverage", - "clean": "rm -rf dist node_modules", - "typecheck": "tsc --noEmit", - "lint": "tsc --noEmit" + "build": "vite build", + "clean": "premove ./build ./dist", + "lint:fix": "eslint ./src --fix", + "test:build": "publint --strict", + "test:eslint": "eslint ./src", + "test:lib": "vitest run", + "test:lib:dev": "pnpm test:lib --watch", + "test:types": "tsc" }, "keywords": [ "ai", @@ -44,15 +44,11 @@ "openai": "^6.9.1" }, "devDependencies": { - "@types/node": "^22.10.2", - "@vitest/coverage-v8": "4.0.13", - "tsdown": "^0.15.9", - "tsx": "^4.20.6", - "typescript": "^5.7.2", - "vitest": "^4.0.13", - "zod": "^4.1.12" + 
"@vitest/coverage-v8": "4.0.14", + "vite": "^7.2.4", + "zod": "^4.1.13" }, "peerDependencies": { "@tanstack/ai": "workspace:*" } -} \ No newline at end of file +} diff --git a/packages/typescript/ai-openai/src/audio/audio-provider-options.ts b/packages/typescript/ai-openai/src/audio/audio-provider-options.ts index ae316a334..6021df5bb 100644 --- a/packages/typescript/ai-openai/src/audio/audio-provider-options.ts +++ b/packages/typescript/ai-openai/src/audio/audio-provider-options.ts @@ -1,66 +1,75 @@ -import { OpenAIAudioModel } from "../openai-adapter"; - -export interface AudioProviderOptions { - /** - * The text to generate audio for. The maximum length is 4096 characters. - */ - input: string; - /** - * The audio model to use for generation. - */ - model: OpenAIAudioModel; - /** - * The voice to use when generating audio. - * Supported voices are alloy, ash, ballad, coral, echo, fable, onyx, nova, sage, shimmer, and verse. - * Previews of the voices are available on the following url: - * https://platform.openai.com/docs/guides/text-to-speech#voice-options - */ - voice?: "alloy" | "ash" | "ballad" | "coral" | "echo" | "fable" | "onyx" | "nova" | "sage" | "shimmer" | "verse"; - /** - * Control the voice of your generated audio with additional instructions. Does not work with tts-1 or tts-1-hd. - */ - instructions?: string; - /** - * The format of the generated audio. - * @default "mp3" - */ - response_format?: "mp3" | "opus" | "aac" | "flac" | "wav" | "pcm"; - /** - * The speed of the generated audio. - * Range of values between 0.25 to 4.0, where 1.0 is the default speed. - * @default 1.0 - */ - speed?: number; - /** - * The format to stream the audio in. Supported formats are sse and audio. sse is not supported for tts-1 or tts-1-hd. 
- */ - stream_format?: "sse" | "audio" -} - -export const validateStreamFormat = (options: AudioProviderOptions) => { - const unsupportedModels = ["tts-1", "tts-1-hd"]; - if (options.stream_format && unsupportedModels.includes(options.model)) { - throw new Error(`The model ${options.model} does not support streaming.`); - } -}; - -export const validateSpeed = (options: AudioProviderOptions) => { - if (options.speed) { - if (options.speed < 0.25 || options.speed > 4.0) { - throw new Error("Speed must be between 0.25 and 4.0."); - } - } -}; - -export const validateInstructions = (options: AudioProviderOptions) => { - const unsupportedModels = ["tts-1", "tts-1-hd"]; - if (options.instructions && unsupportedModels.includes(options.model)) { - throw new Error(`The model ${options.model} does not support instructions.`); - } -}; - -export const validateAudioInput = (options: AudioProviderOptions) => { - if (options.input.length > 4096) { - throw new Error("Input text exceeds maximum length of 4096 characters."); - } -}; \ No newline at end of file +export interface AudioProviderOptions { + /** + * The text to generate audio for. The maximum length is 4096 characters. + */ + input: string + /** + * The audio model to use for generation. + */ + model: string + /** + * The voice to use when generating audio. + * Supported voices are alloy, ash, ballad, coral, echo, fable, onyx, nova, sage, shimmer, and verse. + * Previews of the voices are available on the following url: + * https://platform.openai.com/docs/guides/text-to-speech#voice-options + */ + voice?: + | 'alloy' + | 'ash' + | 'ballad' + | 'coral' + | 'echo' + | 'fable' + | 'onyx' + | 'nova' + | 'sage' + | 'shimmer' + | 'verse' + /** + * Control the voice of your generated audio with additional instructions. Does not work with tts-1 or tts-1-hd. + */ + instructions?: string + /** + * The format of the generated audio. 
+ * @default "mp3" + */ + response_format?: 'mp3' | 'opus' | 'aac' | 'flac' | 'wav' | 'pcm' + /** + * The speed of the generated audio. + * Values range from 0.25 to 4.0, where 1.0 is the default speed. + * @default 1.0 + */ + speed?: number + /** + * The format to stream the audio in. Supported formats are sse and audio. sse is not supported for tts-1 or tts-1-hd. + */ + stream_format?: 'sse' | 'audio' +} + +export const validateStreamFormat = (options: AudioProviderOptions) => { + const unsupportedModels = ['tts-1', 'tts-1-hd'] + if (options.stream_format && unsupportedModels.includes(options.model)) { + throw new Error(`The model ${options.model} does not support streaming.`) + } +} + +export const validateSpeed = (options: AudioProviderOptions) => { + // Check explicitly for undefined so that an out-of-range speed of 0 is not skipped + if (options.speed !== undefined) { + if (options.speed < 0.25 || options.speed > 4.0) { + throw new Error('Speed must be between 0.25 and 4.0.') + } + } +} + +export const validateInstructions = (options: AudioProviderOptions) => { + const unsupportedModels = ['tts-1', 'tts-1-hd'] + if (options.instructions && unsupportedModels.includes(options.model)) { + throw new Error(`The model ${options.model} does not support instructions.`) + } +} + +export const validateAudioInput = (options: AudioProviderOptions) => { + if (options.input.length > 4096) { + throw new Error('Input text exceeds maximum length of 4096 characters.') + } +} diff --git a/packages/typescript/ai-openai/src/audio/transcribe-provider-options.ts b/packages/typescript/ai-openai/src/audio/transcribe-provider-options.ts index 956d7bb5d..063e719ff 100644 --- a/packages/typescript/ai-openai/src/audio/transcribe-provider-options.ts +++ b/packages/typescript/ai-openai/src/audio/transcribe-provider-options.ts @@ -1,119 +1,128 @@ -import { OpenAITranscriptionModel } from "../openai-adapter"; - -export interface TranscribeProviderOptions { - /** - * The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, 
wav, or webm. - * https://platform.openai.com/docs/api-reference/audio/createTranscription#audio_createtranscription-file - */ - file: File; - /** - * The model to use for transcription. - * https://platform.openai.com/docs/api-reference/audio/createTranscription#audio_createtranscription-model - */ - model: OpenAITranscriptionModel; - - chunking_strategy: "auto" | { - type: "server_vad" - /** - * Amount of audio to include before the VAD detected speech (in milliseconds). - * @default 300 - */ - prefix_padding_ms?: number - /** - * Duration of silence to detect speech stop (in milliseconds). With shorter values the model will respond more quickly, but may jump in on short pauses from the user. - * @default 200 - */ - silence_duration_ms: number; - /** - * Sensitivity threshold (0.0 to 1.0) for voice activity detection. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments. - * @default 0.5 - */ - threshold?: number - } - /** - * Additional information to include in the transcription response. logprobs will return the log probabilities of the tokens in the response to understand the model's confidence in the transcription. logprobs only works with response_format set to json and only with the models gpt-4o-transcribe and gpt-4o-mini-transcribe. This field is not supported when using gpt-4o-transcribe-diarize. - */ - include?: string[] - /** - * Optional list of speaker names that correspond to the audio samples provided in known_speaker_references[]. Each entry should be a short identifier (for example customer or agent). Up to 4 speakers are supported. - */ - known_speaker_names: string[]; - /** - * Optional list of audio samples (as data URLs) that contain known speaker references matching known_speaker_names[]. Each sample must be between 2 and 10 seconds, and can use any of the same input audio formats supported by file. 
- */ - known_speaker_references?: string[]; - /** - * The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. - */ - language?: string; - /** - * An optional prompt to guide the transcription model's style or to help with uncommon words or phrases. - */ - prompt?: string; - /** - * The format of the output, in one of these options: json, text, srt, verbose_json, vtt, or diarized_json. For gpt-4o-transcribe and gpt-4o-mini-transcribe, the only supported format is json. For gpt-4o-transcribe-diarize, the supported formats are json, text, and diarized_json, with diarized_json required to receive speaker annotations. - */ - response_format?: "json" | "text" | "srt" | "verbose_json" | "vtt" | "diarized_json" - - /** - * If set to true, the model response data will be streamed to the client as it is generated using server-sent events - * Note: Streaming is not supported for the whisper-1 model and will be ignored. - */ - stream?: boolean; - /** - * The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit. - */ - temperature?: number; - /** - * The timestamp granularities to populate for this transcription. response_format must be set verbose_json to use timestamp granularities. Either or both of these options are supported: word, or segment. Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency. This option is not available for gpt-4o-transcribe-diarize. 
- */ - timestamp_granularities?: Array<"word" | "segment">; -} - -export const validateTemperature = (options: TranscribeProviderOptions) => { - if (options.temperature) { - if (options.temperature < 0 || options.temperature > 1) { - throw new Error("Temperature must be between 0 and 1."); - } - } -} - -export const validateStream = (options: TranscribeProviderOptions) => { - const unsupportedModels = ["whisper-1"]; - if (options.stream) { - if (unsupportedModels.includes(options.model)) { - throw new Error(`The model ${options.model} does not support streaming.`); - } - } -} - -export const validatePrompt = (options: TranscribeProviderOptions) => { - const unsupportedModels = ["gpt-4o-transcribe-diarize"]; - if (options.prompt) { - if (unsupportedModels.includes(options.model)) { - throw new Error(`The model ${options.model} does not support prompts.`); - } - } -} - -export const validateKnownSpeakerNames = (options: TranscribeProviderOptions) => { - if (options.known_speaker_names) { - if (options.known_speaker_names.length > 4) { - throw new Error("A maximum of 4 known speaker names are supported."); - } - } -}; - -export const validateInclude = (options: TranscribeProviderOptions) => { - const unsupportedModels = ["gpt-4o-transcribe-diarize"]; - if (options.include) { - if (unsupportedModels.includes(options.model)) { - throw new Error(`The model ${options.model} does not support the include field.`); - } - } - - if (options.include && options.response_format !== "json") { - throw new Error("The include field is only supported when response_format is set to json."); - } - -}; \ No newline at end of file +export interface TranscribeProviderOptions { + /** + * The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm. + * https://platform.openai.com/docs/api-reference/audio/createTranscription#audio_createtranscription-file + */ + file: File + /** + * The model to use for transcription. 
+ * https://platform.openai.com/docs/api-reference/audio/createTranscription#audio_createtranscription-model + */ + model: string + + chunking_strategy: + | 'auto' + | { + type: 'server_vad' + /** + * Amount of audio to include before the VAD detected speech (in milliseconds). + * @default 300 + */ + prefix_padding_ms?: number + /** + * Duration of silence to detect speech stop (in milliseconds). With shorter values the model will respond more quickly, but may jump in on short pauses from the user. + * @default 200 + */ + silence_duration_ms: number + /** + * Sensitivity threshold (0.0 to 1.0) for voice activity detection. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments. + * @default 0.5 + */ + threshold?: number + } + /** + * Additional information to include in the transcription response. logprobs will return the log probabilities of the tokens in the response to understand the model's confidence in the transcription. logprobs only works with response_format set to json and only with the models gpt-4o-transcribe and gpt-4o-mini-transcribe. This field is not supported when using gpt-4o-transcribe-diarize. + */ + include?: Array<string> + /** + * Optional list of speaker names that correspond to the audio samples provided in known_speaker_references[]. Each entry should be a short identifier (for example customer or agent). Up to 4 speakers are supported. + */ + known_speaker_names: Array<string> + /** + * Optional list of audio samples (as data URLs) that contain known speaker references matching known_speaker_names[]. Each sample must be between 2 and 10 seconds, and can use any of the same input audio formats supported by file. + */ + known_speaker_references?: Array<string> + /** + * The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency. 
+ */ + language?: string + /** + * An optional prompt to guide the transcription model's style or to help with uncommon words or phrases. + */ + prompt?: string + /** + * The format of the output, in one of these options: json, text, srt, verbose_json, vtt, or diarized_json. For gpt-4o-transcribe and gpt-4o-mini-transcribe, the only supported format is json. For gpt-4o-transcribe-diarize, the supported formats are json, text, and diarized_json, with diarized_json required to receive speaker annotations. + */ + response_format?: + | 'json' + | 'text' + | 'srt' + | 'verbose_json' + | 'vtt' + | 'diarized_json' + + /** + * If set to true, the model response data will be streamed to the client as it is generated, using server-sent events. + * Note: Streaming is not supported for the whisper-1 model and will be ignored. + */ + stream?: boolean + /** + * The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit. + */ + temperature?: number + /** + * The timestamp granularities to populate for this transcription. response_format must be set to verbose_json to use timestamp granularities. Either or both of these options are supported: word or segment. Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency. This option is not available for gpt-4o-transcribe-diarize. 
+ */ + timestamp_granularities?: Array<'word' | 'segment'> +} + +export const validateTemperature = (options: TranscribeProviderOptions) => { + if (options.temperature) { + if (options.temperature < 0 || options.temperature > 1) { + throw new Error('Temperature must be between 0 and 1.') + } + } +} + +export const validateStream = (options: TranscribeProviderOptions) => { + const unsupportedModels = ['whisper-1'] + if (options.stream) { + if (unsupportedModels.includes(options.model)) { + throw new Error(`The model ${options.model} does not support streaming.`) + } + } +} + +export const validatePrompt = (options: TranscribeProviderOptions) => { + const unsupportedModels = ['gpt-4o-transcribe-diarize'] + if (options.prompt) { + if (unsupportedModels.includes(options.model)) { + throw new Error(`The model ${options.model} does not support prompts.`) + } + } +} + +export const validateKnownSpeakerNames = ( + options: TranscribeProviderOptions, +) => { + if (options.known_speaker_names.length > 4) { + throw new Error('A maximum of 4 known speaker names are supported.') + } +} + +export const validateInclude = (options: TranscribeProviderOptions) => { + const unsupportedModels = ['gpt-4o-transcribe-diarize'] + if (options.include) { + if (unsupportedModels.includes(options.model)) { + throw new Error( + `The model ${options.model} does not support the include field.`, + ) + } + } + + if (options.include && options.response_format !== 'json') { + throw new Error( + 'The include field is only supported when response_format is set to json.', + ) + } +} diff --git a/packages/typescript/ai-openai/src/image/image-provider-options.ts b/packages/typescript/ai-openai/src/image/image-provider-options.ts index 3e1d2c2f2..49e9fcc12 100644 --- a/packages/typescript/ai-openai/src/image/image-provider-options.ts +++ b/packages/typescript/ai-openai/src/image/image-provider-options.ts @@ -1,44 +1,48 @@ -import { OpenAIImageModel } from "../openai-adapter"; - -interface 
ImageProviderOptions { - /** - * A text prompt describing the desired image. The maximum length is 32000 characters for gpt-image-1, 1000 characters for dall-e-2 and 4000 characters for dall-e-3. - */ - prompt: string; - /** - * Allows to set transparency for the background of the generated image(s). This parameter is only supported for gpt-image-1. Must be one of transparent, opaque or auto (default value). When auto is used, the model will automatically determine the best background for the image. - -If transparent, the output format needs to support transparency, so it should be set to either png (default value) or webp. - */ - background?: "transparent" | "opaque" | "auto" | null; - /** - * The image model to use for generation. - */ - model: OpenAIImageModel; -} - -export const validateBackground = (options: ImageProviderOptions) => { - if (options.background) { - const supportedModels = ["gpt-image-1"]; - if (!supportedModels.includes(options.model)) { - throw new Error(`The model ${options.model} does not support background option.`); - } - } -} - -export const validatePrompt = (options: ImageProviderOptions) => { - - if (options.prompt.length === 0) { - throw new Error("Prompt cannot be empty."); - } - if (options.model === "gpt-image-1" && options.prompt.length > 32000) { - throw new Error("For gpt-image-1, prompt length must be less than or equal to 32000 characters."); - } - if (options.model === "dall-e-2" && options.prompt.length > 1000) { - throw new Error("For dall-e-2, prompt length must be less than or equal to 1000 characters."); - } - if (options.model === "dall-e-3" && options.prompt.length > 4000) { - throw new Error("For dall-e-3, prompt length must be less than or equal to 4000 characters."); - } - -} \ No newline at end of file +interface ImageProviderOptions { + /** + * A text prompt describing the desired image. The maximum length is 32000 characters for gpt-image-1, 1000 characters for dall-e-2 and 4000 characters for dall-e-3. 
+ */ + prompt: string + /** + * Allows setting transparency for the background of the generated image(s). This parameter is only supported for gpt-image-1. Must be one of transparent, opaque or auto (default value). When auto is used, the model will automatically determine the best background for the image. + +If transparent, the output format needs to support transparency, so it should be set to either png (default value) or webp. + */ + background?: 'transparent' | 'opaque' | 'auto' | null + /** + * The image model to use for generation. + */ + model: string +} + +export const validateBackground = (options: ImageProviderOptions) => { + if (options.background) { + const supportedModels = ['gpt-image-1'] + if (!supportedModels.includes(options.model)) { + throw new Error( + `The model ${options.model} does not support background option.`, + ) + } + } +} + +export const validatePrompt = (options: ImageProviderOptions) => { + if (options.prompt.length === 0) { + throw new Error('Prompt cannot be empty.') + } + if (options.model === 'gpt-image-1' && options.prompt.length > 32000) { + throw new Error( + 'For gpt-image-1, prompt length must be less than or equal to 32000 characters.', + ) + } + if (options.model === 'dall-e-2' && options.prompt.length > 1000) { + throw new Error( + 'For dall-e-2, prompt length must be less than or equal to 1000 characters.', + ) + } + if (options.model === 'dall-e-3' && options.prompt.length > 4000) { + throw new Error( + 'For dall-e-3, prompt length must be less than or equal to 4000 characters.', + ) + } +} diff --git a/packages/typescript/ai-openai/src/index.ts b/packages/typescript/ai-openai/src/index.ts index 439fd7e08..c62bade16 100644 --- a/packages/typescript/ai-openai/src/index.ts +++ b/packages/typescript/ai-openai/src/index.ts @@ -1,3 +1,7 @@ -export { OpenAI, createOpenAI, openai, type OpenAIConfig } from "./openai-adapter"; -export type { OpenAIChatModelProviderOptionsByName } from "./model-meta"; - +export { + OpenAI, + 
createOpenAI, + openai, + type OpenAIConfig, +} from './openai-adapter' +export type { OpenAIChatModelProviderOptionsByName } from './model-meta' diff --git a/packages/typescript/ai-openai/src/model-meta.ts b/packages/typescript/ai-openai/src/model-meta.ts index d30a0f87b..c4a9a3732 100644 --- a/packages/typescript/ai-openai/src/model-meta.ts +++ b/packages/typescript/ai-openai/src/model-meta.ts @@ -1,1568 +1,1918 @@ -import type { - OpenAIBaseOptions, - OpenAIReasoningOptions, - OpenAIStructuredOutputOptions, - OpenAIToolsOptions, - OpenAIStreamingOptions, - OpenAIMetadataOptions, -} from "./text/text-provider-options"; - -interface ModelMeta { - name: string; - supports: { - input: ("text" | "image" | "audio" | "video")[]; - output: ("text" | "image" | "audio" | "video")[]; - endpoints: ("chat" | "chat-completions" | "assistants" | "speech_generation" | "image-generation" | "fine-tuning" | "batch" | "image-edit" | "moderation" | "translation" | "realtime" | "embedding" | "audio" | "video" | "transcription")[]; - features: ("streaming" | "function_calling" | "structured_outputs" | "predicted_outcomes" | "distillation" | "fine_tuning")[]; - tools?: ("web_search" | "file_search" | "image_generation" | "code_interpreter" | "mcp" | "computer_use")[]; - }; - context_window?: number; - max_output_tokens?: number; - knowledge_cutoff?: string; - pricing: { - input: { - normal: number; - cached?: number; - }; - output: { - normal: number; - }; - }; - /** - * Type-level description of which provider options this model supports. 
- */ - providerOptions?: TProviderOptions; -} - -const GPT5_1 = { - name: "gpt-5.1", - context_window: 400_000, - max_output_tokens: 128_000, - knowledge_cutoff: "2024-09-30", - supports: { - input: ["text", "image"], - output: ["text", "image"], - endpoints: ["chat", "chat-completions"], - features: ["streaming", "function_calling", "structured_outputs", "distillation"], - tools: ["web_search", "file_search", "image_generation", "code_interpreter", "mcp"] - }, - pricing: { - input: { - normal: 1.25, - cached: 0.125 - }, - output: { - normal: 10 - } - } -} as const satisfies ModelMeta< - OpenAIBaseOptions & - OpenAIReasoningOptions & - OpenAIStructuredOutputOptions & - OpenAIToolsOptions & - OpenAIStreamingOptions & - OpenAIMetadataOptions -> - -const GPT5_1_CODEX = { - name: "gpt-5.1-codex", - context_window: 400_000, - max_output_tokens: 128_000, - knowledge_cutoff: "2024-09-30", - supports: { - input: ["text", "image"], - output: ["text", "image"], - endpoints: ["chat",], - features: ["streaming", "function_calling", "structured_outputs",], - - }, - pricing: { - input: { - normal: 1.25, - cached: 0.125 - }, - output: { - normal: 10 - } - } -} as const satisfies ModelMeta< - OpenAIBaseOptions & - OpenAIReasoningOptions & - OpenAIStructuredOutputOptions & - OpenAIToolsOptions & - OpenAIStreamingOptions & - OpenAIMetadataOptions -> - -const GPT5 = { - name: "gpt-5", - context_window: 400_000, - max_output_tokens: 128_000, - knowledge_cutoff: "2024-09-30", - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "chat-completions", "batch"], - features: ["streaming", "function_calling", "structured_outputs", "distillation"], - tools: ["web_search", "file_search", "image_generation", "code_interpreter", "mcp"] - }, - pricing: { - input: { - normal: 1.25, - cached: 0.125 - }, - output: { - normal: 10 - } - } -} as const satisfies ModelMeta< - OpenAIBaseOptions & - OpenAIReasoningOptions & - OpenAIStructuredOutputOptions & - 
OpenAIToolsOptions & - OpenAIStreamingOptions & - OpenAIMetadataOptions -> - -const GPT5_MINI = { - name: "gpt-5-mini", - context_window: 400_000, - max_output_tokens: 128_000, - knowledge_cutoff: "2024-05-31", - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "chat-completions", "batch"], - features: ["streaming", "structured_outputs", "function_calling"], - tools: ["web_search", "file_search", "mcp", "code_interpreter"] - }, - pricing: { - input: { - normal: 0.25, - cached: 0.025 - }, - output: { - normal: 2 - } - } -} as const satisfies ModelMeta< - OpenAIBaseOptions & - OpenAIStructuredOutputOptions & - OpenAIToolsOptions & - OpenAIStreamingOptions & - OpenAIMetadataOptions -> - -const GPT5_NANO = { - name: "gpt-5-nano", - context_window: 400_000, - max_output_tokens: 128_000, - knowledge_cutoff: "2024-05-31", - pricing: { - input: { - normal: 0.05, - cached: 0.005 - }, - output: { - normal: 0.4 - } - }, - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "chat-completions", "batch"], - features: [ - "streaming", - "structured_outputs", - "function_calling" - ], - tools: ["web_search", "file_search", "mcp", "image_generation", "code_interpreter"] - } -} as const satisfies ModelMeta< - OpenAIBaseOptions & - OpenAIStructuredOutputOptions & - OpenAIToolsOptions & - OpenAIStreamingOptions & - OpenAIMetadataOptions -> - -const GPT5_PRO = { - name: "gpt-5-pro", - context_window: 400_000, - max_output_tokens: 272_000, - knowledge_cutoff: "2024-09-30", - pricing: { - input: { - normal: 15, - - }, output: { - normal: 120 - } - }, - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "batch"], - features: [ - "streaming", - "structured_outputs", - "function_calling",], - tools: ["web_search", "file_search", "image_generation", "mcp"] - } -} as const satisfies ModelMeta< - OpenAIBaseOptions & - OpenAIReasoningOptions & - OpenAIStructuredOutputOptions & - OpenAIToolsOptions & - 
OpenAIStreamingOptions & - OpenAIMetadataOptions -> - -const GPT5_CODEX = { - name: "gpt-5-codex", - context_window: 400_000, - max_output_tokens: 128_000, - knowledge_cutoff: "2024-09-30", - pricing: { - input: { - normal: 1.25, - cached: 0.125 - }, - output: { - normal: 10 - } - }, - supports: { - input: ["text", "image"], - output: ["text", "image"], - endpoints: ["chat",], - features: [ - "streaming", - "structured_outputs", - "function_calling"], - - } -} as const satisfies ModelMeta< - OpenAIBaseOptions & - OpenAIReasoningOptions & - OpenAIStructuredOutputOptions & - OpenAIToolsOptions & - OpenAIStreamingOptions & - OpenAIMetadataOptions -> - - -const SORA2 = { - name: "sora-2", - pricing: { - input: { - normal: 0 - }, - output: { - // per second of video - normal: 0.1 - } - }, - supports: { - input: ["text", "image"], - output: ["video", "audio"], - endpoints: ["video"], - features: [], - - } -} as const satisfies ModelMeta - -const SORA2_PRO = { - name: "sora-2-pro", - pricing: { - input: { - normal: 0 - }, - output: { - // per second of video - normal: 0.5 - } - }, - supports: { - input: ["text", "image"], - output: ["video", "audio"], - endpoints: ["video"], - features: [], - - } -} as const satisfies ModelMeta - -const GPT_IMAGE_1 = { - name: "gpt-image-1", - // todo fix for images - pricing: { - input: { - normal: 5, - cached: 1.25 - }, - output: { - normal: 0.1 - } - }, - supports: { - input: ["text", "image"], - output: ["image"], - endpoints: ["image-generation", "image-edit"], - - features: [], - } -} as const satisfies ModelMeta - -const GPT_IMAGE_1_MINI = { - name: "gpt-image-1-mini", - // todo fix for images - pricing: { - input: { - normal: 2, - cached: 0.2 - }, - output: { - normal: 0.03 - } - }, - supports: { - input: ["text", "image"], - output: ["image"], - endpoints: ["image-generation", "image-edit"], - - features: [], - } -} as const satisfies ModelMeta - -const O3_DEEP_RESEARCH = { - name: "o3-deep-research", - context_window: 200_000, - 
max_output_tokens: 100_000, - knowledge_cutoff: "2024-01-01", - pricing: { - input: { - normal: 10, - cached: 2.5 - }, - output: { - normal: 40 - } - }, - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "batch"], - features: ["streaming"], - - } -} as const satisfies ModelMeta - -const O4_MINI_DEEP_RESEARCH = { - name: "o4-mini-deep-research", - context_window: 200_000, - max_output_tokens: 100_000, - knowledge_cutoff: "2024-01-01", - pricing: { - input: { - normal: 2, - cached: 0.5 - }, - output: { - normal: 8 - } - }, - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "batch"], - features: ["streaming"], - - } -} as const satisfies ModelMeta - -const O3_PRO = { - name: "o3-pro", - context_window: 200_000, - max_output_tokens: 100_000, - knowledge_cutoff: "2024-01-01", - pricing: { - input: { - normal: 20 - }, - output: { - normal: 80 - } - }, - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "batch"], - features: ["function_calling", "structured_outputs"], - - } -} as const satisfies ModelMeta - -const GPT_AUDIO = { - name: "gpt-audio", - context_window: 128_000, - max_output_tokens: 16_384, - knowledge_cutoff: "2023-10-01", - pricing: { - // todo add audio tokens to input output - input: { - normal: 2.5, - - }, - output: { - normal: 10 - } - }, - supports: { - input: ["text", "audio"], - output: ["text", "audio"], - endpoints: ["chat-completions"], - features: ["function_calling"], - - } -} as const satisfies ModelMeta - - -const GPT_REALTIME = { - name: "gpt-realtime", - context_window: 32_000, - max_output_tokens: 4_096, - knowledge_cutoff: "2023-10-01", - pricing: { - // todo add audio tokens to input output - input: { - normal: 4, - cached: 0.5, - }, - output: { - normal: 16 - } - }, - supports: { - input: ["text", "audio", "image"], - output: ["text", "audio"], - endpoints: ["realtime"], - features: ["function_calling"], - - } -} as const satisfies ModelMeta 
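Reviewer note on the `pricing` blocks above: for the token-based models these values appear to follow OpenAI's convention of USD per 1M tokens, with `cached` as the discounted rate for cached input tokens (the video models instead price per second, as their comments say). Assuming that convention, a minimal illustrative cost estimator could look like the sketch below; `estimateCostUSD` is a hypothetical helper for this note, not part of this diff.

```typescript
// Hypothetical helper (not part of this PR): estimate request cost in USD
// from a model's pricing metadata, assuming prices are USD per 1M tokens.
interface Pricing {
  input: { normal: number; cached?: number }
  output: { normal: number }
}

function estimateCostUSD(
  pricing: Pricing,
  usage: {
    inputTokens: number
    cachedInputTokens?: number
    outputTokens: number
  },
): number {
  const cached = usage.cachedInputTokens ?? 0
  // Fall back to the normal rate when no cached price is published.
  const cachedRate = pricing.input.cached ?? pricing.input.normal
  const uncached = usage.inputTokens - cached
  return (
    (uncached * pricing.input.normal +
      cached * cachedRate +
      usage.outputTokens * pricing.output.normal) /
    1_000_000
  )
}

// Using the gpt-realtime numbers above (input 4, cached 0.5, output 16):
const cost = estimateCostUSD(
  { input: { normal: 4, cached: 0.5 }, output: { normal: 16 } },
  { inputTokens: 10_000, cachedInputTokens: 4_000, outputTokens: 2_000 },
)
// (6_000 * 4 + 4_000 * 0.5 + 2_000 * 16) / 1_000_000 = 0.058
```

The cached-token split matters here: 4,000 of the 10,000 input tokens are billed at the 0.5 cached rate rather than 4, cutting the input portion of the bill roughly in half.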
- -const GPT_REALTIME_MINI = { - name: "gpt-realtime-mini", - context_window: 32_000, - max_output_tokens: 4_096, - knowledge_cutoff: "2023-10-01", - pricing: { - // todo add audio and image tokens to input output - input: { - normal: 0.6, - cached: 0.06, - }, - output: { - normal: 2.4 - } - }, - supports: { - input: ["text", "audio", "image"], - output: ["text", "audio"], - endpoints: ["realtime"], - features: ["function_calling"], - - } -} as const satisfies ModelMeta - - -const GPT_AUDIO_MINI = { - name: "gpt-audio-mini", - context_window: 128_000, - max_output_tokens: 16_384, - knowledge_cutoff: "2023-10-01", - pricing: { - // todo add audio tokens to input output - input: { - normal: 0.6, - - }, - output: { - normal: 2.4 - } - }, - supports: { - input: ["text", "audio"], - output: ["text", "audio"], - endpoints: ["chat-completions"], - features: ["function_calling"], - - } -} as const satisfies ModelMeta - -const O3 = { - name: "o3", - context_window: 200_000, - max_output_tokens: 100_000, - knowledge_cutoff: "2024-01-01", - pricing: { - input: { - normal: 2, - cached: 0.5 - }, - output: { - normal: 8 - } - }, - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "batch", "chat-completions"], - features: ["function_calling", "structured_outputs", "streaming"], - - } -} as const satisfies ModelMeta - -const O4_MINI = { - name: "o4-mini", - context_window: 200_000, - max_output_tokens: 100_000, - knowledge_cutoff: "2024-01-01", - pricing: { - input: { - normal: 1.1, - cached: 0.275 - }, - output: { - normal: 4.4 - } - }, - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "batch", "chat-completions", "fine-tuning"], - features: ["function_calling", "structured_outputs", "streaming", "fine_tuning"], - - } -} as const satisfies ModelMeta - -const GPT4_1 = { - name: "gpt-4.1", - context_window: 1_047_576, - max_output_tokens: 32_768, - knowledge_cutoff: "2024-01-01", - pricing: { - input: { - normal: 2, 
- cached: 0.5 - }, - output: { - normal: 8 - } - }, - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "chat-completions", "assistants", "fine-tuning", "batch"], - features: ["streaming", "function_calling", "structured_outputs", "distillation", "fine_tuning"], - tools: [ - "web_search", - "file_search", - "image_generation", - "code_interpreter", - "mcp" - ] - } -} as const satisfies ModelMeta - -const GPT4_1_MINI = { - name: "gpt-4.1-mini", - context_window: 1_047_576, - max_output_tokens: 32_768, - knowledge_cutoff: "2024-01-01", - pricing: { - input: { - normal: 0.4, - cached: 0.1 - }, - output: { - normal: 1.6 - } - }, - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "chat-completions", "assistants", "fine-tuning", "batch"], - features: ["streaming", "function_calling", "structured_outputs", "fine_tuning"], - - } -} as const satisfies ModelMeta - -const GPT4_1_NANO = { - name: "gpt-4.1-nano", - context_window: 1_047_576, - max_output_tokens: 32_768, - knowledge_cutoff: "2024-01-01", - pricing: { - input: { - normal: 0.1, - cached: 0.025 - }, - output: { - normal: 0.4 - } - }, - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "chat-completions", "assistants", "fine-tuning", "batch"], - features: ["streaming", "function_calling", "structured_outputs", "fine_tuning", "predicted_outcomes"], - - } -} as const satisfies ModelMeta - -const O1_PRO = { - name: "o1-pro", - context_window: 200_000, - max_output_tokens: 100_000, - knowledge_cutoff: "2023-10-01", - pricing: { - input: { - normal: 150, - - }, - output: { - normal: 600 - } - }, - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "batch",], - features: ["function_calling", "structured_outputs",], - - } -} as const satisfies ModelMeta - -const COMPUTER_USE_PREVIEW = { - name: "computer-use-preview", - context_window: 8_192, - max_output_tokens: 1_024, - knowledge_cutoff: 
"2023-10-01", - pricing: { - input: { - normal: 3 - }, - output: { - normal: 12 - } - }, - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "batch"], - features: ["function_calling"], - - } -} as const satisfies ModelMeta - -const GPT_4O_MINI_SEARCH_PREVIEW = { - name: "gpt-4o-mini-search-preview", - context_window: 128_000, - max_output_tokens: 16_384, - knowledge_cutoff: "2023-10-01", - pricing: { - input: { - normal: 0.15, - }, - output: { - normal: 0.6 - } - }, - supports: { - input: ["text",], - output: ["text"], - endpoints: ["chat-completions",], - features: ["streaming", "structured_outputs",], - } -} as const satisfies ModelMeta - -const GPT_4O_SEARCH_PREVIEW = { - name: "gpt-4o-search-preview", - context_window: 128_000, - max_output_tokens: 16_384, - knowledge_cutoff: "2023-10-01", - pricing: { - input: { - normal: 2.5, - }, - output: { - normal: 10 - } - }, - supports: { - input: ["text",], - output: ["text"], - endpoints: ["chat-completions",], - features: ["streaming", "structured_outputs",], - } -} as const satisfies ModelMeta - -const O3_MINI = { - name: "o3-mini", - context_window: 200_000, - max_output_tokens: 100_000, - knowledge_cutoff: "2023-10-01", - pricing: { - input: { - normal: 1.1, - cached: 0.55 - }, - output: { - normal: 4.4 - } - }, - supports: { - input: ["text"], - output: ["text"], - endpoints: ["chat", "batch", "chat-completions", "assistants"], - features: ["function_calling", "structured_outputs", "streaming"], - - } -} as const satisfies ModelMeta - -const GPT_4O_MINI_AUDIO = { - name: "gpt-4o-mini-audio", - context_window: 128_000, - max_output_tokens: 16_384, - knowledge_cutoff: "2023-10-01", - pricing: { - // todo audio tokens - input: { - normal: 0.15, - - }, - output: { - normal: 0.6 - } - }, - supports: { - input: ["text", "audio"], - output: ["text", "audio"], - endpoints: ["chat-completions"], - features: ["function_calling", "streaming"], - } -} as const satisfies ModelMeta - -const 
GPT_4O_MINI_REALTIME = { - name: "gpt-4o-mini-realtime", - context_window: 16_000, - max_output_tokens: 4_096, - knowledge_cutoff: "2023-10-01", - pricing: { - // todo add audio tokens - input: { - normal: 0.6, - cached: 0.3 - }, - output: { - normal: 2.4 - } - }, - supports: { - input: ["text", "audio",], - output: ["text", "audio"], - endpoints: ["realtime"], - features: ["function_calling",], - } -} as const satisfies ModelMeta - -const O1 = { - name: "o1", - context_window: 200_000, - max_output_tokens: 100_000, - knowledge_cutoff: "2023-10-01", - pricing: { - input: { - normal: 15, - cached: 7.5 - }, - output: { - normal: 60 - } - }, - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "batch", "chat-completions", "assistants"], - features: ["function_calling", "structured_outputs", "streaming"], - - } -} as const satisfies ModelMeta - -const OMNI_MODERATION = { - name: "omni-moderation", - pricing: { - input: { - normal: 0 - }, output: { - normal: 0 - } - }, - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["batch", "moderation"], - features: [] - }, - -} as const satisfies ModelMeta - -const GPT_4O = { - name: "gpt-4o", - context_window: 128_000, - max_output_tokens: 16_384, - knowledge_cutoff: "2023-10-01", - pricing: { - - input: { - normal: 2.5, - cached: 1.25 - }, - output: { - normal: 10 - } - }, - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "chat-completions", "assistants", "fine-tuning", "batch"], - features: ["streaming", "function_calling", "structured_outputs", "distillation", "fine_tuning", "predicted_outcomes"], - } -} as const satisfies ModelMeta - - -const GPT_4O_AUDIO = { - name: "gpt-4o-audio", - context_window: 128_000, - max_output_tokens: 16_384, - knowledge_cutoff: "2023-10-01", - pricing: { - // todo audio tokens - input: { - normal: 2.5, - - }, - output: { - normal: 10 - } - }, - supports: { - input: ["text", "audio"], - output: ["text", 
"audio"], - endpoints: ["chat-completions",], - features: ["streaming", "function_calling",], - } -} as const satisfies ModelMeta - -const GPT_40_MINI = { - name: "gpt-4o-mini", - context_window: 128_000, - max_output_tokens: 16_384, - knowledge_cutoff: "2023-10-01", - pricing: { - input: { - normal: 0.15, - cached: 0.075 - }, - output: { - normal: 0.6 - } - }, - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "chat-completions", "assistants", "fine-tuning", "batch"], - features: ["streaming", "function_calling", "structured_outputs", "fine_tuning", "predicted_outcomes"], - } -} as const satisfies ModelMeta - -const GPT__4O_REALTIME = { - name: "gpt-4o-realtime", - context_window: 32_000, - max_output_tokens: 4_096, - knowledge_cutoff: "2023-10-01", - pricing: { - // todo add audio tokens to input output - input: { - normal: 5, - cached: 2.5, - }, - output: { - normal: 20 - } - }, - supports: { - input: ["text", "audio",], - output: ["text", "audio"], - endpoints: ["realtime"], - features: ["function_calling"], - - } -} as const satisfies ModelMeta - -const GPT_4_TURBO = { - name: "gpt-4-turbo", - context_window: 128_000, - max_output_tokens: 4_096, - knowledge_cutoff: "2023-12-01", - pricing: { - input: { - normal: 10 - }, - output: { - normal: 30 - } - }, - supports: { - input: ["text", "image",], - output: ["text",], - endpoints: ["chat", "chat-completions", "assistants", "batch"], - features: ["function_calling", "streaming"], - - } -} as const satisfies ModelMeta - -const CHATGPT_40 = { - name: "chatgpt-4.0", - context_window: 128_000, - max_output_tokens: 4_096, - knowledge_cutoff: "2023-10-01", - pricing: { - input: { - normal: 5 - }, - output: { - normal: 15 - } - }, - supports: { - input: ["text", "image",], - output: ["text",], - endpoints: ["chat", "chat-completions",], - features: ["predicted_outcomes", "streaming"], - } -} as const satisfies ModelMeta - -const GPT_5_1_CODEX_MINI = { - name: "gpt-5.1-codex-mini", - 
context_window: 400_000, - max_output_tokens: 128_000, - knowledge_cutoff: "2024-09-30", - pricing: { - input: { - normal: 0.25, - cached: 0.025 - }, - output: { - normal: 2 - } - }, - supports: { - input: ["text", "image",], - output: ["text", "image"], - endpoints: ["chat",], - features: ["streaming", "function_calling", "structured_outputs"], - } -} as const satisfies ModelMeta< - OpenAIBaseOptions & - OpenAIReasoningOptions & - OpenAIStructuredOutputOptions & - OpenAIToolsOptions & - OpenAIStreamingOptions & - OpenAIMetadataOptions -> - - -const CODEX_MINI_LATEST = { - name: "codex-mini-latest", - context_window: 200_000, - max_output_tokens: 100_000, - knowledge_cutoff: "2024-06-01", - pricing: { - input: { - normal: 1.5, - cached: 0.375 - }, - output: { - normal: 6 - } - }, - supports: { - input: ["text", "image",], - output: ["text"], - endpoints: ["chat",], - features: ["streaming", "function_calling", "structured_outputs"], - } -} as const satisfies ModelMeta< - OpenAIBaseOptions & - OpenAIReasoningOptions & - OpenAIStructuredOutputOptions & - OpenAIToolsOptions & - OpenAIStreamingOptions & - OpenAIMetadataOptions -> - -const DALL_E_2 = { - name: "dall-e-2", - pricing: { - // todo image tokens - input: { - normal: 0.016, - - }, - output: { - normal: 0.02 - } - }, - supports: { - input: ["text",], - output: ["image"], - endpoints: ["image-generation", "image-edit",], - features: [], - } -} as const satisfies ModelMeta - -const DALL_E_3 = { - name: "dall-e-3", - pricing: { - // todo image tokens - input: { - normal: 0.04, - } - , - output: { - normal: 0.08 - } - }, - supports: { - input: ["text",], - output: ["image"], - endpoints: ["image-generation", "image-edit",], - features: [], - } -} as const satisfies ModelMeta - -const GPT_3_5_TURBO = { - name: "gpt-3.5-turbo", - context_window: 16_385, - max_output_tokens: 4_096, - knowledge_cutoff: "2021-09-01", - pricing: { - input: { - normal: 0.5, - - }, - output: { - normal: 1.5 - } - }, - supports: { - input: 
["text",], - output: ["text",], - endpoints: ["chat", "chat-completions", "batch", "fine-tuning"], - features: ["fine_tuning"], - } -} as const satisfies ModelMeta - -const GPT_4 = { - name: "gpt-4", - context_window: 8_192, - max_output_tokens: 8_192, - knowledge_cutoff: "2023-12-01", - pricing: { - input: { - normal: 30, - - }, - output: { - normal: 60 - } - }, - supports: { - input: ["text",], - output: ["text",], - endpoints: ["chat", "chat-completions", "batch", "fine-tuning", "assistants"], - features: ["fine_tuning", "streaming"], - } -} as const satisfies ModelMeta - -const GPT_4O_MINI_TRANSCRIBE = { - name: "gpt-4o-mini-transcribe", - context_window: 16_000, - max_output_tokens: 2_000, - knowledge_cutoff: "2024-01-01", - pricing: { - // todo audio tokens - input: { - normal: 1.25, - }, - output: { - normal: 5 - } - }, - supports: { - input: ["audio", "text"], - output: ["text"], - endpoints: ["realtime", "transcription"], - features: [] - } -} as const satisfies ModelMeta - -const GPT_4O_MINI_TTS = { - name: "gpt-4o-mini-tts", - pricing: { - // todo audio tokens - input: { - normal: 0.6, - }, - output: { - normal: 12 - } - }, - supports: { - input: ["text"], - output: ["audio"], - endpoints: ["speech_generation"], - features: [] - } -} as const satisfies ModelMeta - - -const GPT_4O_TRANSCRIBE = { - name: "gpt-4o-transcribe", - context_window: 16_000, - max_output_tokens: 2_000, - knowledge_cutoff: "2024-06-01", - pricing: { - // todo audio tokens - input: { - normal: 2.5, - }, - output: { - normal: 10 - } - }, - supports: { - input: ["audio", "text"], - output: ["text"], - endpoints: ["realtime", "transcription"], - features: [] - } -} as const satisfies ModelMeta - -const GPT_4O_TRANSCRIBE_DIARIZE = { - name: "gpt-4o-transcribe-diarize", - context_window: 16_000, - max_output_tokens: 2_000, - knowledge_cutoff: "2024-06-01", - pricing: { - // todo audio tokens - input: { - normal: 2.5, - }, - output: { - normal: 10 - } - }, - supports: { - input: ["audio", 
"text"], - output: ["text"], - endpoints: ["transcription"], - features: [] - } -} as const satisfies ModelMeta - -const GPT_5_1_CHAT = { - name: "gpt-5.1-chat", - context_window: 128_000, - max_output_tokens: 16_384, - knowledge_cutoff: "2024-09-30", - pricing: { - input: { - normal: 1.25, - cached: 0.125 - }, - output: { - normal: 10 - } - }, - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "chat-completions"], - features: ["streaming", "function_calling", "structured_outputs"], - } -} as const satisfies ModelMeta< - OpenAIBaseOptions & - OpenAIReasoningOptions & - OpenAIStructuredOutputOptions & - OpenAIToolsOptions & - OpenAIStreamingOptions & - OpenAIMetadataOptions -> - -const GPT_5_CHAT = { - name: "gpt-5-chat", - context_window: 128_000, - max_output_tokens: 16_384, - knowledge_cutoff: "2024-09-30", - pricing: { - input: { - normal: 1.25, - cached: 0.125 - }, - output: { - normal: 10 - } - }, - supports: { - input: ["text", "image"], - output: ["text"], - endpoints: ["chat", "chat-completions"], - features: ["streaming", "function_calling", "structured_outputs"], - tools: [ - "web_search", - "file_search", - "image_generation", - "code_interpreter", - "mcp" - ] - } -} as const satisfies ModelMeta< - OpenAIBaseOptions & - OpenAIReasoningOptions & - OpenAIStructuredOutputOptions & - OpenAIToolsOptions & - OpenAIStreamingOptions & - OpenAIMetadataOptions -> - -const TEXT_EMBEDDING_3_LARGE = { - name: "text-embedding-3-large", - pricing: { - // todo embedding tokens - input: { - normal: 0.13 - }, - output: { - normal: 0.13 - } - }, - supports: { - input: ["text"], - output: ["text"], - endpoints: ["embedding", "batch"], - features: [] - } -} as const satisfies ModelMeta - -const TEXT_EMBEDDING_3_SMALL = { - name: "text-embedding-3-small", - pricing: { - // todo embedding tokens - input: { - normal: 0.02 - }, - output: { - normal: 0.02 - } - }, - supports: { - input: ["text"], - output: ["text"], - endpoints: ["embedding", 
"batch"], - features: [] - } -} as const satisfies ModelMeta - - -const TEXT_EMBEDDING_3_ADA_002 = { - name: "text-embedding-3-ada-002", - pricing: { - // todo embedding tokens - input: { - normal: 0.1 - }, - output: { - normal: 0.1 - } - }, - supports: { - input: ["text"], - output: ["text"], - endpoints: ["embedding", "batch"], - features: [] - } -} as const satisfies ModelMeta - -const TTS_1 = { - name: "tts-1", - pricing: { - // todo figure out pricing - input: { - normal: 15 - }, - output: { - normal: 15 - } - }, - supports: { - input: ["text"], - output: ["audio"], - endpoints: ["speech_generation"], - features: [] - } -} as const satisfies ModelMeta - -const TTS_1_HD = { - name: "tts-1-hd", - pricing: { - // todo figure out pricing - input: { - normal: 30 - }, - output: { - normal: 30 - } - }, - supports: { - input: ["text"], - output: ["audio"], - endpoints: ["speech_generation"], - features: [] - } -} as const satisfies ModelMeta - -// Chat/text completion models (based on endpoints: "chat" or "chat-completions") -export const OPENAI_CHAT_MODELS = [ - // Frontier models - GPT5_1.name, - GPT5_1_CODEX.name, - GPT5.name, - GPT5_MINI.name, - GPT5_NANO.name, - GPT5_PRO.name, - GPT5_CODEX.name, - // Reasoning models - O3.name, - O3_PRO.name, - O3_MINI.name, - O4_MINI.name, - O3_DEEP_RESEARCH.name, - O4_MINI_DEEP_RESEARCH.name, - // GPT-4 series - GPT4_1.name, - GPT4_1_MINI.name, - GPT4_1_NANO.name, - GPT_4.name, - GPT_4_TURBO.name, - GPT_4O.name, - GPT_40_MINI.name, - // GPT-3.5 - GPT_3_5_TURBO.name, - // Audio-enabled chat models - GPT_AUDIO.name, - GPT_AUDIO_MINI.name, - GPT_4O_AUDIO.name, - GPT_4O_MINI_AUDIO.name, - // ChatGPT models - GPT_5_1_CHAT.name, - GPT_5_CHAT.name, - CHATGPT_40.name, - // Specialized - GPT_5_1_CODEX_MINI.name, - CODEX_MINI_LATEST.name, - // Preview models - GPT_4O_SEARCH_PREVIEW.name, - GPT_4O_MINI_SEARCH_PREVIEW.name, - COMPUTER_USE_PREVIEW.name, - // Legacy reasoning - O1.name, - O1_PRO.name, -] as const; - -// Image generation 
models (based on endpoints: "image-generation" or "image-edit") -export const OPENAI_IMAGE_MODELS = [ - GPT_IMAGE_1.name, - GPT_IMAGE_1_MINI.name, - DALL_E_3.name, - DALL_E_2.name, -] as const; - -// Embedding models (based on endpoints: "embedding") -export const OPENAI_EMBEDDING_MODELS = [ - TEXT_EMBEDDING_3_LARGE.name, - TEXT_EMBEDDING_3_SMALL.name, - TEXT_EMBEDDING_3_ADA_002.name, -] as const; - -// Audio models (based on endpoints: "transcription", "speech_generation", or "realtime") -export const OPENAI_AUDIO_MODELS = [ - // Transcription models - GPT_4O_TRANSCRIBE.name, - GPT_4O_TRANSCRIBE_DIARIZE.name, - GPT_4O_MINI_TRANSCRIBE.name, - // Realtime models - GPT_REALTIME.name, - GPT_REALTIME_MINI.name, - GPT__4O_REALTIME.name, - GPT_4O_MINI_REALTIME.name, - // Text-to-speech models - GPT_4O_MINI_TTS.name, - TTS_1.name, - TTS_1_HD.name, -] as const; - -// Transcription-only models (based on endpoints: "transcription") -export const OPENAI_TRANSCRIPTION_MODELS = [ - GPT_4O_TRANSCRIBE.name, - GPT_4O_TRANSCRIBE_DIARIZE.name, - GPT_4O_MINI_TRANSCRIBE.name, -] as const; - -// Video generation models (based on endpoints: "video") -export const OPENAI_VIDEO_MODELS = [ - SORA2.name, - SORA2_PRO.name, -] as const; - -export const OPENAI_MODERATION_MODELS = [ - OMNI_MODERATION.name, -] as const; - - -export type OpenAIChatModel = (typeof OPENAI_CHAT_MODELS)[number]; -export type OpenAIImageModel = (typeof OPENAI_IMAGE_MODELS)[number]; -export type OpenAIEmbeddingModel = (typeof OPENAI_EMBEDDING_MODELS)[number]; -export type OpenAIAudioModel = (typeof OPENAI_AUDIO_MODELS)[number]; -export type OpenAIVideoModel = (typeof OPENAI_VIDEO_MODELS)[number]; -export type OpenAITranscriptionModel = (typeof OPENAI_TRANSCRIPTION_MODELS)[number]; - -export const OPENAI_MODEL_META = { - [GPT5_1.name]: GPT5_1, - [GPT5_1_CODEX.name]: GPT5_1_CODEX, - [GPT5.name]: GPT5, - [GPT5_MINI.name]: GPT5_MINI, - [GPT5_NANO.name]: GPT5_NANO, - [GPT5_PRO.name]: GPT5_PRO, - [GPT5_CODEX.name]: 
GPT5_CODEX, - [SORA2.name]: SORA2, - [SORA2_PRO.name]: SORA2_PRO, - [GPT_IMAGE_1.name]: GPT_IMAGE_1, - [GPT_IMAGE_1_MINI.name]: GPT_IMAGE_1_MINI, - [O3_DEEP_RESEARCH.name]: O3_DEEP_RESEARCH, - [O4_MINI_DEEP_RESEARCH.name]: O4_MINI_DEEP_RESEARCH, - [O3_PRO.name]: O3_PRO, - [GPT_AUDIO.name]: GPT_AUDIO, - [GPT_REALTIME.name]: GPT_REALTIME, - [GPT_REALTIME_MINI.name]: GPT_REALTIME_MINI, - [GPT_AUDIO_MINI.name]: GPT_AUDIO_MINI, - [O3.name]: O3, - [O4_MINI.name]: O4_MINI, - [GPT4_1.name]: GPT4_1, - [GPT4_1_MINI.name]: GPT4_1_MINI, - [GPT4_1_NANO.name]: GPT4_1_NANO, - [O1_PRO.name]: O1_PRO, - [COMPUTER_USE_PREVIEW.name]: COMPUTER_USE_PREVIEW, - [GPT_4O_MINI_SEARCH_PREVIEW.name]: GPT_4O_MINI_SEARCH_PREVIEW, - [GPT_4O_SEARCH_PREVIEW.name]: GPT_4O_SEARCH_PREVIEW, - [O3_MINI.name]: O3_MINI, - [GPT_4O_MINI_AUDIO.name]: GPT_4O_MINI_AUDIO, - [GPT_4O_MINI_REALTIME.name]: GPT_4O_MINI_REALTIME, - [O1.name]: O1, - [OMNI_MODERATION.name]: OMNI_MODERATION, - [GPT_4O.name]: GPT_4O, - [GPT_4O_AUDIO.name]: GPT_4O_AUDIO, - [GPT_40_MINI.name]: GPT_40_MINI, - [GPT__4O_REALTIME.name]: GPT__4O_REALTIME, - [GPT_4_TURBO.name]: GPT_4_TURBO, - [CHATGPT_40.name]: CHATGPT_40, - [GPT_5_1_CODEX_MINI.name]: GPT_5_1_CODEX_MINI, - [CODEX_MINI_LATEST.name]: CODEX_MINI_LATEST, - [DALL_E_2.name]: DALL_E_2, - [DALL_E_3.name]: DALL_E_3, - [GPT_3_5_TURBO.name]: GPT_3_5_TURBO, - [GPT_4.name]: GPT_4, - [GPT_4O_MINI_TRANSCRIBE.name]: GPT_4O_MINI_TRANSCRIBE, - [GPT_4O_MINI_TTS.name]: GPT_4O_MINI_TTS, - [GPT_4O_TRANSCRIBE.name]: GPT_4O_TRANSCRIBE, - [GPT_4O_TRANSCRIBE_DIARIZE.name]: GPT_4O_TRANSCRIBE_DIARIZE, - [GPT_5_1_CHAT.name]: GPT_5_1_CHAT, - [GPT_5_CHAT.name]: GPT_5_CHAT, - [TEXT_EMBEDDING_3_LARGE.name]: TEXT_EMBEDDING_3_LARGE, - [TEXT_EMBEDDING_3_SMALL.name]: TEXT_EMBEDDING_3_SMALL, - [TEXT_EMBEDDING_3_ADA_002.name]: TEXT_EMBEDDING_3_ADA_002, - [TTS_1.name]: TTS_1, - [TTS_1_HD.name]: TTS_1_HD, -} as const; - -export type OpenAIModelMetaMap = typeof OPENAI_MODEL_META; - -/** - * Type-only map from chat model 
name to its provider options type. - * Used by the core AI types (via the adapter) to narrow - * `providerOptions` based on the selected model. - * - * Manually defined to ensure accurate type narrowing per model. - */ -export type OpenAIChatModelProviderOptionsByName = { - // Models WITH structured output support (have 'text' field) - "gpt-5.1": OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIStructuredOutputOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "gpt-5.1-codex": OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIStructuredOutputOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "gpt-5": OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIStructuredOutputOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "gpt-5-mini": OpenAIBaseOptions & OpenAIStructuredOutputOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "gpt-5-nano": OpenAIBaseOptions & OpenAIStructuredOutputOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "gpt-5-pro": OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIStructuredOutputOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "gpt-5-codex": OpenAIBaseOptions & OpenAIStructuredOutputOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "gpt-4.1": OpenAIBaseOptions & OpenAIStructuredOutputOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "gpt-4.1-mini": OpenAIBaseOptions & OpenAIStructuredOutputOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "gpt-4.1-nano": OpenAIBaseOptions & OpenAIStructuredOutputOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "gpt-4o": OpenAIBaseOptions & OpenAIStructuredOutputOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "gpt-4o-mini": OpenAIBaseOptions & 
OpenAIStructuredOutputOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - - // Models WITHOUT structured output support (NO 'text' field) - "gpt-4": OpenAIBaseOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "gpt-4-turbo": OpenAIBaseOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "gpt-3.5-turbo": OpenAIBaseOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "chatgpt-4.0": OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "o3": OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIMetadataOptions; - "o3-pro": OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIMetadataOptions; - "o3-mini": OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIMetadataOptions; - "o4-mini": OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIMetadataOptions; - "o3-deep-research": OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIMetadataOptions; - "o4-mini-deep-research": OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIMetadataOptions; - "o1": OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIMetadataOptions; - "o1-pro": OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIMetadataOptions; - - // Audio models - "gpt-audio": OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "gpt-audio-mini": OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "gpt-4o-audio": OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "gpt-4o-mini-audio": OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - - // Chat-only models - "gpt-5.1-chat": OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIStructuredOutputOptions & OpenAIMetadataOptions; - "gpt-5-chat": OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIStructuredOutputOptions & OpenAIMetadataOptions; - - // Codex models - "gpt-5.1-codex-mini": OpenAIBaseOptions & OpenAIStructuredOutputOptions & OpenAIToolsOptions & OpenAIStreamingOptions & 
OpenAIMetadataOptions; - "codex-mini-latest": OpenAIBaseOptions & OpenAIStructuredOutputOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - - // Search models - "gpt-4o-search-preview": OpenAIBaseOptions & OpenAIStructuredOutputOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - "gpt-4o-mini-search-preview": OpenAIBaseOptions & OpenAIStructuredOutputOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; - - // Special models - "computer-use-preview": OpenAIBaseOptions & OpenAIToolsOptions & OpenAIStreamingOptions & OpenAIMetadataOptions; -}; +import type { + OpenAIBaseOptions, + OpenAIMetadataOptions, + OpenAIReasoningOptions, + OpenAIStreamingOptions, + OpenAIStructuredOutputOptions, + OpenAIToolsOptions, +} from './text/text-provider-options' + +interface ModelMeta<TProviderOptions = unknown> { + name: string + supports: { + input: Array<'text' | 'image' | 'audio' | 'video'> + output: Array<'text' | 'image' | 'audio' | 'video'> + endpoints: Array< + | 'chat' + | 'chat-completions' + | 'assistants' + | 'speech_generation' + | 'image-generation' + | 'fine-tuning' + | 'batch' + | 'image-edit' + | 'moderation' + | 'translation' + | 'realtime' + | 'embedding' + | 'audio' + | 'video' + | 'transcription' + > + features: Array< + | 'streaming' + | 'function_calling' + | 'structured_outputs' + | 'predicted_outcomes' + | 'distillation' + | 'fine_tuning' + > + tools?: Array< + | 'web_search' + | 'file_search' + | 'image_generation' + | 'code_interpreter' + | 'mcp' + | 'computer_use' + > + } + context_window?: number + max_output_tokens?: number + knowledge_cutoff?: string + pricing: { + input: { + normal: number + cached?: number + } + output: { + normal: number + } + } + /** + * Type-level description of which provider options this model supports. 
+ */ + providerOptions?: TProviderOptions +} + +const GPT5_1 = { + name: 'gpt-5.1', + context_window: 400_000, + max_output_tokens: 128_000, + knowledge_cutoff: '2024-09-30', + supports: { + input: ['text', 'image'], + output: ['text', 'image'], + endpoints: ['chat', 'chat-completions'], + features: [ + 'streaming', + 'function_calling', + 'structured_outputs', + 'distillation', + ], + tools: [ + 'web_search', + 'file_search', + 'image_generation', + 'code_interpreter', + 'mcp', + ], + }, + pricing: { + input: { + normal: 1.25, + cached: 0.125, + }, + output: { + normal: 10, + }, + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const GPT5_1_CODEX = { + name: 'gpt-5.1-codex', + context_window: 400_000, + max_output_tokens: 128_000, + knowledge_cutoff: '2024-09-30', + supports: { + input: ['text', 'image'], + output: ['text', 'image'], + endpoints: ['chat'], + features: ['streaming', 'function_calling', 'structured_outputs'], + }, + pricing: { + input: { + normal: 1.25, + cached: 0.125, + }, + output: { + normal: 10, + }, + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const GPT5 = { + name: 'gpt-5', + context_window: 400_000, + max_output_tokens: 128_000, + knowledge_cutoff: '2024-09-30', + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: ['chat', 'chat-completions', 'batch'], + features: [ + 'streaming', + 'function_calling', + 'structured_outputs', + 'distillation', + ], + tools: [ + 'web_search', + 'file_search', + 'image_generation', + 'code_interpreter', + 'mcp', + ], + }, + pricing: { + input: { + normal: 1.25, + cached: 0.125, + }, + output: { + normal: 10, + }, + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + 
OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const GPT5_MINI = { + name: 'gpt-5-mini', + context_window: 400_000, + max_output_tokens: 128_000, + knowledge_cutoff: '2024-05-31', + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: ['chat', 'chat-completions', 'batch'], + features: ['streaming', 'structured_outputs', 'function_calling'], + tools: ['web_search', 'file_search', 'mcp', 'code_interpreter'], + }, + pricing: { + input: { + normal: 0.25, + cached: 0.025, + }, + output: { + normal: 2, + }, + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const GPT5_NANO = { + name: 'gpt-5-nano', + context_window: 400_000, + max_output_tokens: 128_000, + knowledge_cutoff: '2024-05-31', + pricing: { + input: { + normal: 0.05, + cached: 0.005, + }, + output: { + normal: 0.4, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: ['chat', 'chat-completions', 'batch'], + features: ['streaming', 'structured_outputs', 'function_calling'], + tools: [ + 'web_search', + 'file_search', + 'mcp', + 'image_generation', + 'code_interpreter', + ], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const GPT5_PRO = { + name: 'gpt-5-pro', + context_window: 400_000, + max_output_tokens: 272_000, + knowledge_cutoff: '2024-09-30', + pricing: { + input: { + normal: 15, + }, + output: { + normal: 120, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: ['chat', 'batch'], + features: ['streaming', 'structured_outputs', 'function_calling'], + tools: ['web_search', 'file_search', 'image_generation', 'mcp'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + 
OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const GPT5_CODEX = { + name: 'gpt-5-codex', + context_window: 400_000, + max_output_tokens: 128_000, + knowledge_cutoff: '2024-09-30', + pricing: { + input: { + normal: 1.25, + cached: 0.125, + }, + output: { + normal: 10, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text', 'image'], + endpoints: ['chat'], + features: ['streaming', 'structured_outputs', 'function_calling'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +/* const SORA2 = { + name: 'sora-2', + pricing: { + input: { + normal: 0, + }, + output: { + // per second of video + normal: 0.1, + }, + }, + supports: { + input: ['text', 'image'], + output: ['video', 'audio'], + endpoints: ['video'], + features: [], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> + +const SORA2_PRO = { + name: 'sora-2-pro', + pricing: { + input: { + normal: 0, + }, + output: { + // per second of video + normal: 0.5, + }, + }, + supports: { + input: ['text', 'image'], + output: ['video', 'audio'], + endpoints: ['video'], + features: [], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> + +const GPT_IMAGE_1 = { + name: 'gpt-image-1', + // todo fix for images + pricing: { + input: { + normal: 5, + cached: 1.25, + }, + output: { + normal: 0.1, + }, + }, + supports: { + input: ['text', 'image'], + output: ['image'], + endpoints: ['image-generation', 'image-edit'], + + features: [], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> + +const GPT_IMAGE_1_MINI = { + name: 'gpt-image-1-mini', + // todo fix for images + pricing: { + input: { + normal: 2, + 
cached: 0.2, + }, + output: { + normal: 0.03, + }, + }, + supports: { + input: ['text', 'image'], + output: ['image'], + endpoints: ['image-generation', 'image-edit'], + + features: [], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> */ + +const O3_DEEP_RESEARCH = { + name: 'o3-deep-research', + context_window: 200_000, + max_output_tokens: 100_000, + knowledge_cutoff: '2024-01-01', + pricing: { + input: { + normal: 10, + cached: 2.5, + }, + output: { + normal: 40, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: ['chat', 'batch'], + features: ['streaming'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const O4_MINI_DEEP_RESEARCH = { + name: 'o4-mini-deep-research', + context_window: 200_000, + max_output_tokens: 100_000, + knowledge_cutoff: '2024-01-01', + pricing: { + input: { + normal: 2, + cached: 0.5, + }, + output: { + normal: 8, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: ['chat', 'batch'], + features: ['streaming'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const O3_PRO = { + name: 'o3-pro', + context_window: 200_000, + max_output_tokens: 100_000, + knowledge_cutoff: '2024-01-01', + pricing: { + input: { + normal: 20, + }, + output: { + normal: 80, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: ['chat', 'batch'], + features: ['function_calling', 'structured_outputs'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const GPT_AUDIO = { + name: 'gpt-audio', + context_window: 128_000, + max_output_tokens: 16_384, + knowledge_cutoff: 
'2023-10-01', + pricing: { + // todo add audio tokens to input output + input: { + normal: 2.5, + }, + output: { + normal: 10, + }, + }, + supports: { + input: ['text', 'audio'], + output: ['text', 'audio'], + endpoints: ['chat-completions'], + features: ['function_calling'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +/* const GPT_REALTIME = { + name: 'gpt-realtime', + context_window: 32_000, + max_output_tokens: 4_096, + knowledge_cutoff: '2023-10-01', + pricing: { + // todo add audio tokens to input output + input: { + normal: 4, + cached: 0.5, + }, + output: { + normal: 16, + }, + }, + supports: { + input: ['text', 'audio', 'image'], + output: ['text', 'audio'], + endpoints: ['realtime'], + features: ['function_calling'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const GPT_REALTIME_MINI = { + name: 'gpt-realtime-mini', + context_window: 32_000, + max_output_tokens: 4_096, + knowledge_cutoff: '2023-10-01', + pricing: { + // todo add audio and image tokens to input output + input: { + normal: 0.6, + cached: 0.06, + }, + output: { + normal: 2.4, + }, + }, + supports: { + input: ['text', 'audio', 'image'], + output: ['text', 'audio'], + endpoints: ['realtime'], + features: ['function_calling'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> */ + +const GPT_AUDIO_MINI = { + name: 'gpt-audio-mini', + context_window: 128_000, + max_output_tokens: 16_384, + knowledge_cutoff: '2023-10-01', + pricing: { + // todo add audio tokens to input output + input: { + normal: 0.6, + }, + output: { + normal: 2.4, + }, + }, + supports: { + input: ['text', 'audio'], + output: ['text', 'audio'], + endpoints: ['chat-completions'], + features: ['function_calling'], + }, +} as const satisfies ModelMeta< + 
OpenAIBaseOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const O3 = { + name: 'o3', + context_window: 200_000, + max_output_tokens: 100_000, + knowledge_cutoff: '2024-01-01', + pricing: { + input: { + normal: 2, + cached: 0.5, + }, + output: { + normal: 8, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: ['chat', 'batch', 'chat-completions'], + features: ['function_calling', 'structured_outputs', 'streaming'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const O4_MINI = { + name: 'o4-mini', + context_window: 200_000, + max_output_tokens: 100_000, + knowledge_cutoff: '2024-01-01', + pricing: { + input: { + normal: 1.1, + cached: 0.275, + }, + output: { + normal: 4.4, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: ['chat', 'batch', 'chat-completions', 'fine-tuning'], + features: [ + 'function_calling', + 'structured_outputs', + 'streaming', + 'fine_tuning', + ], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const GPT4_1 = { + name: 'gpt-4.1', + context_window: 1_047_576, + max_output_tokens: 32_768, + knowledge_cutoff: '2024-01-01', + pricing: { + input: { + normal: 2, + cached: 0.5, + }, + output: { + normal: 8, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: [ + 'chat', + 'chat-completions', + 'assistants', + 'fine-tuning', + 'batch', + ], + features: [ + 'streaming', + 'function_calling', + 'structured_outputs', + 'distillation', + 'fine_tuning', + ], + tools: [ + 'web_search', + 'file_search', + 'image_generation', + 'code_interpreter', + 'mcp', + ], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIReasoningOptions 
& + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const GPT4_1_MINI = { + name: 'gpt-4.1-mini', + context_window: 1_047_576, + max_output_tokens: 32_768, + knowledge_cutoff: '2024-01-01', + pricing: { + input: { + normal: 0.4, + cached: 0.1, + }, + output: { + normal: 1.6, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: [ + 'chat', + 'chat-completions', + 'assistants', + 'fine-tuning', + 'batch', + ], + features: [ + 'streaming', + 'function_calling', + 'structured_outputs', + 'fine_tuning', + ], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const GPT4_1_NANO = { + name: 'gpt-4.1-nano', + context_window: 1_047_576, + max_output_tokens: 32_768, + knowledge_cutoff: '2024-01-01', + pricing: { + input: { + normal: 0.1, + cached: 0.025, + }, + output: { + normal: 0.4, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: [ + 'chat', + 'chat-completions', + 'assistants', + 'fine-tuning', + 'batch', + ], + features: [ + 'streaming', + 'function_calling', + 'structured_outputs', + 'fine_tuning', + 'predicted_outcomes', + ], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const O1_PRO = { + name: 'o1-pro', + context_window: 200_000, + max_output_tokens: 100_000, + knowledge_cutoff: '2023-10-01', + pricing: { + input: { + normal: 150, + }, + output: { + normal: 600, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: ['chat', 'batch'], + features: ['function_calling', 'structured_outputs'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + 
OpenAIMetadataOptions +> + +const COMPUTER_USE_PREVIEW = { + name: 'computer-use-preview', + context_window: 8_192, + max_output_tokens: 1_024, + knowledge_cutoff: '2023-10-01', + pricing: { + input: { + normal: 3, + }, + output: { + normal: 12, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: ['chat', 'batch'], + features: ['function_calling'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const GPT_4O_MINI_SEARCH_PREVIEW = { + name: 'gpt-4o-mini-search-preview', + context_window: 128_000, + max_output_tokens: 16_384, + knowledge_cutoff: '2023-10-01', + pricing: { + input: { + normal: 0.15, + }, + output: { + normal: 0.6, + }, + }, + supports: { + input: ['text'], + output: ['text'], + endpoints: ['chat-completions'], + features: ['streaming', 'structured_outputs'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> + +const GPT_4O_SEARCH_PREVIEW = { + name: 'gpt-4o-search-preview', + context_window: 128_000, + max_output_tokens: 16_384, + knowledge_cutoff: '2023-10-01', + pricing: { + input: { + normal: 2.5, + }, + output: { + normal: 10, + }, + }, + supports: { + input: ['text'], + output: ['text'], + endpoints: ['chat-completions'], + features: ['streaming', 'structured_outputs'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> + +const O3_MINI = { + name: 'o3-mini', + context_window: 200_000, + max_output_tokens: 100_000, + knowledge_cutoff: '2023-10-01', + pricing: { + input: { + normal: 1.1, + cached: 0.55, + }, + output: { + normal: 4.4, + }, + }, + supports: { + input: ['text'], + output: ['text'], + endpoints: ['chat', 'batch', 'chat-completions', 'assistants'], + features: ['function_calling', 'structured_outputs', 'streaming'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + 
OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const GPT_4O_MINI_AUDIO = { + name: 'gpt-4o-mini-audio', + context_window: 128_000, + max_output_tokens: 16_384, + knowledge_cutoff: '2023-10-01', + pricing: { + // todo audio tokens + input: { + normal: 0.15, + }, + output: { + normal: 0.6, + }, + }, + supports: { + input: ['text', 'audio'], + output: ['text', 'audio'], + endpoints: ['chat-completions'], + features: ['function_calling', 'streaming'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +/* const GPT_4O_MINI_REALTIME = { + name: 'gpt-4o-mini-realtime', + context_window: 16_000, + max_output_tokens: 4_096, + knowledge_cutoff: '2023-10-01', + pricing: { + // todo add audio tokens + input: { + normal: 0.6, + cached: 0.3, + }, + output: { + normal: 2.4, + }, + }, + supports: { + input: ['text', 'audio'], + output: ['text', 'audio'], + endpoints: ['realtime'], + features: ['function_calling'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + */ +const O1 = { + name: 'o1', + context_window: 200_000, + max_output_tokens: 100_000, + knowledge_cutoff: '2023-10-01', + pricing: { + input: { + normal: 15, + cached: 7.5, + }, + output: { + normal: 60, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: ['chat', 'batch', 'chat-completions', 'assistants'], + features: ['function_calling', 'structured_outputs', 'streaming'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +/* const OMNI_MODERATION = { + name: 'omni-moderation', + pricing: { + input: { + normal: 0, + }, + output: { + normal: 0, + }, + }, + supports: { + 
input: ['text', 'image'], + output: ['text'], + endpoints: ['batch', 'moderation'], + features: [], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> */ + +const GPT_4O = { + name: 'gpt-4o', + context_window: 128_000, + max_output_tokens: 16_384, + knowledge_cutoff: '2023-10-01', + pricing: { + input: { + normal: 2.5, + cached: 1.25, + }, + output: { + normal: 10, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: [ + 'chat', + 'chat-completions', + 'assistants', + 'fine-tuning', + 'batch', + ], + features: [ + 'streaming', + 'function_calling', + 'structured_outputs', + 'distillation', + 'fine_tuning', + 'predicted_outcomes', + ], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const GPT_4O_AUDIO = { + name: 'gpt-4o-audio', + context_window: 128_000, + max_output_tokens: 16_384, + knowledge_cutoff: '2023-10-01', + pricing: { + // todo audio tokens + input: { + normal: 2.5, + }, + output: { + normal: 10, + }, + }, + supports: { + input: ['text', 'audio'], + output: ['text', 'audio'], + endpoints: ['chat-completions'], + features: ['streaming', 'function_calling'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const GPT_40_MINI = { + name: 'gpt-4o-mini', + context_window: 128_000, + max_output_tokens: 16_384, + knowledge_cutoff: '2023-10-01', + pricing: { + input: { + normal: 0.15, + cached: 0.075, + }, + output: { + normal: 0.6, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: [ + 'chat', + 'chat-completions', + 'assistants', + 'fine-tuning', + 'batch', + ], + features: [ + 'streaming', + 'function_calling', + 'structured_outputs', + 'fine_tuning', + 'predicted_outcomes', + ], + }, +} as const 
satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +/* const GPT__4O_REALTIME = { + name: 'gpt-4o-realtime', + context_window: 32_000, + max_output_tokens: 4_096, + knowledge_cutoff: '2023-10-01', + pricing: { + // todo add audio tokens to input output + input: { + normal: 5, + cached: 2.5, + }, + output: { + normal: 20, + }, + }, + supports: { + input: ['text', 'audio'], + output: ['text', 'audio'], + endpoints: ['realtime'], + features: ['function_calling'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> */ + +const GPT_4_TURBO = { + name: 'gpt-4-turbo', + context_window: 128_000, + max_output_tokens: 4_096, + knowledge_cutoff: '2023-12-01', + pricing: { + input: { + normal: 10, + }, + output: { + normal: 30, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: ['chat', 'chat-completions', 'assistants', 'batch'], + features: ['function_calling', 'streaming'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const CHATGPT_40 = { + name: 'chatgpt-4.0', + context_window: 128_000, + max_output_tokens: 4_096, + knowledge_cutoff: '2023-10-01', + pricing: { + input: { + normal: 5, + }, + output: { + normal: 15, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: ['chat', 'chat-completions'], + features: ['predicted_outcomes', 'streaming'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> + +const GPT_5_1_CODEX_MINI = { + name: 'gpt-5.1-codex-mini', + context_window: 400_000, + max_output_tokens: 128_000, + knowledge_cutoff: '2024-09-30', + pricing: { + input: { + normal: 0.25, + cached: 0.025, + }, + output: { + normal: 2, + }, + }, + supports: { + input: ['text', 
'image'], + output: ['text', 'image'], + endpoints: ['chat'], + features: ['streaming', 'function_calling', 'structured_outputs'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const CODEX_MINI_LATEST = { + name: 'codex-mini-latest', + context_window: 200_000, + max_output_tokens: 100_000, + knowledge_cutoff: '2024-06-01', + pricing: { + input: { + normal: 1.5, + cached: 0.375, + }, + output: { + normal: 6, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: ['chat'], + features: ['streaming', 'function_calling', 'structured_outputs'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> +/* +const DALL_E_2 = { + name: 'dall-e-2', + pricing: { + // todo image tokens + input: { + normal: 0.016, + }, + output: { + normal: 0.02, + }, + }, + supports: { + input: ['text'], + output: ['image'], + endpoints: ['image-generation', 'image-edit'], + features: [], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> + +const DALL_E_3 = { + name: 'dall-e-3', + pricing: { + // todo image tokens + input: { + normal: 0.04, + }, + output: { + normal: 0.08, + }, + }, + supports: { + input: ['text'], + output: ['image'], + endpoints: ['image-generation', 'image-edit'], + features: [], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> */ + +const GPT_3_5_TURBO = { + name: 'gpt-3.5-turbo', + context_window: 16_385, + max_output_tokens: 4_096, + knowledge_cutoff: '2021-09-01', + pricing: { + input: { + normal: 0.5, + }, + output: { + normal: 1.5, + }, + }, + supports: { + input: ['text'], + output: ['text'], + endpoints: ['chat', 
'chat-completions', 'batch', 'fine-tuning'], + features: ['fine_tuning'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> + +const GPT_4 = { + name: 'gpt-4', + context_window: 8_192, + max_output_tokens: 8_192, + knowledge_cutoff: '2023-12-01', + pricing: { + input: { + normal: 30, + }, + output: { + normal: 60, + }, + }, + supports: { + input: ['text'], + output: ['text'], + endpoints: [ + 'chat', + 'chat-completions', + 'batch', + 'fine-tuning', + 'assistants', + ], + features: ['fine_tuning', 'streaming'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> +/* +const GPT_4O_MINI_TRANSCRIBE = { + name: 'gpt-4o-mini-transcribe', + context_window: 16_000, + max_output_tokens: 2_000, + knowledge_cutoff: '2024-01-01', + pricing: { + // todo audio tokens + input: { + normal: 1.25, + }, + output: { + normal: 5, + }, + }, + supports: { + input: ['audio', 'text'], + output: ['text'], + endpoints: ['realtime', 'transcription'], + features: [], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> + +const GPT_4O_MINI_TTS = { + name: 'gpt-4o-mini-tts', + pricing: { + // todo audio tokens + input: { + normal: 0.6, + }, + output: { + normal: 12, + }, + }, + supports: { + input: ['text'], + output: ['audio'], + endpoints: ['speech_generation'], + features: [], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> + +const GPT_4O_TRANSCRIBE = { + name: 'gpt-4o-transcribe', + context_window: 16_000, + max_output_tokens: 2_000, + knowledge_cutoff: '2024-06-01', + pricing: { + // todo audio tokens + input: { + normal: 2.5, + }, + output: { + normal: 10, + }, + }, + supports: { + input: ['audio', 'text'], + output: ['text'], + endpoints: ['realtime', 'transcription'], + features: [], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & 
OpenAIStreamingOptions & OpenAIMetadataOptions +> */ +/* +const GPT_4O_TRANSCRIBE_DIARIZE = { + name: 'gpt-4o-transcribe-diarize', + context_window: 16_000, + max_output_tokens: 2_000, + knowledge_cutoff: '2024-06-01', + pricing: { + // todo audio tokens + input: { + normal: 2.5, + }, + output: { + normal: 10, + }, + }, + supports: { + input: ['audio', 'text'], + output: ['text'], + endpoints: ['transcription'], + features: [], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> */ + +const GPT_5_1_CHAT = { + name: 'gpt-5.1-chat', + context_window: 128_000, + max_output_tokens: 16_384, + knowledge_cutoff: '2024-09-30', + pricing: { + input: { + normal: 1.25, + cached: 0.125, + }, + output: { + normal: 10, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: ['chat', 'chat-completions'], + features: ['streaming', 'function_calling', 'structured_outputs'], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const GPT_5_CHAT = { + name: 'gpt-5-chat', + context_window: 128_000, + max_output_tokens: 16_384, + knowledge_cutoff: '2024-09-30', + pricing: { + input: { + normal: 1.25, + cached: 0.125, + }, + output: { + normal: 10, + }, + }, + supports: { + input: ['text', 'image'], + output: ['text'], + endpoints: ['chat', 'chat-completions'], + features: ['streaming', 'function_calling', 'structured_outputs'], + tools: [ + 'web_search', + 'file_search', + 'image_generation', + 'code_interpreter', + 'mcp', + ], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +> + +const TEXT_EMBEDDING_3_LARGE = { + name: 'text-embedding-3-large', + pricing: { + // todo embedding tokens + input: { + normal: 0.13, 
+ }, + output: { + normal: 0.13, + }, + }, + supports: { + input: ['text'], + output: ['text'], + endpoints: ['embedding', 'batch'], + features: [], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> + +const TEXT_EMBEDDING_3_SMALL = { + name: 'text-embedding-3-small', + pricing: { + // todo embedding tokens + input: { + normal: 0.02, + }, + output: { + normal: 0.02, + }, + }, + supports: { + input: ['text'], + output: ['text'], + endpoints: ['embedding', 'batch'], + features: [], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> + +const TEXT_EMBEDDING_3_ADA_002 = { + name: 'text-embedding-ada-002', + pricing: { + // todo embedding tokens + input: { + normal: 0.1, + }, + output: { + normal: 0.1, + }, + }, + supports: { + input: ['text'], + output: ['text'], + endpoints: ['embedding', 'batch'], + features: [], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> + +/* const TTS_1 = { + name: 'tts-1', + pricing: { + // todo figure out pricing + input: { + normal: 15, + }, + output: { + normal: 15, + }, + }, + supports: { + input: ['text'], + output: ['audio'], + endpoints: ['speech_generation'], + features: [], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> + +const TTS_1_HD = { + name: 'tts-1-hd', + pricing: { + // todo figure out pricing + input: { + normal: 30, + }, + output: { + normal: 30, + }, + }, + supports: { + input: ['text'], + output: ['audio'], + endpoints: ['speech_generation'], + features: [], + }, +} as const satisfies ModelMeta< + OpenAIBaseOptions & OpenAIStreamingOptions & OpenAIMetadataOptions +> */ + +// Chat/text completion models (based on endpoints: "chat" or "chat-completions") +export const OPENAI_CHAT_MODELS = [ + // Frontier models + GPT5_1.name, + GPT5_1_CODEX.name, + GPT5.name, + GPT5_MINI.name, + 
GPT5_NANO.name, + GPT5_PRO.name, + GPT5_CODEX.name, + // Reasoning models + O3.name, + O3_PRO.name, + O3_MINI.name, + O4_MINI.name, + O3_DEEP_RESEARCH.name, + O4_MINI_DEEP_RESEARCH.name, + // GPT-4 series + GPT4_1.name, + GPT4_1_MINI.name, + GPT4_1_NANO.name, + GPT_4.name, + GPT_4_TURBO.name, + GPT_4O.name, + GPT_40_MINI.name, + // GPT-3.5 + GPT_3_5_TURBO.name, + // Audio-enabled chat models + GPT_AUDIO.name, + GPT_AUDIO_MINI.name, + GPT_4O_AUDIO.name, + GPT_4O_MINI_AUDIO.name, + // ChatGPT models + GPT_5_1_CHAT.name, + GPT_5_CHAT.name, + CHATGPT_40.name, + // Specialized + GPT_5_1_CODEX_MINI.name, + CODEX_MINI_LATEST.name, + // Preview models + GPT_4O_SEARCH_PREVIEW.name, + GPT_4O_MINI_SEARCH_PREVIEW.name, + COMPUTER_USE_PREVIEW.name, + // Legacy reasoning + O1.name, + O1_PRO.name, +] as const + +// Image generation models (based on endpoints: "image-generation" or "image-edit") +/* const OPENAI_IMAGE_MODELS = [ + GPT_IMAGE_1.name, + GPT_IMAGE_1_MINI.name, + DALL_E_3.name, + DALL_E_2.name, +] as const */ + +// Embedding models (based on endpoints: "embedding") +export const OPENAI_EMBEDDING_MODELS = [ + TEXT_EMBEDDING_3_LARGE.name, + TEXT_EMBEDDING_3_SMALL.name, + TEXT_EMBEDDING_3_ADA_002.name, +] as const + +// Audio models (based on endpoints: "transcription", "speech_generation", or "realtime") +/* const OPENAI_AUDIO_MODELS = [ + // Transcription models + GPT_4O_TRANSCRIBE.name, + GPT_4O_TRANSCRIBE_DIARIZE.name, + GPT_4O_MINI_TRANSCRIBE.name, + // Realtime models + GPT_REALTIME.name, + GPT_REALTIME_MINI.name, + GPT__4O_REALTIME.name, + GPT_4O_MINI_REALTIME.name, + // Text-to-speech models + GPT_4O_MINI_TTS.name, + TTS_1.name, + TTS_1_HD.name, +] as const */ + +// Transcription-only models (based on endpoints: "transcription") +/* const OPENAI_TRANSCRIPTION_MODELS = [ + GPT_4O_TRANSCRIBE.name, + GPT_4O_TRANSCRIBE_DIARIZE.name, + GPT_4O_MINI_TRANSCRIBE.name, +] as const + +// Video generation models (based on endpoints: "video") +const OPENAI_VIDEO_MODELS = 
[SORA2.name, SORA2_PRO.name] as const + */ +// const OPENAI_MODERATION_MODELS = [OMNI_MODERATION.name] as const + +// export type OpenAIChatModel = (typeof OPENAI_CHAT_MODELS)[number] +// type OpenAIImageModel = (typeof OPENAI_IMAGE_MODELS)[number] +// export type OpenAIEmbeddingModel = (typeof OPENAI_EMBEDDING_MODELS)[number] +// type OpenAIAudioModel = (typeof OPENAI_AUDIO_MODELS)[number] +// type OpenAIVideoModel = (typeof OPENAI_VIDEO_MODELS)[number] +// type OpenAITranscriptionModel = +// (typeof OPENAI_TRANSCRIPTION_MODELS)[number] + +/* const OPENAI_MODEL_META = { + [GPT5_1.name]: GPT5_1, + [GPT5_1_CODEX.name]: GPT5_1_CODEX, + [GPT5.name]: GPT5, + [GPT5_MINI.name]: GPT5_MINI, + [GPT5_NANO.name]: GPT5_NANO, + [GPT5_PRO.name]: GPT5_PRO, + [GPT5_CODEX.name]: GPT5_CODEX, + [SORA2.name]: SORA2, + [SORA2_PRO.name]: SORA2_PRO, + [GPT_IMAGE_1.name]: GPT_IMAGE_1, + [GPT_IMAGE_1_MINI.name]: GPT_IMAGE_1_MINI, + [O3_DEEP_RESEARCH.name]: O3_DEEP_RESEARCH, + [O4_MINI_DEEP_RESEARCH.name]: O4_MINI_DEEP_RESEARCH, + [O3_PRO.name]: O3_PRO, + [GPT_AUDIO.name]: GPT_AUDIO, + [GPT_REALTIME.name]: GPT_REALTIME, + [GPT_REALTIME_MINI.name]: GPT_REALTIME_MINI, + [GPT_AUDIO_MINI.name]: GPT_AUDIO_MINI, + [O3.name]: O3, + [O4_MINI.name]: O4_MINI, + [GPT4_1.name]: GPT4_1, + [GPT4_1_MINI.name]: GPT4_1_MINI, + [GPT4_1_NANO.name]: GPT4_1_NANO, + [O1_PRO.name]: O1_PRO, + [COMPUTER_USE_PREVIEW.name]: COMPUTER_USE_PREVIEW, + [GPT_4O_MINI_SEARCH_PREVIEW.name]: GPT_4O_MINI_SEARCH_PREVIEW, + [GPT_4O_SEARCH_PREVIEW.name]: GPT_4O_SEARCH_PREVIEW, + [O3_MINI.name]: O3_MINI, + [GPT_4O_MINI_AUDIO.name]: GPT_4O_MINI_AUDIO, + [GPT_4O_MINI_REALTIME.name]: GPT_4O_MINI_REALTIME, + [O1.name]: O1, + [OMNI_MODERATION.name]: OMNI_MODERATION, + [GPT_4O.name]: GPT_4O, + [GPT_4O_AUDIO.name]: GPT_4O_AUDIO, + [GPT_40_MINI.name]: GPT_40_MINI, + [GPT__4O_REALTIME.name]: GPT__4O_REALTIME, + [GPT_4_TURBO.name]: GPT_4_TURBO, + [CHATGPT_40.name]: CHATGPT_40, + [GPT_5_1_CODEX_MINI.name]: GPT_5_1_CODEX_MINI, + 
[CODEX_MINI_LATEST.name]: CODEX_MINI_LATEST, + [DALL_E_2.name]: DALL_E_2, + [DALL_E_3.name]: DALL_E_3, + [GPT_3_5_TURBO.name]: GPT_3_5_TURBO, + [GPT_4.name]: GPT_4, + [GPT_4O_MINI_TRANSCRIBE.name]: GPT_4O_MINI_TRANSCRIBE, + [GPT_4O_MINI_TTS.name]: GPT_4O_MINI_TTS, + [GPT_4O_TRANSCRIBE.name]: GPT_4O_TRANSCRIBE, + [GPT_4O_TRANSCRIBE_DIARIZE.name]: GPT_4O_TRANSCRIBE_DIARIZE, + [GPT_5_1_CHAT.name]: GPT_5_1_CHAT, + [GPT_5_CHAT.name]: GPT_5_CHAT, + [TEXT_EMBEDDING_3_LARGE.name]: TEXT_EMBEDDING_3_LARGE, + [TEXT_EMBEDDING_3_SMALL.name]: TEXT_EMBEDDING_3_SMALL, + [TEXT_EMBEDDING_3_ADA_002.name]: TEXT_EMBEDDING_3_ADA_002, + [TTS_1.name]: TTS_1, + [TTS_1_HD.name]: TTS_1_HD, +} as const + */ + +/** + * Type-only map from chat model name to its provider options type. + * Used by the core AI types (via the adapter) to narrow + * `providerOptions` based on the selected model. + * + * Manually defined to ensure accurate type narrowing per model. + */ +export type OpenAIChatModelProviderOptionsByName = { + // Models WITH structured output support (have 'text' field) + 'gpt-5.1': OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + 'gpt-5.1-codex': OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + 'gpt-5': OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + 'gpt-5-mini': OpenAIBaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + 'gpt-5-nano': OpenAIBaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + 'gpt-5-pro': OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + 
OpenAIStreamingOptions & + OpenAIMetadataOptions + 'gpt-5-codex': OpenAIBaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + 'gpt-4.1': OpenAIBaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + 'gpt-4.1-mini': OpenAIBaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + 'gpt-4.1-nano': OpenAIBaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + 'gpt-4o': OpenAIBaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + 'gpt-4o-mini': OpenAIBaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + + // Models WITHOUT structured output support (NO 'text' field) + 'gpt-4': OpenAIBaseOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + 'gpt-4-turbo': OpenAIBaseOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + 'gpt-3.5-turbo': OpenAIBaseOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + 'chatgpt-4.0': OpenAIBaseOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + o3: OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIMetadataOptions + 'o3-pro': OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIMetadataOptions + 'o3-mini': OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIMetadataOptions + 'o4-mini': OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIMetadataOptions + 'o3-deep-research': OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIMetadataOptions + 'o4-mini-deep-research': OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIMetadataOptions + o1: OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIMetadataOptions + 'o1-pro': 
OpenAIBaseOptions & OpenAIReasoningOptions & OpenAIMetadataOptions + + // Audio models + 'gpt-audio': OpenAIBaseOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + 'gpt-audio-mini': OpenAIBaseOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + 'gpt-4o-audio': OpenAIBaseOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + 'gpt-4o-mini-audio': OpenAIBaseOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + + // Chat-only models + 'gpt-5.1-chat': OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIMetadataOptions + 'gpt-5-chat': OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIMetadataOptions + + // Codex models + 'gpt-5.1-codex-mini': OpenAIBaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + 'codex-mini-latest': OpenAIBaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + + // Search models + 'gpt-4o-search-preview': OpenAIBaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + 'gpt-4o-mini-search-preview': OpenAIBaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + + // Special models + 'computer-use-preview': OpenAIBaseOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions +} diff --git a/packages/typescript/ai-openai/src/openai-adapter.ts b/packages/typescript/ai-openai/src/openai-adapter.ts index a77d8d586..ef2dc4fc7 100644 --- a/packages/typescript/ai-openai/src/openai-adapter.ts +++ b/packages/typescript/ai-openai/src/openai-adapter.ts @@ -1,119 +1,46 @@ -import OpenAI_SDK from "openai"; -import { - BaseAdapter, - type ChatCompletionOptions, - type ChatCompletionResult, - type SummarizationOptions, - type SummarizationResult, - type 
EmbeddingOptions, - type EmbeddingResult, - type ModelMessage, - type Tool, - StreamChunk, -} from "@tanstack/ai"; -import { - OPENAI_CHAT_MODELS, - OPENAI_EMBEDDING_MODELS, - type OpenAIChatModelProviderOptionsByName, -} from "./model-meta"; +import OpenAI_SDK from 'openai' +import { BaseAdapter } from '@tanstack/ai' +import { OPENAI_CHAT_MODELS, OPENAI_EMBEDDING_MODELS } from './model-meta' import { convertMessagesToInput, + validateTextProviderOptions, +} from './text/text-provider-options' +import { convertToolsToProviderFormat } from './tools' +import type { + ChatOptions, + EmbeddingOptions, + EmbeddingResult, + StreamChunk, + SummarizationOptions, + SummarizationResult, +} from '@tanstack/ai' +import type { OpenAIChatModelProviderOptionsByName } from './model-meta' +import type { ExternalTextProviderOptions, InternalTextProviderOptions, -} from "./text/text-provider-options"; -import { convertToolsToProviderFormat } from "./tools"; +} from './text/text-provider-options' export interface OpenAIConfig { - apiKey: string; - organization?: string; - baseURL?: string; + apiKey: string + organization?: string + baseURL?: string } /** * Alias for TextProviderOptions */ -export type OpenAIProviderOptions = ExternalTextProviderOptions; - -/** - * OpenAI-specific provider options for image generation - * Based on OpenAI Images API documentation - * @see https://platform.openai.com/docs/api-reference/images/create - */ -export interface OpenAIImageProviderOptions { - /** Image quality: 'standard' | 'hd' (dall-e-3, gpt-image-1 only) */ - quality?: "standard" | "hd"; - /** Image style: 'natural' | 'vivid' (dall-e-3 only) */ - style?: "natural" | "vivid"; - /** Background: 'transparent' | 'opaque' (gpt-image-1 only) */ - background?: "transparent" | "opaque"; - /** Output format: 'png' | 'webp' | 'jpeg' (gpt-image-1 only) */ - outputFormat?: "png" | "webp" | "jpeg"; -} +export type OpenAIProviderOptions = ExternalTextProviderOptions /** * OpenAI-specific provider options 
for embeddings * Based on OpenAI Embeddings API documentation * @see https://platform.openai.com/docs/api-reference/embeddings/create */ -export interface OpenAIEmbeddingProviderOptions { +interface OpenAIEmbeddingProviderOptions { /** Encoding format for embeddings: 'float' | 'base64' */ - encodingFormat?: "float" | "base64"; + encodingFormat?: 'float' | 'base64' /** Unique identifier for end-user (for abuse monitoring) */ - user?: string; -} - -/** - * OpenAI-specific provider options for audio transcription - * Based on OpenAI Audio API documentation - * @see https://platform.openai.com/docs/api-reference/audio/createTranscription - */ -export interface OpenAIAudioTranscriptionProviderOptions { - /** Timestamp granularities: 'word' | 'segment' (whisper-1 only) */ - timestampGranularities?: Array<"word" | "segment">; - /** Chunking strategy for long audio (gpt-4o-transcribe-diarize): 'auto' or VAD config */ - chunkingStrategy?: - | "auto" - | { - type: "vad"; - threshold?: number; - prefix_padding_ms?: number; - silence_duration_ms?: number; - }; - /** Known speaker names for diarization (gpt-4o-transcribe-diarize) */ - knownSpeakerNames?: string[]; - /** Known speaker reference audio as data URLs (gpt-4o-transcribe-diarize) */ - knownSpeakerReferences?: string[]; - /** Whether to enable streaming (gpt-4o-transcribe, gpt-4o-mini-transcribe only) */ - stream?: boolean; - /** Include log probabilities (gpt-4o-transcribe, gpt-4o-mini-transcribe only) */ - logprobs?: boolean; -} - -/** - * OpenAI-specific provider options for text-to-speech - * Based on OpenAI Audio API documentation - * @see https://platform.openai.com/docs/api-reference/audio/createSpeech - */ -export interface OpenAITextToSpeechProviderOptions { - // Currently no OpenAI-specific text-to-speech options beyond the common SDK surface. 
-} - -/** - * Combined audio provider options (transcription + text-to-speech) - */ -export type OpenAIAudioProviderOptions = - OpenAIAudioTranscriptionProviderOptions & OpenAITextToSpeechProviderOptions; - -/** - * OpenAI-specific provider options for video generation - * Based on OpenAI Video API documentation - * @see https://platform.openai.com/docs/guides/video-generation - */ -export interface OpenAIVideoProviderOptions { - /** Input reference image (File, Blob, or Buffer) for first frame */ - inputReference?: File | Blob | Buffer; - /** Remix video ID to modify an existing video */ - remixVideoId?: string; + user?: string } export class OpenAI extends BaseAdapter< @@ -123,102 +50,96 @@ export class OpenAI extends BaseAdapter< OpenAIEmbeddingProviderOptions, OpenAIChatModelProviderOptionsByName > { - name = "openai" as const; - models = OPENAI_CHAT_MODELS; - embeddingModels = OPENAI_EMBEDDING_MODELS; + name = 'openai' as const + models = OPENAI_CHAT_MODELS + embeddingModels = OPENAI_EMBEDDING_MODELS - private client: OpenAI_SDK; + private client: OpenAI_SDK // Type-only map used by core AI to infer per-model provider options. // This is never set at runtime; it exists purely for TypeScript. // Using definite assignment assertion (!) since this is type-only. 
// @ts-ignore - We never assign this at runtime and it's only used for types - _modelProviderOptionsByName: OpenAIChatModelProviderOptionsByName; + _modelProviderOptionsByName: OpenAIChatModelProviderOptionsByName constructor(config: OpenAIConfig) { - super({}); + super({}) this.client = new OpenAI_SDK({ apiKey: config.apiKey, organization: config.organization, baseURL: config.baseURL, - }); + }) } async *chatStream( - options: ChatCompletionOptions + options: ChatOptions, ): AsyncIterable<StreamChunk> { // Track tool call metadata by unique ID // OpenAI streams tool calls with deltas - first chunk has ID/name, subsequent chunks only have args // We assign our own indices as we encounter unique tool call IDs - const toolCallMetadata = new Map(); - - // Use Chat Completions API (standard, well-established API with reliable tool call support) - // instead of Responses API which has issues with argument parsing - const messages = this.convertMessagesToChatCompletionsFormat( - options.messages - ); - const tools = options.tools - ? 
this.convertToolsToChatCompletionsFormat([...options.tools]) - : undefined; + const toolCallMetadata = new Map() + const requestArguments = this.mapChatOptionsToOpenAI(options) - const response = await this.client.chat.completions.create( - { - model: options.model, - messages, - tools, - tool_choice: options.options?.toolChoice as any, - temperature: options.options?.temperature, - max_tokens: options.options?.maxTokens, - top_p: options.options?.topP, - stream: true, - }, - { - headers: options.request?.headers, - signal: options.request?.signal, - } - ); - - // Chat Completions API uses SSE format - iterate directly - yield* this.processChatCompletionsStream( - response, - toolCallMetadata, - options, - () => this.generateId() - ); + try { + const response = await this.client.responses.create( + { + ...requestArguments, + stream: true, + }, + { + headers: options.request?.headers, + signal: options.request?.signal, + }, + ) + + // The Responses API streams typed SSE events - iterate directly + yield* this.processOpenAIStreamChunks( + response, + toolCallMetadata, + options, + () => this.generateId(), + ) + } catch (error: any) { + console.error('chatStream: fatal error during response creation:', error) + throw error + } } async summarize(options: SummarizationOptions): Promise<SummarizationResult> { - const systemPrompt = this.buildSummarizationPrompt(options); + const systemPrompt = this.buildSummarizationPrompt(options) const response = await this.client.chat.completions.create({ - model: options.model || "gpt-3.5-turbo", + model: options.model || 'gpt-3.5-turbo', messages: [ - { role: "system", content: systemPrompt }, - { role: "user", content: options.text }, + { role: 'system', content: systemPrompt }, + { role: 'user', content: options.text }, ], max_tokens: options.maxLength, temperature: 0.3, stream: false, - }); + }) return { id: 
response.id, model: response.model, - summary: response.choices[0].message.content || "", + summary: response.choices[0]?.message.content || '', usage: { promptTokens: response.usage?.prompt_tokens || 0, completionTokens: response.usage?.completion_tokens || 0, totalTokens: response.usage?.total_tokens || 0, }, - }; + } } async createEmbeddings(options: EmbeddingOptions): Promise<EmbeddingResult> { const response = await this.client.embeddings.create({ - model: options.model || "text-embedding-ada-002", + model: options.model || 'text-embedding-ada-002', input: options.input, dimensions: options.dimensions, - }); + }) return { id: this.generateId(),
`; + prompt += `Keep the summary under ${options.maxLength} tokens. ` } - return prompt; - } - - private mapOpenAIResponseToChatResult( - response: OpenAI_SDK.Responses.Response - ): ChatCompletionResult { - // response.output is an array of output items - const outputItems = response.output; - - // Find the message output item - const messageItem = outputItems.find((item) => item.type === "message"); - const content = - messageItem?.content?.[0].type === "output_text" - ? messageItem?.content?.[0]?.text || "" - : ""; - - // Find function call items - const functionCalls = outputItems.filter( - (item) => item.type === "function_call" - ); - const toolCalls = - functionCalls.length > 0 - ? functionCalls.map((fc) => ({ - id: fc.call_id, - type: "function" as const, - function: { - name: fc.name, - arguments: JSON.stringify(fc.arguments), - }, - })) - : undefined; - - return { - id: response.id, - model: response.model, - content, - role: "assistant", - finishReason: messageItem?.status, - toolCalls, - usage: { - promptTokens: response.usage?.input_tokens || 0, - completionTokens: response.usage?.output_tokens || 0, - totalTokens: response.usage?.total_tokens || 0, - }, - }; - } - - /** - * Convert tools to Chat Completions API format - */ - private convertToolsToChatCompletionsFormat( - tools: Array - ): Array { - return tools.map((tool) => { - // Chat Completions API uses simpler format: { type: "function", function: { name, description, parameters } } - return { - type: "function" as const, - function: { - name: tool.function.name, - description: tool.function.description || "", - parameters: tool.function.parameters || {}, - }, - }; - }); + return prompt } - /** - * Convert messages to Chat Completions API format - */ - private convertMessagesToChatCompletionsFormat( - messages: ModelMessage[] - ): Array { - const result: Array = - []; - - for (const message of messages) { - // Handle tool messages - convert to tool role - if (message.role === "tool") { - 
result.push({ - role: "tool", - tool_call_id: message.toolCallId || "", - content: - typeof message.content === "string" - ? message.content - : JSON.stringify(message.content), - }); - continue; - } - - // Handle assistant messages with tool calls - if (message.role === "assistant") { - const assistantMsg: OpenAI_SDK.Chat.Completions.ChatCompletionAssistantMessageParam = - { - role: "assistant", - content: message.content || null, - }; - - if (message.toolCalls && message.toolCalls.length > 0) { - assistantMsg.tool_calls = message.toolCalls.map((tc) => ({ - id: tc.id, - type: "function", - function: { - name: tc.function.name, - arguments: - typeof tc.function.arguments === "string" - ? tc.function.arguments - : JSON.stringify(tc.function.arguments || {}), - }, - })); - } - - result.push(assistantMsg); - continue; - } - - // Handle user and system messages - if (message.role === "user") { - result.push({ - role: "user", - content: - typeof message.content === "string" - ? message.content - : JSON.stringify(message.content), - }); - } else if (message.role === "system") { - result.push({ - role: "system", - content: - typeof message.content === "string" - ? 
message.content - : JSON.stringify(message.content), - }); - } - } - - return result; - } - - /** - * Process Chat Completions API stream (SSE format) - * This is the standard, well-established API with reliable tool call support - */ - private async *processChatCompletionsStream( - stream: AsyncIterable, - toolCallMetadata: Map, - options: ChatCompletionOptions, - generateId: () => string - ): AsyncIterable { - let accumulatedContent = ""; - const timestamp = Date.now(); - let nextIndex = 0; - - // Track accumulated function call arguments by call_id - const accumulatedFunctionCallArguments = new Map(); - - let responseId: string | null = null; - let model: string | null = null; - - try { - for await (const chunk of stream) { - // Preserve response metadata - if (chunk.id) responseId = chunk.id; - if (chunk.model) model = chunk.model; - - const choice = chunk.choices?.[0]; - if (!choice) continue; - - const delta = choice.delta; - const finishReason = choice.finish_reason; - - // Handle content delta - if (delta?.content) { - accumulatedContent += delta.content; - yield { - type: "content" as const, - id: responseId || generateId(), - model: model || options.model || "gpt-4o", - timestamp, - delta: delta.content, - content: accumulatedContent, - role: "assistant" as const, - }; - } - - // Handle tool calls - if (delta?.tool_calls) { - for (const toolCall of delta.tool_calls) { - let toolCallId: string; - let toolCallName: string; - let toolCallArgs: string; - let actualIndex: number; - - if (toolCall.id) { - // First chunk with ID and name - toolCallId = toolCall.id; - toolCallName = toolCall.function?.name || ""; - toolCallArgs = toolCall.function?.arguments || ""; - - // Track for index assignment - if (!toolCallMetadata.has(toolCallId)) { - toolCallMetadata.set(toolCallId, { - index: nextIndex++, - name: toolCallName, - }); - accumulatedFunctionCallArguments.set(toolCallId, ""); - } - const meta = toolCallMetadata.get(toolCallId)!; - actualIndex = meta.index; - 
- // Track the delta for this chunk - if (toolCallArgs) { - const current = - accumulatedFunctionCallArguments.get(toolCallId) || ""; - accumulatedFunctionCallArguments.set( - toolCallId, - current + toolCallArgs - ); - } - } else { - // Delta chunk - find by index - const openAIIndex = - typeof toolCall.index === "number" ? toolCall.index : 0; - const entry = Array.from(toolCallMetadata.entries())[openAIIndex]; - if (entry) { - const [id, meta] = entry; - toolCallId = id; - toolCallName = meta.name; - actualIndex = meta.index; - toolCallArgs = toolCall.function?.arguments || ""; - - // Track the delta - if (toolCallArgs) { - const current = - accumulatedFunctionCallArguments.get(toolCallId) || ""; - accumulatedFunctionCallArguments.set( - toolCallId, - current + toolCallArgs - ); - } - } else { - // Fallback - toolCallId = `call_${Date.now()}`; - toolCallName = ""; - actualIndex = openAIIndex; - toolCallArgs = ""; - } - } - - // Emit the tool call chunk - // For chunks with ID (first chunk), always emit to register the tool call - // For delta chunks, only emit if there are arguments to add - // The ToolCallManager will accumulate the argument deltas for us - if (toolCall.id || toolCallArgs) { - yield { - type: "tool_call", - id: responseId || generateId(), - model: model || options.model || "gpt-4o", - timestamp, - toolCall: { - id: toolCallId, - type: "function", - function: { - name: toolCallName, - arguments: toolCallArgs, // Only the delta, not accumulated - }, - }, - index: actualIndex, - }; - } - } - } - - // Handle completion - if (finishReason) { - yield { - type: "done" as const, - id: responseId || generateId(), - model: model || options.model || "gpt-4o", - timestamp, - finishReason: finishReason as any, - usage: chunk.usage - ? 
{ - promptTokens: chunk.usage.prompt_tokens || 0, - completionTokens: chunk.usage.completion_tokens || 0, - totalTokens: chunk.usage.total_tokens || 0, - } - : undefined, - }; - } - } - } catch (error: any) { - yield { - type: "error", - id: generateId(), - model: options.model || "gpt-3.5-turbo", - timestamp, - error: { - message: error.message || "Unknown error occurred", - code: error.code, - }, - }; - } - } - - /** - * Parse Responses API stream - it's JSON lines (not SSE format) - * Each line is a complete JSON object - */ - private async *parseResponsesStream( - stream: ReadableStream - ): AsyncIterable { - const reader = stream.getReader(); - const decoder = new TextDecoder(); - let buffer = ""; - let parsedCount = 0; - - try { - while (true) { - const { done, value } = await reader.read(); - if (done) break; - - // Decode the chunk and add to buffer - buffer += decoder.decode(value, { stream: true }); - - // Process complete lines (newline-separated JSON objects) - const lines = buffer.split("\n"); - buffer = lines.pop() || ""; // Keep incomplete line in buffer - - for (const line of lines) { - const trimmed = line.trim(); - if (!trimmed) continue; - - try { - const parsed = JSON.parse(trimmed); - parsedCount++; - - // Debug: Log reasoning-related events at the parser level - if ( - parsed.type && - (parsed.type.includes("reasoning") || - parsed.type.includes("reasoning_text")) - ) { - console.log( - "[OpenAI Adapter] Parser: Reasoning event detected:", - { - type: parsed.type, - hasDelta: !!parsed.delta, - hasItem: !!parsed.item, - hasPart: !!parsed.part, - fullEvent: JSON.stringify(parsed).substring(0, 500), - } - ); - } - - yield parsed; - } catch (e) { - // Skip malformed JSON lines - console.log( - "[OpenAI Adapter] Parser: Failed to parse line:", - trimmed.substring(0, 200) - ); - } - } - } - - // Process any remaining buffer - if (buffer.trim()) { - try { - const parsed = JSON.parse(buffer.trim()); - parsedCount++; - yield parsed; - } catch (e) { - 
// Ignore parse errors for final buffer - } - } - } finally { - reader.releaseLock(); - } - } - - // TODO proper type is AsyncIterable private async *processOpenAIStreamChunks( - stream: AsyncIterable, + stream: AsyncIterable, toolCallMetadata: Map, - options: ChatCompletionOptions, - generateId: () => string + options: ChatOptions, + generateId: () => string, ): AsyncIterable { - let accumulatedContent = ""; - let accumulatedReasoning = ""; - const timestamp = Date.now(); - let nextIndex = 0; - let chunkCount = 0; + let accumulatedContent = '' + let accumulatedReasoning = '' + const timestamp = Date.now() + // let nextIndex = 0 + let chunkCount = 0 // Track accumulated function call arguments by call_id - const accumulatedFunctionCallArguments = new Map(); + const accumulatedFunctionCallArguments = new Map() // Map item_id (from delta events) to call_id (from function_call items) - const itemIdToCallId = new Map(); + // const itemIdToCallId = new Map() // Preserve response metadata across events - let responseId: string | null = null; - let model: string | null = null; - let doneChunkEmitted = false; - const eventTypeCounts = new Map(); + let responseId: string | null = null + let model: string = options.model + + const eventTypeCounts = new Map() // Track which item indices are reasoning items - const reasoningItemIndices = new Set(); + // const reasoningItemIndices = new Set() try { for await (const chunk of stream) { - chunkCount++; - - // Track event types for debugging - if (chunk.type) { - const count = eventTypeCounts.get(chunk.type) || 0; - eventTypeCounts.set(chunk.type, count + 1); - - // Log first occurrence of each event type - if (count === 0) { - console.log( - "[OpenAI Adapter] New event type detected:", - chunk.type - ); - } - } - - // Responses API uses event-based streaming with types like: - // - response.created - // - response.in_progress - // - response.output_item.added - // - response.output_text.delta - // - 
response.function_call_arguments.delta - // - response.function_call_arguments.done - // - response.done - - let delta: any = null; - let finishReason: string | null = null; - - // Handle Responses API event format - if (chunk.type) { - const eventType = chunk.type; - - // Debug: Log all event types to help diagnose reasoning events - if ( - eventType.includes("reasoning") || - eventType.includes("output_reasoning") - ) { - console.log("[OpenAI Adapter] Reasoning-related event detected:", { - eventType, - hasDelta: !!chunk.delta, - deltaType: typeof chunk.delta, - deltaIsArray: Array.isArray(chunk.delta), - hasItem: !!chunk.item, - itemType: chunk.item?.type, - hasPart: !!chunk.part, - partType: chunk.part?.type, - }); - } - - // Debug: Inspect content_part events - reasoning might come through here - if ( - eventType === "response.content_part.added" || - eventType === "response.content_part.done" - ) { - console.log("[OpenAI Adapter] Content part event:", { - eventType, - hasPart: !!chunk.part, - partType: chunk.part?.type, - partContentType: chunk.part?.content_type, - hasText: !!chunk.part?.text, - textLength: chunk.part?.text?.length || 0, - hasDelta: !!chunk.delta, - deltaType: typeof chunk.delta, - itemIndex: chunk.item_index, - partIndex: chunk.part_index, - fullPart: JSON.stringify(chunk.part).substring(0, 200), // First 200 chars - }); - } - - // Debug: Inspect ALL output_item.added events - if (eventType === "response.output_item.added" && chunk.item) { - const item = chunk.item; - const itemIndex = chunk.item_index; - - // Track reasoning items by index - if (item.type === "reasoning" && itemIndex !== undefined) { - reasoningItemIndices.add(itemIndex); - console.log( - "[OpenAI Adapter] Reasoning item detected, tracking index:", - itemIndex - ); - } - - console.log("[OpenAI Adapter] Output item added:", { - itemType: item.type, - itemIndex, - itemId: item.id, - hasContent: !!item.content, - contentIsArray: Array.isArray(item.content), - contentLength: 
Array.isArray(item.content) - ? item.content.length - : 0, - hasSummary: !!item.summary, - summaryIsArray: Array.isArray(item.summary), - summaryLength: Array.isArray(item.summary) - ? item.summary.length - : 0, - chunkKeys: Object.keys(chunk), - }); - - if (item.type === "message" && item.content) { - const contentTypes = item.content.map((c: any) => c.type); - console.log( - "[OpenAI Adapter] Output item added (message details):", - { - itemType: item.type, - contentTypes, - hasReasoning: contentTypes.includes("output_reasoning"), - contentDetails: item.content.map((c: any) => ({ - type: c.type, - hasText: !!c.text, - textLength: c.text?.length || 0, - })), - } - ); - } else if (item.type !== "message") { - // Log non-message items - maybe reasoning comes as a different item type? - console.log("[OpenAI Adapter] Output item added (non-message):", { - itemType: item.type, - fullItem: JSON.stringify(item).substring(0, 500), // First 500 chars - }); - } - } - - // Extract and preserve response metadata from response.created or response.in_progress - if (chunk.response) { - responseId = chunk.response.id; - model = chunk.response.model; - } - - // Handle output text deltas (content streaming) - // For response.output_text.delta, chunk.delta is an array of characters/strings - if (eventType === "response.output_text.delta" && chunk.delta) { - // Delta is an array of characters/strings - join them together - if (Array.isArray(chunk.delta)) { - const textDelta = chunk.delta.join(""); - if (textDelta) { - delta = { content: textDelta }; - } - } else if (typeof chunk.delta === "string") { - // Fallback: if it's already a string - delta = { content: chunk.delta }; - } - } - - // Handle function call argument deltas - // response.function_call_arguments.delta events contain incremental argument updates - // Note: delta events use item_id, not call_id, and we need to map item_id to call_id - if ( - eventType === "response.function_call_arguments.delta" && - chunk.delta !== 
undefined && - chunk.item_id - ) { - // Find the call_id by looking up the item_id in the output items - // We need to track item_id -> call_id mapping - // For now, we'll look it up from the item if available, or use a reverse lookup - let callId: string | undefined; - - // Try to find call_id from item_id by checking if we have the item stored - // The item_id corresponds to the function_call item's id - // We need to maintain a mapping: item_id -> call_id - // For now, we'll use a workaround: check if delta events come after output_item.added - // and use the most recently added function call's call_id - - // Actually, we should track item_id -> call_id when we see output_item.added - // For now, let's use the item_id as a fallback and try to find the call_id - // by checking accumulated items or using a reverse lookup - - // Better approach: track item_id -> call_id mapping when we see output_item.added - const itemId = chunk.item_id; - - callId = itemIdToCallId.get(itemId); - - if (callId) { - const currentArgs = - accumulatedFunctionCallArguments.get(callId) || ""; - // Delta is a JSON string fragment - append it to accumulate - let deltaText: string; - if (typeof chunk.delta === "string") { - deltaText = chunk.delta; - } else if (Array.isArray(chunk.delta)) { - // Delta might be an array of characters/strings - deltaText = chunk.delta.join(""); - } else { - deltaText = JSON.stringify(chunk.delta); - } - - const newArgs = currentArgs + deltaText; - accumulatedFunctionCallArguments.set(callId, newArgs); - - // Debug: log delta accumulation for recommendGuitar - if (process.env.DEBUG_TOOL_ARGS) { - const meta = toolCallMetadata.get(callId); - if (meta?.name === "recommendGuitar") { - console.log( - `[DEBUG] Delta for ${callId}:`, - JSON.stringify(deltaText), - "-> Accumulated:", - newArgs - ); - } - } - - // Emit updated tool call with accumulated arguments - const meta = toolCallMetadata.get(callId); - if (meta) { - delta = delta || {}; - delta.tool_calls = [ 
- { - id: callId, - function: { - name: meta.name, - arguments: newArgs, - }, - }, - ]; - } - } - } - - // Handle function call arguments done (complete arguments) - // response.function_call_arguments.done events contain the final complete arguments - if ( - eventType === "response.function_call_arguments.done" && - chunk.item_id - ) { - const itemId = chunk.item_id; - const callId = itemIdToCallId.get(itemId); - - if (callId) { - // Prefer accumulated arguments from deltas over the done event's arguments - // The done event might have incomplete or empty arguments - let completeArgs: string = - accumulatedFunctionCallArguments.get(callId) || ""; - - // If we don't have accumulated args, try the done event's arguments - if (!completeArgs || completeArgs === "") { - if ( - typeof chunk.arguments === "string" && - chunk.arguments !== "{}" - ) { - completeArgs = chunk.arguments; - } else if ( - chunk.arguments && - typeof chunk.arguments === "object" && - Object.keys(chunk.arguments).length > 0 - ) { - // If it's a non-empty object, stringify it - completeArgs = JSON.stringify(chunk.arguments); - } else { - // Fallback to empty object - completeArgs = "{}"; - } - } - - accumulatedFunctionCallArguments.set(callId, completeArgs); - - // Emit final tool call with complete arguments - const meta = toolCallMetadata.get(callId); - if (meta) { - delta = delta || {}; - delta.tool_calls = [ - { - id: callId, - function: { - name: meta.name, - arguments: completeArgs, - }, - }, - ]; - } - } - } - - // Handle reasoning text deltas (reasoning content streaming) - // OpenAI uses response.reasoning_text.delta events for reasoning content - if (eventType === "response.reasoning_text.delta" && chunk.delta) { - // Delta is an array of characters/strings - join them together - let reasoningDelta = ""; - if (Array.isArray(chunk.delta)) { - reasoningDelta = chunk.delta.join(""); - } else if (typeof chunk.delta === "string") { - reasoningDelta = chunk.delta; - } - - if (reasoningDelta) { 
- accumulatedReasoning += reasoningDelta; - const thinkingChunk: StreamChunk = { - type: "thinking" as const, - id: responseId || generateId(), - model: model || options.model || "gpt-4o", - timestamp, - delta: reasoningDelta, - content: accumulatedReasoning, - }; - console.log( - "[OpenAI Adapter] Emitting thinking chunk (reasoning_text.delta):", - { - eventType, - deltaLength: reasoningDelta.length, - accumulatedLength: accumulatedReasoning.length, - chunkType: thinkingChunk.type, - } - ); - yield thinkingChunk; + chunkCount++ + const handleContentPart = ( + contentPart: + | OpenAI_SDK.Responses.ResponseOutputText + | OpenAI_SDK.Responses.ResponseOutputRefusal + | OpenAI_SDK.Responses.ResponseContentPartAddedEvent.ReasoningText, + ): StreamChunk => { + if (contentPart.type === 'output_text') { + accumulatedContent += contentPart.text + return { + type: 'content', + id: responseId || generateId(), + model: model || options.model, + timestamp, + delta: contentPart.text, + content: accumulatedContent, + role: 'assistant', } } - // Also handle the old format for backwards compatibility - if (eventType === "response.output_reasoning.delta" && chunk.delta) { - let reasoningDelta = ""; - if (Array.isArray(chunk.delta)) { - reasoningDelta = chunk.delta.join(""); - } else if (typeof chunk.delta === "string") { - reasoningDelta = chunk.delta; - } - - if (reasoningDelta) { - accumulatedReasoning += reasoningDelta; - const thinkingChunk: StreamChunk = { - type: "thinking" as const, - id: responseId || generateId(), - model: model || options.model || "gpt-4o", - timestamp, - delta: reasoningDelta, - content: accumulatedReasoning, - }; - console.log( - "[OpenAI Adapter] Emitting thinking chunk (output_reasoning.delta):", - { - eventType, - deltaLength: reasoningDelta.length, - accumulatedLength: accumulatedReasoning.length, - chunkType: thinkingChunk.type, - } - ); - yield thinkingChunk; + if (contentPart.type === 'reasoning_text') { + accumulatedReasoning 
+= contentPart.text + return { + type: 'thinking', + id: responseId || generateId(), + model: model || options.model, + timestamp, + delta: contentPart.text, + content: accumulatedReasoning, } } - - // Handle content part events - reasoning might come through content parts - // Note: Content parts can belong to reasoning items (check item_index) - if (eventType === "response.content_part.added" && chunk.part) { - const part = chunk.part; - const itemIndex = chunk.item_index; - - // Check if this content part belongs to a reasoning item - const belongsToReasoningItem = - itemIndex !== undefined && reasoningItemIndices.has(itemIndex); - - // Check if this is a reasoning content part - const isReasoningPart = - part.type === "output_reasoning" || - part.content_type === "reasoning" || - part.type === "reasoning_text" || - part.type === "reasoning" || - belongsToReasoningItem; - - if (isReasoningPart) { - const reasoningText = part.text || ""; - if (reasoningText) { - accumulatedReasoning += reasoningText; - const thinkingChunk: StreamChunk = { - type: "thinking" as const, - id: responseId || generateId(), - model: model || options.model || "gpt-4o", - timestamp, - delta: reasoningText, - content: accumulatedReasoning, - }; - console.log( - "[OpenAI Adapter] Emitting thinking chunk (from content_part):", - { - eventType, - partType: part.type, - contentType: part.content_type, - itemIndex, - belongsToReasoningItem, - textLength: reasoningText.length, - accumulatedLength: accumulatedReasoning.length, - } - ); - yield thinkingChunk; - } - } + return { + type: 'error', + id: responseId || generateId(), + model: model || options.model, + timestamp, + error: { + message: contentPart.refusal, + }, } - - // Handle output item added (new items like function calls or complete messages) - if (eventType === "response.output_item.added" && chunk.item) { - const item = chunk.item; - if (item.type === "function_call") { - // Initialize arguments accumulator for this call - const 
callId = item.call_id; - const itemId = item.id; // The item's id (used in delta events) - - // Map item_id to call_id for delta event lookups - if (itemId && callId) { - itemIdToCallId.set(itemId, callId); - } - - const initialArgs = item.arguments - ? typeof item.arguments === "string" - ? item.arguments - : JSON.stringify(item.arguments) - : ""; - accumulatedFunctionCallArguments.set(callId, initialArgs); - - // Track metadata immediately so delta/done events can use it - if (!toolCallMetadata.has(callId)) { - toolCallMetadata.set(callId, { - index: nextIndex++, - name: item.name, - }); - } - - delta = delta || {}; - delta.tool_calls = [ - { - id: callId, - function: { - name: item.name, - arguments: initialArgs, - }, - }, - ]; - } else if (item.type === "message") { - // Extract content from message item - if (item.content && Array.isArray(item.content)) { - const textContent = item.content.find( - (c: any) => c.type === "output_text" - ); - if (textContent?.text) { - // For message items added, the text might be incremental or complete - // We'll treat it as a delta and accumulate - const newContent = textContent.text; - // If the new content is longer than accumulated, it's likely the full content - // Otherwise, it's a delta - if ( - newContent.length > accumulatedContent.length || - !accumulatedContent - ) { - delta = { content: newContent }; - } else { - // It's a delta - extract just the new part - const deltaText = newContent.slice( - accumulatedContent.length - ); - if (deltaText) { - delta = { content: deltaText }; - } - } - } - - // Extract reasoning content from message item - const reasoningContent = item.content.find( - (c: any) => c.type === "output_reasoning" - ); - if (reasoningContent?.text) { - // Reasoning content comes as complete text in message items - accumulatedReasoning = reasoningContent.text; - const thinkingChunk: StreamChunk = { - type: "thinking" as const, - id: responseId || generateId(), - model: model || options.model || 
"gpt-4o", - timestamp, - content: accumulatedReasoning, - }; - console.log( - "[OpenAI Adapter] Emitting thinking chunk (from message item):", - { - eventType: "response.output_item.added", - contentLength: accumulatedReasoning.length, - chunkType: thinkingChunk.type, - hasDelta: false, - } - ); - yield thinkingChunk; - } - } - // Only set finish reason if status indicates completion (not "in_progress") - if (item.status && item.status !== "in_progress") { - finishReason = item.status; - } + } + // handle general response events + if ( + chunk.type === 'response.created' || + chunk.type === 'response.incomplete' || + chunk.type === 'response.failed' + ) { + responseId = chunk.response.id + model = chunk.response.model + if (chunk.response.error) { + yield { + type: 'error', + id: chunk.response.id, + model: chunk.response.model, + timestamp, + error: chunk.response.error, } } - - // Handle output item done - check for both function calls and reasoning items - if (eventType === "response.output_item.done" && chunk.item) { - const item = chunk.item; - - // Handle function call items - check if it has final arguments - if (item.type === "function_call" && item.call_id) { - const callId = item.call_id; - // The item might have the final arguments as a string - if ( - item.arguments && - typeof item.arguments === "string" && - item.arguments !== "{}" - ) { - const finalArgs = item.arguments; - accumulatedFunctionCallArguments.set(callId, finalArgs); - - // Emit final tool call with complete arguments - const meta = toolCallMetadata.get(callId); - if (meta) { - delta = delta || {}; - delta.tool_calls = [ - { - id: callId, - function: { - name: meta.name, - arguments: finalArgs, - }, - }, - ]; - } - } - } - - // Handle reasoning items - reasoning content might be available when item completes - if (item.type === "reasoning") { - // Check if reasoning item has content/text/summary when it's done - console.log("[OpenAI Adapter] Reasoning item done:", { - itemId: item.id, - 
hasContent: !!item.content, - contentType: typeof item.content, - hasText: !!item.text, - textLength: item.text?.length || 0, - hasSummary: !!item.summary, - summaryType: typeof item.summary, - summaryIsArray: Array.isArray(item.summary), - summaryLength: Array.isArray(item.summary) - ? item.summary.length - : 0, - summaryContent: Array.isArray(item.summary) - ? item.summary - : item.summary, - fullItem: JSON.stringify(item).substring(0, 1000), // More chars to see summary - }); - - // If reasoning item has text content when done, emit it - if (item.text) { - accumulatedReasoning = item.text; - const thinkingChunk: StreamChunk = { - type: "thinking" as const, - id: responseId || generateId(), - model: model || options.model || "gpt-4o", - timestamp, - content: accumulatedReasoning, - }; - console.log( - "[OpenAI Adapter] Emitting thinking chunk (from reasoning item done - text):", - { - textLength: item.text.length, - } - ); - yield thinkingChunk; - } - - // Check if summary contains reasoning text (summary might be an array of text chunks) - if (Array.isArray(item.summary) && item.summary.length > 0) { - // Summary might be an array of text strings or objects with text/content - const summaryText = item.summary - .map((s: any) => - typeof s === "string" - ? 
s - : s?.text || s?.content || JSON.stringify(s) - ) - .join(""); - if (summaryText) { - accumulatedReasoning = summaryText; - const thinkingChunk: StreamChunk = { - type: "thinking" as const, - id: responseId || generateId(), - model: model || options.model || "gpt-4o", - timestamp, - content: accumulatedReasoning, - }; - console.log( - "[OpenAI Adapter] Emitting thinking chunk (from reasoning item done - summary):", - { - summaryLength: summaryText.length, - } - ); - yield thinkingChunk; - } - } else if (typeof item.summary === "string" && item.summary) { - accumulatedReasoning = item.summary; - const thinkingChunk: StreamChunk = { - type: "thinking" as const, - id: responseId || generateId(), - model: model || options.model || "gpt-4o", - timestamp, - content: accumulatedReasoning, - }; - console.log( - "[OpenAI Adapter] Emitting thinking chunk (from reasoning item done - summary string):", - { - summaryLength: item.summary.length, - } - ); - yield thinkingChunk; - } + if (chunk.response.incomplete_details) { + yield { + type: 'error', + id: chunk.response.id, + model: chunk.response.model, + timestamp, + error: { + message: chunk.response.incomplete_details.reason ?? '', + }, } } + } + // handle content_part added events for text, reasoning and refusals + if (chunk.type === 'response.content_part.added') { + const contentPart = chunk.part + yield handleContentPart(contentPart) + } - // Handle response done - if (eventType === "response.done") { - // If we have tool calls, the finish reason should be "tool_calls" - // Otherwise, it's a normal completion with "stop" - finishReason = toolCallMetadata.size > 0 ? 
"tool_calls" : "stop"; - } - } else if (chunk.output && Array.isArray(chunk.output)) { - // Legacy Responses API format with output array - const messageItem = chunk.output.find( - (item: any) => item.type === "message" - ); - const functionCallItems = chunk.output.filter( - (item: any) => item.type === "function_call" - ); + if (chunk.type === 'response.content_part.done') { + const contentPart = chunk.part - if (messageItem?.content) { - const textContent = messageItem.content.find( - (c: any) => c.type === "output_text" - ); - if (textContent?.text) { - delta = { content: textContent.text }; - } + yield handleContentPart(contentPart) + } - // Extract reasoning content from legacy format - const reasoningContent = messageItem.content.find( - (c: any) => c.type === "output_reasoning" - ); - if (reasoningContent?.text) { - accumulatedReasoning = reasoningContent.text; - const thinkingChunk: StreamChunk = { - type: "thinking" as const, - id: responseId || chunk.id || generateId(), - model: model || chunk.model || options.model || "gpt-4o", - timestamp, - content: accumulatedReasoning, - }; - console.log( - "[OpenAI Adapter] Emitting thinking chunk (legacy format):", - { - format: "legacy", - contentLength: accumulatedReasoning.length, - chunkType: thinkingChunk.type, - } - ); - yield thinkingChunk; - } + if (chunk.type === 'response.function_call_arguments.done') { + const { name, item_id, output_index } = chunk + if (!toolCallMetadata.has(item_id)) { + toolCallMetadata.set(item_id, { + index: output_index, + name: name, + }) + accumulatedFunctionCallArguments.set(item_id, '') } - - if (functionCallItems.length > 0) { - delta = delta || {}; - delta.tool_calls = functionCallItems.map((fc: any) => ({ - id: fc.call_id, + yield { + type: 'tool_call', + id: responseId || generateId(), + model: model || options.model, + timestamp, + index: output_index, + toolCall: { + id: item_id, + type: 'function', function: { - name: fc.name, - arguments: JSON.stringify(fc.arguments 
|| {}), + name, + arguments: chunk.arguments, }, - })); - } - - if (messageItem?.status) { - // If we have tool calls, the finish reason should be "tool_calls" - // Otherwise, use the status from the message item - if (toolCallMetadata.size > 0) { - finishReason = "tool_calls"; - } else { - finishReason = messageItem.status; - } + }, } - } else if (chunk.choices) { - // Chat Completions format (legacy) - delta = chunk.choices?.[0]?.delta; - finishReason = chunk.choices?.[0]?.finish_reason; } - // Handle content delta - if (delta?.content) { - accumulatedContent += delta.content; + if (chunk.type === 'response.output_text.done') { yield { - type: "content" as const, - id: responseId || chunk.id || generateId(), - model: model || chunk.model || options.model || "gpt-4o", + type: 'done', + id: responseId || generateId(), + model: model || options.model, timestamp, - delta: delta.content, - content: accumulatedContent, - role: "assistant" as const, - }; + finishReason: 'stop', + } } - // Handle tool calls - if (delta?.tool_calls) { - for (const toolCall of delta.tool_calls) { - // For Responses API, tool calls come as complete items, not deltas - // For Chat Completions, they come as deltas that need tracking - let toolCallId: string; - let toolCallName: string; - let toolCallArgs: string; - let actualIndex: number; - - if (toolCall.id) { - // Complete tool call (Responses API format) or first delta (Chat Completions) - toolCallId = toolCall.id; - toolCallName = toolCall.function?.name || ""; - toolCallArgs = - typeof toolCall.function?.arguments === "string" - ? 
toolCall.function.arguments - : JSON.stringify(toolCall.function?.arguments || {}); - - // Track for index assignment - if (!toolCallMetadata.has(toolCallId)) { - toolCallMetadata.set(toolCallId, { - index: nextIndex++, - name: toolCallName, - }); - } - const meta = toolCallMetadata.get(toolCallId)!; - actualIndex = meta.index; - } else { - // Delta chunk (Chat Completions format) - find by index - const openAIIndex = - typeof toolCall.index === "number" ? toolCall.index : 0; - const entry = Array.from(toolCallMetadata.entries())[openAIIndex]; - if (entry) { - const [id, meta] = entry; - toolCallId = id; - toolCallName = meta.name; - actualIndex = meta.index; - toolCallArgs = toolCall.function?.arguments || ""; - } else { - // Fallback - toolCallId = `call_${Date.now()}`; - toolCallName = ""; - actualIndex = openAIIndex; - toolCallArgs = ""; - } - } - - yield { - type: "tool_call", - id: responseId || chunk.id || generateId(), - model: model || chunk.model || options.model || "gpt-4o", - timestamp, - toolCall: { - id: toolCallId, - type: "function", - function: { - name: toolCallName, - arguments: toolCallArgs, - }, - }, - index: actualIndex, - }; + if (chunk.type === 'response.completed') { + yield { + type: 'done', + id: responseId || generateId(), + model: model || options.model, + timestamp, + finishReason: 'stop', } } - // Handle completion - only yield "done" for actual completion statuses - if (finishReason && finishReason !== "in_progress") { - // Get usage from chunk.response.usage (Responses API) or chunk.usage (Chat Completions) - const usage = chunk.response?.usage || chunk.usage; - + if (chunk.type === 'error') { yield { - type: "done" as const, - id: responseId || chunk.id || generateId(), - model: model || chunk.model || options.model || "gpt-4o", + type: 'error', + id: responseId || generateId(), + model: model || options.model, timestamp, - finishReason: finishReason as any, - usage: usage - ? 
{ - promptTokens: usage.input_tokens || usage.prompt_tokens || 0, - completionTokens: - usage.output_tokens || usage.completion_tokens || 0, - totalTokens: usage.total_tokens || 0, - } - : undefined, - }; - doneChunkEmitted = true; + error: { + message: chunk.message, + code: chunk.code ?? undefined, + }, + } } } - - // After stream ends, if we have tool calls but no done chunk was emitted, - // emit a done chunk with tool_calls finish reason - if (toolCallMetadata.size > 0 && !doneChunkEmitted) { - yield { - type: "done" as const, - id: responseId || generateId(), - model: model || options.model || "gpt-4o", - timestamp, - finishReason: "tool_calls" as any, - usage: undefined, - }; - } - - // Log summary of all event types encountered - console.log("[OpenAI Adapter] Stream completed. Event type summary:", { - totalChunks: chunkCount, - eventTypes: Object.fromEntries(eventTypeCounts), - accumulatedReasoningLength: accumulatedReasoning.length, - accumulatedContentLength: accumulatedContent.length, - hasReasoning: accumulatedReasoning.length > 0, - }); } catch (error: any) { console.log( - "[OpenAI Adapter] Stream ended with error. Event type summary:", + '[OpenAI Adapter] Stream ended with error. 
Event type summary:', { totalChunks: chunkCount, eventTypes: Object.fromEntries(eventTypeCounts), error: error.message, - } - ); + }, + ) yield { - type: "error", + type: 'error', id: generateId(), - model: options.model || "gpt-3.5-turbo", + model: options.model, timestamp, error: { - message: error.message || "Unknown error occurred", + message: error.message || 'Unknown error occurred', code: error.code, }, - }; + } } } @@ -1496,60 +376,42 @@ export class OpenAI extends BaseAdapter< * Maps common options to OpenAI-specific format * Handles translation of normalized options to OpenAI's API format */ - private mapChatOptionsToOpenAI(options: ChatCompletionOptions) { - try { - const providerOptions = options.providerOptions as - | Omit< - InternalTextProviderOptions, - | "max_output_tokens" - | "tools" - | "metadata" - | "temperature" - | "input" - | "top_p" - > - | undefined; - - const input = convertMessagesToInput(options.messages); - - const tools = options.tools - ? convertToolsToProviderFormat([...options.tools]) - : undefined; - - const requestParams: Omit< - OpenAI_SDK.Responses.ResponseCreateParams, - "stream" - > = { - model: options.model, - temperature: options.options?.temperature, - max_output_tokens: options.options?.maxTokens, - top_p: options.options?.topP, - metadata: options.options?.metadata, - ...providerOptions, - input, - tools, - }; - - // Debug: Log the reasoning config being sent to OpenAI - console.log("[OpenAI Adapter] Request params (reasoning check):", { - model: requestParams.model, - hasReasoning: !!requestParams.reasoning, - reasoning: requestParams.reasoning, - reasoningEffort: requestParams.reasoning?.effort, - providerOptionsKeys: providerOptions - ? 
Object.keys(providerOptions) - : [], - fullProviderOptions: providerOptions, - }); + private mapChatOptionsToOpenAI(options: ChatOptions) { + const providerOptions = options.providerOptions as + | Omit< + InternalTextProviderOptions, + | 'max_output_tokens' + | 'tools' + | 'metadata' + | 'temperature' + | 'input' + | 'top_p' + > + | undefined + const input = convertMessagesToInput(options.messages) + if (providerOptions) { + validateTextProviderOptions({ ...providerOptions, input }) + } - return requestParams; - } catch (error: any) { - console.error(">>> mapChatOptionsToOpenAI: Fatal error <<<"); - console.error(">>> Error message:", error?.message); - console.error(">>> Error stack:", error?.stack); - console.error(">>> Full error:", error); - throw error; + const tools = options.tools + ? convertToolsToProviderFormat(options.tools) + : undefined + + const requestParams: Omit< + OpenAI_SDK.Responses.ResponseCreateParams, + 'stream' + > = { + model: options.model, + temperature: options.options?.temperature, + max_output_tokens: options.options?.maxTokens, + top_p: options.options?.topP, + metadata: options.options?.metadata, + ...providerOptions, + input, + tools, } + + return requestParams } } @@ -1571,9 +433,9 @@ export class OpenAI extends BaseAdapter< */ export function createOpenAI( apiKey: string, - config?: Omit + config?: Omit, ): OpenAI { - return new OpenAI({ apiKey, ...config }); + return new OpenAI({ apiKey, ...config }) } /** @@ -1593,20 +455,20 @@ export function createOpenAI( * const aiInstance = ai(openai()); * ``` */ -export function openai(config?: Omit): OpenAI { +export function openai(config?: Omit): OpenAI { const env = - typeof globalThis !== "undefined" && (globalThis as any).window?.env + typeof globalThis !== 'undefined' && (globalThis as any).window?.env ? (globalThis as any).window.env - : typeof process !== "undefined" - ? process.env - : undefined; - const key = env?.OPENAI_API_KEY; + : typeof process !== 'undefined' + ? 
process.env + : undefined + const key = env?.OPENAI_API_KEY if (!key) { throw new Error( - "OPENAI_API_KEY is required. Please set it in your environment variables or use createOpenAI(apiKey, config) instead." - ); + 'OPENAI_API_KEY is required. Please set it in your environment variables or use createOpenAI(apiKey, config) instead.', + ) } - return createOpenAI(key, config); + return createOpenAI(key, config) } diff --git a/packages/typescript/ai-openai/src/text/text-provider-options.ts b/packages/typescript/ai-openai/src/text/text-provider-options.ts index 1b594b9a4..e44f86e8f 100644 --- a/packages/typescript/ai-openai/src/text/text-provider-options.ts +++ b/packages/typescript/ai-openai/src/text/text-provider-options.ts @@ -1,385 +1,395 @@ -import { ModelMessage } from "@tanstack/ai"; -import { ApplyPatchTool } from "../tools/apply-patch-tool"; -import { CodeInterpreterTool } from "../tools/code-interpreter-tool"; -import { ComputerUseTool } from "../tools/computer-use-tool"; -import { CustomTool } from "../tools/custom-tool"; -import { FileSearchTool } from "../tools/file-search-tool"; -import { FunctionTool } from "../tools/function-tool"; -import { ImageGenerationTool } from "../tools/image-generation-tool"; -import { LocalShellTool } from "../tools/local-shell-tool"; -import { MCPTool } from "../tools/mcp-tool"; -import { ShellTool } from "../tools/shell-tool"; -import { ToolChoice } from "../tools/tool-choice"; -import { WebSearchPreviewTool } from "../tools/web-search-preview-tool"; -import { WebSearchTool } from "../tools/web-search-tool"; -import OpenAI from "openai"; - -// Core, always-available options for Responses API -export interface OpenAIBaseOptions { - /** - -Whether to run the model response in the background. Learn more here: -https://platform.openai.com/docs/api-reference/responses/create#responses_create-background - @default false - */ - background?: boolean; - /** - * The conversation that this response belongs to. 
Items from this conversation are prepended to input_items for this response request. Input items and output items from this response are automatically added to this conversation after this response completes. - * - * https://platform.openai.com/docs/api-reference/responses/create#responses_create-conversation - */ - conversation?: string | { id: string } - /** - * https://platform.openai.com/docs/api-reference/responses/create#responses_create-include - Specify additional output data to include in the model response. Currently supported values are: - - web_search_call.action.sources: Include the sources of the web search tool call. - code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items. - computer_call_output.output.image_url: Include image urls from the computer call output. - file_search_call.results: Include the search results of the file search tool call. - message.input_image.image_url: Include image urls from the input message. - message.output_text.logprobs: Include logprobs with assistant messages. - reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program). - */ - include?: OpenAI.Responses.ResponseIncludable[]; - - /** - * The unique ID of the previous response to the model. Use this to create multi-turn conversations. Cannot be used in conjunction with conversation. - * https://platform.openai.com/docs/api-reference/responses/create#responses_create-previous_response_id - */ - previous_response_id?: string; - /** - * Reference to a prompt template and its variables. 
- * https://platform.openai.com/docs/api-reference/responses/create#responses_create-prompt - */ - prompt?: { - /** - * Unique identifier of your prompt, found in the dashboard - */ - id: string, - /** - * A specific version of your prompt (defaults to the "current" version as specified in the dashboard) - */ - version?: string, - /** - * A map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input message types like input_image or input_file - */ - variables?: Record; - } - /** - * Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. - * https://platform.openai.com/docs/api-reference/responses/create#responses_create-prompt_cache_key - */ - prompt_cache_key?: string; - - /** - * The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours - * https://platform.openai.com/docs/api-reference/responses/create#responses_create-prompt_cache_retention - */ - prompt_cache_retention?: "in-memory" | "24h"; - - /** - * A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. - * https://platform.openai.com/docs/api-reference/responses/create#responses_create-safety_identifier - */ - safety_identifier?: string; - - - /** - * Specifies the processing type used for serving the request. - -If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'. -If set to 'default', then the request will be processed with the standard pricing and performance for the selected model. -If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier. -When not set, the default behavior is 'auto'. 
-When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter. - -https://platform.openai.com/docs/api-reference/responses/create#responses_create-service_tier -@default 'auto' - */ - service_tier?: "auto" | "default" | "flex" | "priority"; - - /** - * Whether to store the generated model response for later retrieval via API. - * https://platform.openai.com/docs/api-reference/responses/create#responses_create-store - * @default true - */ - store?: boolean; - - /** - * Constrains the verbosity of the model's response. Lower values will result in more concise responses, while higher values will result in more verbose responses. - * https://platform.openai.com/docs/api-reference/responses/create#responses_create-text-verbosity - */ - verbosity?: "low" | "medium" | "high"; - /** - * An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. - * https://platform.openai.com/docs/api-reference/responses/create#responses_create-top_logprobs - */ - top_logprobs?: number; - - /** - * The truncation strategy to use for the model response. - - auto: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation. - disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error. - */ - truncation?: "auto" | "disabled"; -} - -// Feature fragments that can be stitched per-model -export interface OpenAIReasoningOptions { - /** - * Reasoning controls for models that support it. - * Lets you guide how much chain-of-thought computation to spend. 
- * https://platform.openai.com/docs/api-reference/responses/create#responses_create-reasoning - * https://platform.openai.com/docs/guides/reasoning - */ - reasoning?: { - /** - * gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1. - * All models before gpt-5.1 default to medium reasoning effort, and do not support none. - * The gpt-5-pro model defaults to (and only supports) high reasoning effort. - */ - effort?: "none" | "minimal" | "low" | "medium" | "high"; - }; - /** - * A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process - * https://platform.openai.com/docs/api-reference/responses/create#responses_create-reasoning-summary - */ - summary?: "auto" | "concise" | "detailed"; -} - -export interface OpenAIStructuredOutputOptions { - /** - * Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more: - https://platform.openai.com/docs/api-reference/responses/create#responses_create-text - */ - text?: OpenAI.Responses.ResponseTextConfig; -} - -export interface OpenAIToolsOptions { - /** - * The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored. - * https://platform.openai.com/docs/api-reference/responses/create#responses_create-max_tool_calls - */ - max_tool_calls?: number; - /** - * https://platform.openai.com/docs/api-reference/responses/create#responses_create-parallel_tool_calls - * Whether to allow the model to run tool calls in parallel. - * @default true - */ - parallel_tool_calls?: boolean; - /** - * Function/tool calling configuration. 
Supply tool schemas in `tools` - * and control selection here: - * - "auto" | "none" | "required" - * - { type: "tool", tool_name: string } (or model-specific shape) - * https://platform.openai.com/docs/guides/tools/tool-choice - * https://platform.openai.com/docs/api-reference/introduction (tools array) - */ - tool_choice?: - | "auto" - | "none" - | "required" - | ToolChoice; -} - -export interface OpenAIStreamingOptions { - /** - * Options for streaming responses. Only set this when you set stream: true - */ - stream_options?: { - /** - * When true, stream obfuscation will be enabled. Stream obfuscation adds random characters to an obfuscation field on streaming delta events to normalize payload sizes as a mitigation to certain side-channel attacks. These obfuscation fields are included by default, but add a small amount of overhead to the data stream. You can set include_obfuscation to false to optimize for bandwidth if you trust the network links between your application and the OpenAI API. - */ - include_obfuscation?: boolean; - }; -} - -export interface OpenAIMetadataOptions { - /** - * Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. - -Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. -https://platform.openai.com/docs/api-reference/responses/create#responses_create-metadata - */ - metadata?: Record; -} - -export type ExternalTextProviderOptions = - OpenAIBaseOptions & - OpenAIReasoningOptions & - OpenAIStructuredOutputOptions & - OpenAIToolsOptions & - OpenAIStreamingOptions & - OpenAIMetadataOptions; - - -/** - * Options your SDK forwards to OpenAI when doing chat/responses. - * Tip: gate these by model capability in your SDK, not just by presence. 
- */ -export interface InternalTextProviderOptions extends ExternalTextProviderOptions { - - - input: string | OpenAI.Responses.ResponseInput - /** - * A system (or developer) message inserted into the model's context. - -When using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses. -https://platform.openai.com/docs/api-reference/responses/create#responses_create-instructions - */ - instructions?: string; - /** - * An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens. - * (Responses API name: max_output_tokens) - * https://platform.openai.com/docs/api-reference/responses/create#responses_create-max_output_tokens - */ - max_output_tokens?: number; - - /** - * The model name (e.g. "gpt-4o", "gpt-5", "gpt-4.1-mini", etc). - * https://platform.openai.com/docs/api-reference/responses/create#responses_create-model - */ - model: string; - - /** - * If set to true, the model response data will be streamed to the client as it is generated using server-sent events. - * https://platform.openai.com/docs/api-reference/responses/create#responses_create-stream - * @default false - */ - stream?: boolean; - - /** - * What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. - * https://platform.openai.com/docs/api-reference/responses/create#responses_create-temperature - */ - temperature?: number; - /** - * An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. 
- * https://platform.openai.com/docs/api-reference/responses/create#responses_create-top_p - */ - top_p?: number; - /** - * Tools the model may call (functions, web_search, etc). - * Function tool example: - * { type: "function", function: { name, description?, parameters: JSONSchema } } - * https://platform.openai.com/docs/guides/tools/tool-choice - * https://platform.openai.com/docs/guides/tools-web-search - */ - tools?: Array< - FunctionTool | FileSearchTool | ComputerUseTool | WebSearchTool | MCPTool | CodeInterpreterTool | ImageGenerationTool | ShellTool | LocalShellTool | CustomTool | WebSearchPreviewTool | ApplyPatchTool - >; - -} - - -export const validateConversationAndPreviousResponseId = ( - options: InternalTextProviderOptions -) => { - if (options.conversation && options.previous_response_id) { - throw new Error( - "Cannot use both 'conversation' and 'previous_response_id' in the same request." - ); - } -}; - -export const validateMetadata = (options: InternalTextProviderOptions) => { - const metadata = options.metadata; - const tooManyKeys = metadata && Object.keys(metadata).length > 16; - if (tooManyKeys) { - throw new Error("Metadata cannot have more than 16 key-value pairs."); - } - const keyTooLong = metadata && Object.keys(metadata).some(key => key.length > 64); - if (keyTooLong) { - throw new Error("Metadata keys cannot be longer than 64 characters."); - } - const valueTooLong = metadata && Object.values(metadata).some(value => value.length > 512); - if (valueTooLong) { - throw new Error("Metadata values cannot be longer than 512 characters."); - } -}; - -export function convertMessagesToInput(messages: ModelMessage[]): OpenAI.Responses.ResponseInput { - const result: OpenAI.Responses.ResponseInput = []; - - for (const message of messages) { - - // Handle tool messages - convert to FunctionToolCallOutput - if (message.role === "tool") { - result.push({ - type: "function_call_output", - call_id: message.toolCallId || "", - output: typeof 
message.content === "string" ? message.content : JSON.stringify(message.content) - }); - continue; - } - - // Handle assistant messages - if (message.role === "assistant") { - // If the assistant message has tool calls, add them as FunctionToolCall objects - // OpenAI Responses API expects arguments as a string (JSON string) - if (message.toolCalls && message.toolCalls.length > 0) { - for (const toolCall of message.toolCalls) { - // Keep arguments as string for Responses API - // Our internal format stores arguments as a JSON string, which is what API expects - const argumentsString = typeof toolCall.function.arguments === "string" - ? toolCall.function.arguments - : JSON.stringify(toolCall.function.arguments || {}); - - result.push({ - type: "function_call", - call_id: toolCall.id, - name: toolCall.function.name, - arguments: argumentsString - } as any); - } - } - - // Add the assistant's text message if there is content - if (message.content) { - result.push({ - type: "message", - role: "assistant", - content: [ - { - type: "input_text", - text: message.content - } - ] - }); - } - - continue; - } - - // Handle system messages - if (message.role === "system") { - result.push({ - type: "message", - role: "system", - content: [ - { - type: "input_text", - text: message.content || "" - } - ] - }); - continue; - } - - // Handle user messages (default case) - result.push({ - type: "message", - role: "user", - content: [ - { - type: "input_text", - text: message.content || "" - } - ] - }); - } - - return result; -} +import type OpenAI from 'openai' +import type { ApplyPatchTool } from '../tools/apply-patch-tool' +import type { CodeInterpreterTool } from '../tools/code-interpreter-tool' +import type { ComputerUseTool } from '../tools/computer-use-tool' +import type { CustomTool } from '../tools/custom-tool' +import type { FileSearchTool } from '../tools/file-search-tool' +import type { FunctionTool } from '../tools/function-tool' +import type { ImageGenerationTool } from 
'../tools/image-generation-tool' +import type { LocalShellTool } from '../tools/local-shell-tool' +import type { MCPTool } from '../tools/mcp-tool' +import type { ShellTool } from '../tools/shell-tool' +import type { ToolChoice } from '../tools/tool-choice' +import type { WebSearchPreviewTool } from '../tools/web-search-preview-tool' +import type { WebSearchTool } from '../tools/web-search-tool' +import type { ModelMessage } from '@tanstack/ai' + +// Core, always-available options for Responses API +export interface OpenAIBaseOptions { + /** + +Whether to run the model response in the background. Learn more here: +https://platform.openai.com/docs/api-reference/responses/create#responses_create-background + @default false + */ + background?: boolean + /** + * The conversation that this response belongs to. Items from this conversation are prepended to input_items for this response request. Input items and output items from this response are automatically added to this conversation after this response completes. + * + * https://platform.openai.com/docs/api-reference/responses/create#responses_create-conversation + */ + conversation?: string | { id: string } + /** + * https://platform.openai.com/docs/api-reference/responses/create#responses_create-include + Specify additional output data to include in the model response. Currently supported values are: + + web_search_call.action.sources: Include the sources of the web search tool call. + code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items. + computer_call_output.output.image_url: Include image urls from the computer call output. + file_search_call.results: Include the search results of the file search tool call. + message.input_image.image_url: Include image urls from the input message. + message.output_text.logprobs: Include logprobs with assistant messages. 
+ reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program). + */ + include?: Array<OpenAI.Responses.ResponseIncludable> + + /** + * The unique ID of the previous response to the model. Use this to create multi-turn conversations. Cannot be used in conjunction with conversation. + * https://platform.openai.com/docs/api-reference/responses/create#responses_create-previous_response_id + */ + previous_response_id?: string + /** + * Reference to a prompt template and its variables. + * https://platform.openai.com/docs/api-reference/responses/create#responses_create-prompt + */ + prompt?: { + /** + * Unique identifier of your prompt, found in the dashboard + */ + id: string + /** + * A specific version of your prompt (defaults to the "current" version as specified in the dashboard) + */ + version?: string + /** + * A map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input message types like input_image or input_file + */ + variables?: Record<string, unknown> + } + /** + * Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. + * https://platform.openai.com/docs/api-reference/responses/create#responses_create-prompt_cache_key + */ + prompt_cache_key?: string + + /** + * The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours + * https://platform.openai.com/docs/api-reference/responses/create#responses_create-prompt_cache_retention + */ + prompt_cache_retention?: 'in-memory' | '24h' + + /** + * A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies.
The IDs should be a string that uniquely identifies each user. + * https://platform.openai.com/docs/api-reference/responses/create#responses_create-safety_identifier + */ + safety_identifier?: string + + /** + * Specifies the processing type used for serving the request. + +If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'. +If set to 'default', then the request will be processed with the standard pricing and performance for the selected model. +If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier. +When not set, the default behavior is 'auto'. +When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter. + +https://platform.openai.com/docs/api-reference/responses/create#responses_create-service_tier +@default 'auto' + */ + service_tier?: 'auto' | 'default' | 'flex' | 'priority' + + /** + * Whether to store the generated model response for later retrieval via API. + * https://platform.openai.com/docs/api-reference/responses/create#responses_create-store + * @default true + */ + store?: boolean + + /** + * Constrains the verbosity of the model's response. Lower values will result in more concise responses, while higher values will result in more verbose responses. + * https://platform.openai.com/docs/api-reference/responses/create#responses_create-text-verbosity + */ + verbosity?: 'low' | 'medium' | 'high' + /** + * An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. 
+ * https://platform.openai.com/docs/api-reference/responses/create#responses_create-top_logprobs + */ + top_logprobs?: number + + /** + * The truncation strategy to use for the model response. + + auto: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation. + disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error. + */ + truncation?: 'auto' | 'disabled' +} + +// Feature fragments that can be stitched per-model +export interface OpenAIReasoningOptions { + /** + * Reasoning controls for models that support it. + * Lets you guide how much chain-of-thought computation to spend. + * https://platform.openai.com/docs/api-reference/responses/create#responses_create-reasoning + * https://platform.openai.com/docs/guides/reasoning + */ + reasoning?: { + /** + * gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1. + * All models before gpt-5.1 default to medium reasoning effort, and do not support none. + * The gpt-5-pro model defaults to (and only supports) high reasoning effort. + */ + effort?: 'none' | 'minimal' | 'low' | 'medium' | 'high' + } + /** + * A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process + * https://platform.openai.com/docs/api-reference/responses/create#responses_create-reasoning-summary + */ + summary?: 'auto' | 'concise' | 'detailed' +} + +export interface OpenAIStructuredOutputOptions { + /** + * Configuration options for a text response from the model. Can be plain text or structured JSON data. 
Learn more: + https://platform.openai.com/docs/api-reference/responses/create#responses_create-text + */ + text?: OpenAI.Responses.ResponseTextConfig +} + +export interface OpenAIToolsOptions { + /** + * The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored. + * https://platform.openai.com/docs/api-reference/responses/create#responses_create-max_tool_calls + */ + max_tool_calls?: number + /** + * https://platform.openai.com/docs/api-reference/responses/create#responses_create-parallel_tool_calls + * Whether to allow the model to run tool calls in parallel. + * @default true + */ + parallel_tool_calls?: boolean + /** + * Function/tool calling configuration. Supply tool schemas in `tools` + * and control selection here: + * - "auto" | "none" | "required" + * - { type: "tool", tool_name: string } (or model-specific shape) + * https://platform.openai.com/docs/guides/tools/tool-choice + * https://platform.openai.com/docs/api-reference/introduction (tools array) + */ + tool_choice?: 'auto' | 'none' | 'required' | ToolChoice +} + +export interface OpenAIStreamingOptions { + /** + * Options for streaming responses. Only set this when you set stream: true + */ + stream_options?: { + /** + * When true, stream obfuscation will be enabled. Stream obfuscation adds random characters to an obfuscation field on streaming delta events to normalize payload sizes as a mitigation to certain side-channel attacks. These obfuscation fields are included by default, but add a small amount of overhead to the data stream. You can set include_obfuscation to false to optimize for bandwidth if you trust the network links between your application and the OpenAI API. 
+ */ + include_obfuscation?: boolean + } +} + +export interface OpenAIMetadataOptions { + /** + * Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. + +Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. +https://platform.openai.com/docs/api-reference/responses/create#responses_create-metadata + */ + metadata?: Record<string, string> +} + +export type ExternalTextProviderOptions = OpenAIBaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions & + OpenAIMetadataOptions + +/** + * Options your SDK forwards to OpenAI when doing chat/responses. + * Tip: gate these by model capability in your SDK, not just by presence. + */ +export interface InternalTextProviderOptions + extends ExternalTextProviderOptions { + input: string | OpenAI.Responses.ResponseInput + /** + * A system (or developer) message inserted into the model's context. + +When using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses. +https://platform.openai.com/docs/api-reference/responses/create#responses_create-instructions + */ + instructions?: string + /** + * An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens. + * (Responses API name: max_output_tokens) + * https://platform.openai.com/docs/api-reference/responses/create#responses_create-max_output_tokens + */ + max_output_tokens?: number + + /** + * The model name (e.g. "gpt-4o", "gpt-5", "gpt-4.1-mini", etc).
+ * https://platform.openai.com/docs/api-reference/responses/create#responses_create-model + */ + model: string + + /** + * If set to true, the model response data will be streamed to the client as it is generated using server-sent events. + * https://platform.openai.com/docs/api-reference/responses/create#responses_create-stream + * @default false + */ + stream?: boolean + + /** + * What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. + * https://platform.openai.com/docs/api-reference/responses/create#responses_create-temperature + */ + temperature?: number + /** + * An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. + * https://platform.openai.com/docs/api-reference/responses/create#responses_create-top_p + */ + top_p?: number + /** + * Tools the model may call (functions, web_search, etc). 
+ * Function tool example: + * { type: "function", function: { name, description?, parameters: JSONSchema } } + * https://platform.openai.com/docs/guides/tools/tool-choice + * https://platform.openai.com/docs/guides/tools-web-search + */ + tools?: Array< + | FunctionTool + | FileSearchTool + | ComputerUseTool + | WebSearchTool + | MCPTool + | CodeInterpreterTool + | ImageGenerationTool + | ShellTool + | LocalShellTool + | CustomTool + | WebSearchPreviewTool + | ApplyPatchTool + > +} + +const validateConversationAndPreviousResponseId = ( + options: InternalTextProviderOptions, +) => { + if (options.conversation && options.previous_response_id) { + throw new Error( + "Cannot use both 'conversation' and 'previous_response_id' in the same request.", + ) + } +} + +export const validateTextProviderOptions = ( + options: InternalTextProviderOptions, +) => { + validateMetadata(options) + validateConversationAndPreviousResponseId(options) +} + +const validateMetadata = (options: InternalTextProviderOptions) => { + const metadata = options.metadata + const tooManyKeys = metadata && Object.keys(metadata).length > 16 + if (tooManyKeys) { + throw new Error('Metadata cannot have more than 16 key-value pairs.') + } + const keyTooLong = + metadata && Object.keys(metadata).some((key) => key.length > 64) + if (keyTooLong) { + throw new Error('Metadata keys cannot be longer than 64 characters.') + } + const valueTooLong = + metadata && Object.values(metadata).some((value) => value.length > 512) + if (valueTooLong) { + throw new Error('Metadata values cannot be longer than 512 characters.') + } +} + +export function convertMessagesToInput( + messages: Array<ModelMessage>, +): OpenAI.Responses.ResponseInput { + const result: OpenAI.Responses.ResponseInput = [] + + for (const message of messages) { + // Handle tool messages - convert to FunctionToolCallOutput + if (message.role === 'tool') { + result.push({ + type: 'function_call_output', + call_id: message.toolCallId || '', + output: + typeof
message.content === 'string' + ? message.content + : JSON.stringify(message.content), + }) + continue + } + + // Handle assistant messages + if (message.role === 'assistant') { + // If the assistant message has tool calls, add them as FunctionToolCall objects + // OpenAI Responses API expects arguments as a string (JSON string) + if (message.toolCalls && message.toolCalls.length > 0) { + for (const toolCall of message.toolCalls) { + // Keep arguments as string for Responses API + // Our internal format stores arguments as a JSON string, which is what API expects + const argumentsString = + typeof toolCall.function.arguments === 'string' + ? toolCall.function.arguments + : JSON.stringify(toolCall.function.arguments) + + result.push({ + type: 'function_call', + call_id: toolCall.id, + name: toolCall.function.name, + arguments: argumentsString, + } as any) + } + } + + // Add the assistant's text message if there is content + if (message.content) { + result.push({ + type: 'message', + role: 'assistant', + content: message.content, + }) + } + + continue + } + + // Handle system messages + if (message.role === 'system') { + result.push({ + type: 'message', + role: 'system', + content: [ + { + type: 'input_text', + text: message.content || '', + }, + ], + }) + continue + } + + // Handle user messages (default case) + result.push({ + type: 'message', + role: 'user', + content: [ + { + type: 'input_text', + text: message.content || '', + }, + ], + }) + } + + return result +} diff --git a/packages/typescript/ai-openai/src/tools/apply-patch-tool.ts b/packages/typescript/ai-openai/src/tools/apply-patch-tool.ts index 4cca2735d..08b28fa1b 100644 --- a/packages/typescript/ai-openai/src/tools/apply-patch-tool.ts +++ b/packages/typescript/ai-openai/src/tools/apply-patch-tool.ts @@ -1,30 +1,30 @@ -import type { Tool } from "@tanstack/ai"; -import OpenAI from "openai"; - - -export type ApplyPatchTool = OpenAI.Responses.ApplyPatchTool; - - -/** - * Converts a standard Tool to OpenAI 
ApplyPatchTool format - */ -export function convertApplyPatchToolToAdapterFormat(_tool: Tool): ApplyPatchTool { - return { - type: "apply_patch", - }; -} - -/** - * Creates a standard Tool from ApplyPatchTool parameters - */ -export function applyPatchTool(): Tool { - return { - type: "function", - function: { - name: "apply_patch", - description: "Apply a patch to modify files", - parameters: {}, - }, - metadata: {}, - }; -} \ No newline at end of file +import type OpenAI from 'openai' +import type { Tool } from '@tanstack/ai' + +export type ApplyPatchTool = OpenAI.Responses.ApplyPatchTool + +/** + * Converts a standard Tool to OpenAI ApplyPatchTool format + */ +export function convertApplyPatchToolToAdapterFormat( + _tool: Tool, +): ApplyPatchTool { + return { + type: 'apply_patch', + } +} + +/** + * Creates a standard Tool from ApplyPatchTool parameters + */ +export function applyPatchTool(): Tool { + return { + type: 'function', + function: { + name: 'apply_patch', + description: 'Apply a patch to modify files', + parameters: {}, + }, + metadata: {}, + } +} diff --git a/packages/typescript/ai-openai/src/tools/code-interpreter-tool.ts b/packages/typescript/ai-openai/src/tools/code-interpreter-tool.ts index ce55b3569..c03e97937 100644 --- a/packages/typescript/ai-openai/src/tools/code-interpreter-tool.ts +++ b/packages/typescript/ai-openai/src/tools/code-interpreter-tool.ts @@ -1,36 +1,35 @@ -import type { Tool } from "@tanstack/ai"; -import type OpenAI from "openai" - -export type CodeInterpreterTool = OpenAI.Responses.Tool.CodeInterpreter; - - -/** - * Converts a standard Tool to OpenAI CodeInterpreterTool format - */ -export function convertCodeInterpreterToolToAdapterFormat(tool: Tool): CodeInterpreterTool { - const metadata = tool.metadata as CodeInterpreterTool; - return { - type: "code_interpreter", - container: metadata.container, - }; -} - -/** - * Creates a standard Tool from CodeInterpreterTool parameters - */ -export function codeInterpreterTool( - 
container: CodeInterpreterTool -): Tool { - return { - type: "function", - function: { - name: "code_interpreter", - description: "Execute code in a sandboxed environment", - parameters: {}, - }, - metadata: { - type: "code_interpreter", - container, - }, - }; -} \ No newline at end of file +import type { Tool } from '@tanstack/ai' +import type OpenAI from 'openai' + +export type CodeInterpreterTool = OpenAI.Responses.Tool.CodeInterpreter + +/** + * Converts a standard Tool to OpenAI CodeInterpreterTool format + */ +export function convertCodeInterpreterToolToAdapterFormat( + tool: Tool, +): CodeInterpreterTool { + const metadata = tool.metadata as CodeInterpreterTool + return { + type: 'code_interpreter', + container: metadata.container, + } +} + +/** + * Creates a standard Tool from CodeInterpreterTool parameters + */ +export function codeInterpreterTool(container: CodeInterpreterTool): Tool { + return { + type: 'function', + function: { + name: 'code_interpreter', + description: 'Execute code in a sandboxed environment', + parameters: {}, + }, + metadata: { + type: 'code_interpreter', + container, + }, + } +} diff --git a/packages/typescript/ai-openai/src/tools/computer-use-tool.ts b/packages/typescript/ai-openai/src/tools/computer-use-tool.ts index b64184409..0bce5ad3b 100644 --- a/packages/typescript/ai-openai/src/tools/computer-use-tool.ts +++ b/packages/typescript/ai-openai/src/tools/computer-use-tool.ts @@ -1,35 +1,35 @@ -import type { Tool } from "@tanstack/ai"; -import OpenAI from "openai"; - -export type ComputerUseTool = OpenAI.Responses.ComputerTool; -/** - * Converts a standard Tool to OpenAI ComputerUseTool format - */ -export function convertComputerUseToolToAdapterFormat(tool: Tool): ComputerUseTool { - const metadata = tool.metadata as ComputerUseTool; - return { - type: "computer_use_preview", - display_height: metadata.display_height, - display_width: metadata.display_width, - environment: metadata.environment, - }; -} - -/** - * Creates a 
standard Tool from ComputerUseTool parameters - */ -export function computerUseTool( - toolData: ComputerUseTool -): Tool { - return { - type: "function", - function: { - name: "computer_use_preview", - description: "Control a virtual computer", - parameters: {}, - }, - metadata: { - ...toolData, - }, - }; -} \ No newline at end of file +import type OpenAI from 'openai' +import type { Tool } from '@tanstack/ai' + +export type ComputerUseTool = OpenAI.Responses.ComputerTool +/** + * Converts a standard Tool to OpenAI ComputerUseTool format + */ +export function convertComputerUseToolToAdapterFormat( + tool: Tool, +): ComputerUseTool { + const metadata = tool.metadata as ComputerUseTool + return { + type: 'computer_use_preview', + display_height: metadata.display_height, + display_width: metadata.display_width, + environment: metadata.environment, + } +} + +/** + * Creates a standard Tool from ComputerUseTool parameters + */ +export function computerUseTool(toolData: ComputerUseTool): Tool { + return { + type: 'function', + function: { + name: 'computer_use_preview', + description: 'Control a virtual computer', + parameters: {}, + }, + metadata: { + ...toolData, + }, + } +} diff --git a/packages/typescript/ai-openai/src/tools/custom-tool.ts b/packages/typescript/ai-openai/src/tools/custom-tool.ts index 907af69a1..5e365d85d 100644 --- a/packages/typescript/ai-openai/src/tools/custom-tool.ts +++ b/packages/typescript/ai-openai/src/tools/custom-tool.ts @@ -1,38 +1,34 @@ -import type { Tool } from "@tanstack/ai"; -import OpenAI from "openai"; - -export type CustomTool = OpenAI.Responses.CustomTool - - - -/** - * Converts a standard Tool to OpenAI CustomTool format - */ -export function convertCustomToolToAdapterFormat(tool: Tool): CustomTool { - const metadata = tool.metadata as CustomTool; - return { - type: "custom", - name: metadata.name, - description: metadata.description, - format: metadata.format, - }; -} - -/** - * Creates a standard Tool from CustomTool 
parameters - */ -export function customTool( - toolData: CustomTool -): Tool { - return { - type: "function", - function: { - name: "custom", - description: toolData.description || "A custom tool", - parameters: {}, - }, - metadata: { - ...toolData, - }, - }; -} \ No newline at end of file +import type OpenAI from 'openai' +import type { Tool } from '@tanstack/ai' + +export type CustomTool = OpenAI.Responses.CustomTool + +/** + * Converts a standard Tool to OpenAI CustomTool format + */ +export function convertCustomToolToAdapterFormat(tool: Tool): CustomTool { + const metadata = tool.metadata as CustomTool + return { + type: 'custom', + name: metadata.name, + description: metadata.description, + format: metadata.format, + } +} + +/** + * Creates a standard Tool from CustomTool parameters + */ +export function customTool(toolData: CustomTool): Tool { + return { + type: 'function', + function: { + name: 'custom', + description: toolData.description || 'A custom tool', + parameters: {}, + }, + metadata: { + ...toolData, + }, + } +} diff --git a/packages/typescript/ai-openai/src/tools/file-search-tool.ts b/packages/typescript/ai-openai/src/tools/file-search-tool.ts index 0ab6f9ae4..0aae36710 100644 --- a/packages/typescript/ai-openai/src/tools/file-search-tool.ts +++ b/packages/typescript/ai-openai/src/tools/file-search-tool.ts @@ -1,45 +1,46 @@ -import type { Tool } from "@tanstack/ai"; -import OpenAI from "openai"; - -export const validateMaxNumResults = (maxNumResults: number | undefined) => { - if (maxNumResults && (maxNumResults < 1 || maxNumResults > 50)) { - throw new Error("max_num_results must be between 1 and 50."); - } -}; - -export type FileSearchTool = OpenAI.Responses.FileSearchTool; - - -/** - * Converts a standard Tool to OpenAI FileSearchTool format - */ -export function convertFileSearchToolToAdapterFormat(tool: Tool): OpenAI.Responses.FileSearchTool { - const metadata = tool.metadata as OpenAI.Responses.FileSearchTool; - return { - type: 
"file_search", - vector_store_ids: metadata.vector_store_ids, - max_num_results: metadata.max_num_results, - ranking_options: metadata.ranking_options, - filters: metadata.filters, - }; -} - -/** - * Creates a standard Tool from FileSearchTool parameters - */ -export function fileSearchTool( - toolData: OpenAI.Responses.FileSearchTool -): Tool { - validateMaxNumResults(toolData.max_num_results); - return { - type: "function", - function: { - name: "file_search", - description: "Search files in vector stores", - parameters: {}, - }, - metadata: { - ...toolData, - }, - }; -} \ No newline at end of file +import type OpenAI from 'openai' +import type { Tool } from '@tanstack/ai' + +const validateMaxNumResults = (maxNumResults: number | undefined) => { + if (maxNumResults && (maxNumResults < 1 || maxNumResults > 50)) { + throw new Error('max_num_results must be between 1 and 50.') + } +} + +export type FileSearchTool = OpenAI.Responses.FileSearchTool + +/** + * Converts a standard Tool to OpenAI FileSearchTool format + */ +export function convertFileSearchToolToAdapterFormat( + tool: Tool, +): OpenAI.Responses.FileSearchTool { + const metadata = tool.metadata as OpenAI.Responses.FileSearchTool + return { + type: 'file_search', + vector_store_ids: metadata.vector_store_ids, + max_num_results: metadata.max_num_results, + ranking_options: metadata.ranking_options, + filters: metadata.filters, + } +} + +/** + * Creates a standard Tool from FileSearchTool parameters + */ +export function fileSearchTool( + toolData: OpenAI.Responses.FileSearchTool, +): Tool { + validateMaxNumResults(toolData.max_num_results) + return { + type: 'function', + function: { + name: 'file_search', + description: 'Search files in vector stores', + parameters: {}, + }, + metadata: { + ...toolData, + }, + } +} diff --git a/packages/typescript/ai-openai/src/tools/function-tool.ts b/packages/typescript/ai-openai/src/tools/function-tool.ts index 5cc5de89c..d7e324cef 100644 --- 
a/packages/typescript/ai-openai/src/tools/function-tool.ts +++ b/packages/typescript/ai-openai/src/tools/function-tool.ts @@ -1,49 +1,49 @@ -import type { Tool } from "@tanstack/ai"; -import OpenAI from "openai"; - -export type FunctionTool = OpenAI.Responses.FunctionTool - - -/** - * Converts a standard Tool to OpenAI FunctionTool format - */ -export function convertFunctionToolToAdapterFormat(tool: Tool): FunctionTool { - // If tool has metadata (created via functionTool helper), use that - if (tool.metadata) { - const metadata = tool.metadata as Omit<FunctionTool, "type">; - return { - type: "function", - ...metadata - }; - } - - // Otherwise, convert directly from tool.function (regular Tool structure) - // For Responses API, FunctionTool has name at top level, with function containing description and parameters - return { - type: "function", - name: tool.function.name, - function: { - description: tool.function.description, - parameters: tool.function.parameters, - }, - } as FunctionTool; -} - -/** - * Creates a standard Tool from FunctionTool parameters - */ -export function functionTool( - config: Omit<FunctionTool, "type"> -): Tool { - return { - type: "function", - function: { - name: config.name, - description: config.description ?? "", - parameters: config.parameters ??
{, - }, - metadata: { - ...config - }, - }; -} \ No newline at end of file +import type OpenAI from 'openai' +import type { Tool } from '@tanstack/ai' + +export type FunctionTool = OpenAI.Responses.FunctionTool + +/** + * Converts a standard Tool to OpenAI FunctionTool format + */ +export function convertFunctionToolToAdapterFormat(tool: Tool): FunctionTool { + // If tool has metadata (created via functionTool helper), use that + if (tool.metadata) { + const metadata = tool.metadata as Omit<FunctionTool, 'type'> + return { + type: 'function', + ...metadata, + } + } + + // Otherwise, convert directly from tool.function (regular Tool structure) + // For the Responses API, FunctionTool has name, description, and parameters at the top level + return { + type: 'function', + name: tool.function.name, + description: tool.function.description, + parameters: { + ...tool.function.parameters, + additionalProperties: false, + }, + + strict: true, + } satisfies FunctionTool +} + +/** + * Creates a standard Tool from FunctionTool parameters + */ +export function functionTool(config: Omit<FunctionTool, 'type'>): Tool { + return { + type: 'function', + function: { + name: config.name, + description: config.description ?? '', + parameters: config.parameters ??
{}, + }, + metadata: { + ...config, + }, + } +} diff --git a/packages/typescript/ai-openai/src/tools/image-generation-tool.ts b/packages/typescript/ai-openai/src/tools/image-generation-tool.ts index 27eaab8e2..3ed45c922 100644 --- a/packages/typescript/ai-openai/src/tools/image-generation-tool.ts +++ b/packages/typescript/ai-openai/src/tools/image-generation-tool.ts @@ -1,42 +1,43 @@ -import type { Tool } from "@tanstack/ai"; -import OpenAI from "openai"; - -export type ImageGenerationTool = OpenAI.Responses.Tool.ImageGeneration; - - -export const validatePartialImages = (value: number | undefined) => { - if (value !== undefined && (value < 0 || value > 3)) { - throw new Error("partial_images must be between 0 and 3"); - } -}; - -/** - * Converts a standard Tool to OpenAI ImageGenerationTool format - */ -export function convertImageGenerationToolToAdapterFormat(tool: Tool): ImageGenerationTool { - const metadata = tool.metadata as Omit<ImageGenerationTool, "type">; - return { - type: "image_generation", - ...metadata - }; -} - -/** - * Creates a standard Tool from ImageGenerationTool parameters - */ -export function imageGenerationTool( - toolData: Omit<ImageGenerationTool, "type"> -): Tool { - validatePartialImages(toolData.partial_images); - return { - type: "function", - function: { - name: "image_generation", - description: "Generate images based on text descriptions", - parameters: {}, - }, - metadata: { - ...toolData - }, - }; -} \ No newline at end of file +import type OpenAI from 'openai' +import type { Tool } from '@tanstack/ai' + +export type ImageGenerationTool = OpenAI.Responses.Tool.ImageGeneration + +const validatePartialImages = (value: number | undefined) => { + if (value !== undefined && (value < 0 || value > 3)) { + throw new Error('partial_images must be between 0 and 3') + } +} + +/** + * Converts a standard Tool to OpenAI ImageGenerationTool format + */ +export function convertImageGenerationToolToAdapterFormat( + tool: Tool, +): ImageGenerationTool { + const metadata = tool.metadata as Omit<ImageGenerationTool, 'type'> + return
{ + type: 'image_generation', + ...metadata, + } +} + +/** + * Creates a standard Tool from ImageGenerationTool parameters + */ +export function imageGenerationTool( + toolData: Omit<ImageGenerationTool, 'type'>, +): Tool { + validatePartialImages(toolData.partial_images) + return { + type: 'function', + function: { + name: 'image_generation', + description: 'Generate images based on text descriptions', + parameters: {}, + }, + metadata: { + ...toolData, + }, + } +} diff --git a/packages/typescript/ai-openai/src/tools/index.ts b/packages/typescript/ai-openai/src/tools/index.ts index cf8197a93..1795d7fce 100644 --- a/packages/typescript/ai-openai/src/tools/index.ts +++ b/packages/typescript/ai-openai/src/tools/index.ts @@ -1,41 +1,41 @@ -import type { ApplyPatchTool } from "./apply-patch-tool"; -import type { CodeInterpreterTool } from "./code-interpreter-tool"; -import type { ComputerUseTool } from "./computer-use-tool"; -import type { CustomTool } from "./custom-tool"; -import type { FileSearchTool } from "./file-search-tool"; -import type { FunctionTool } from "./function-tool"; -import type { ImageGenerationTool } from "./image-generation-tool"; -import type { LocalShellTool } from "./local-shell-tool"; -import type { MCPTool } from "./mcp-tool"; -import type { ShellTool } from "./shell-tool"; -import type { WebSearchPreviewTool } from "./web-search-preview-tool"; -import type { WebSearchTool } from "./web-search-tool"; - -export type OpenAITool = - | ApplyPatchTool - | CodeInterpreterTool - | ComputerUseTool - | CustomTool - | FileSearchTool - | FunctionTool - | ImageGenerationTool - | LocalShellTool - | MCPTool - | ShellTool - | WebSearchPreviewTool - | WebSearchTool; - -export * from "./apply-patch-tool"; -export * from "./code-interpreter-tool"; -export * from "./computer-use-tool"; -export * from "./custom-tool"; -export * from "./file-search-tool"; -export * from "./function-tool"; -export * from "./image-generation-tool"; -export * from "./local-shell-tool"; -export * from
"./mcp-tool"; -export * from "./shell-tool"; -export * from "./tool-choice"; -export * from "./tool-converter"; -export * from "./web-search-preview-tool"; -export * from "./web-search-tool"; +import type { ApplyPatchTool } from './apply-patch-tool' +import type { CodeInterpreterTool } from './code-interpreter-tool' +import type { ComputerUseTool } from './computer-use-tool' +import type { CustomTool } from './custom-tool' +import type { FileSearchTool } from './file-search-tool' +import type { FunctionTool } from './function-tool' +import type { ImageGenerationTool } from './image-generation-tool' +import type { LocalShellTool } from './local-shell-tool' +import type { MCPTool } from './mcp-tool' +import type { ShellTool } from './shell-tool' +import type { WebSearchPreviewTool } from './web-search-preview-tool' +import type { WebSearchTool } from './web-search-tool' + +export type OpenAITool = + | ApplyPatchTool + | CodeInterpreterTool + | ComputerUseTool + | CustomTool + | FileSearchTool + | FunctionTool + | ImageGenerationTool + | LocalShellTool + | MCPTool + | ShellTool + | WebSearchPreviewTool + | WebSearchTool + +export * from './apply-patch-tool' +export * from './code-interpreter-tool' +export * from './computer-use-tool' +export * from './custom-tool' +export * from './file-search-tool' +export * from './function-tool' +export * from './image-generation-tool' +export * from './local-shell-tool' +export * from './mcp-tool' +export * from './shell-tool' +export * from './tool-choice' +export * from './tool-converter' +export * from './web-search-preview-tool' +export * from './web-search-tool' diff --git a/packages/typescript/ai-openai/src/tools/local-shell-tool.ts b/packages/typescript/ai-openai/src/tools/local-shell-tool.ts index 8a70237b7..40c0bc3be 100644 --- a/packages/typescript/ai-openai/src/tools/local-shell-tool.ts +++ b/packages/typescript/ai-openai/src/tools/local-shell-tool.ts @@ -1,29 +1,30 @@ -import type { Tool } from "@tanstack/ai"; -import 
OpenAI from "openai"; - -export type LocalShellTool = OpenAI.Responses.Tool.LocalShell; - -/** - * Converts a standard Tool to OpenAI LocalShellTool format - */ -export function convertLocalShellToolToAdapterFormat(_tool: Tool): LocalShellTool { - return { - type: "local_shell", - }; -} - -/** - * Creates a standard Tool from LocalShellTool parameters - */ -export function localShellTool(): Tool { - return { - type: "function", - function: { - name: "local_shell", - description: "Execute local shell commands", - parameters: {}, - }, - metadata: {}, - }; -} - +import type OpenAI from 'openai' +import type { Tool } from '@tanstack/ai' + +export type LocalShellTool = OpenAI.Responses.Tool.LocalShell + +/** + * Converts a standard Tool to OpenAI LocalShellTool format + */ +export function convertLocalShellToolToAdapterFormat( + _tool: Tool, +): LocalShellTool { + return { + type: 'local_shell', + } +} + +/** + * Creates a standard Tool from LocalShellTool parameters + */ +export function localShellTool(): Tool { + return { + type: 'function', + function: { + name: 'local_shell', + description: 'Execute local shell commands', + parameters: {}, + }, + metadata: {}, + } +} diff --git a/packages/typescript/ai-openai/src/tools/mcp-tool.ts b/packages/typescript/ai-openai/src/tools/mcp-tool.ts index ba9965abd..0792a099f 100644 --- a/packages/typescript/ai-openai/src/tools/mcp-tool.ts +++ b/packages/typescript/ai-openai/src/tools/mcp-tool.ts @@ -1,47 +1,45 @@ -import type { Tool } from "@tanstack/ai"; -import OpenAI from "openai"; - -export type MCPTool = OpenAI.Responses.Tool.Mcp; - -export const validateMCPtool = (tool: MCPTool) => { - if (!tool.server_url && !tool.connector_id) { - throw new Error("Either server_url or connector_id must be provided."); - } - if (tool.connector_id && tool.server_url) { - throw new Error("Only one of server_url or connector_id can be provided."); - } -} - -/** -* Converts a standard Tool to OpenAI MCPTool format -*/ -export function 
convertMCPToolToAdapterFormat(tool: Tool): MCPTool { - const metadata = tool.metadata as Omit<MCPTool, "type">; - - const mcpTool: MCPTool = { - type: "mcp", - ...metadata - } - - validateMCPtool(mcpTool); - return mcpTool; -} - -/** - * Creates a standard Tool from MCPTool parameters - */ -export function mcpTool( - toolData: Omit<MCPTool, "type"> -): Tool { - validateMCPtool({ ...toolData, type: "mcp" }); - - return { - type: "function", - function: { - name: "mcp", - description: toolData.server_description || "", - parameters: {}, - }, - metadata: toolData - }; -} \ No newline at end of file +import type OpenAI from 'openai' +import type { Tool } from '@tanstack/ai' + +export type MCPTool = OpenAI.Responses.Tool.Mcp + +export function validateMCPtool(tool: MCPTool) { + if (!tool.server_url && !tool.connector_id) { + throw new Error('Either server_url or connector_id must be provided.') + } + if (tool.connector_id && tool.server_url) { + throw new Error('Only one of server_url or connector_id can be provided.') + } +} + +/** + * Converts a standard Tool to OpenAI MCPTool format + */ +export function convertMCPToolToAdapterFormat(tool: Tool): MCPTool { + const metadata = tool.metadata as Omit<MCPTool, 'type'> + + const mcpTool: MCPTool = { + type: 'mcp', + ...metadata, + } + + validateMCPtool(mcpTool) + return mcpTool +} + +/** + * Creates a standard Tool from MCPTool parameters + */ +export function mcpTool(toolData: Omit<MCPTool, 'type'>): Tool { + validateMCPtool({ ...toolData, type: 'mcp' }) + + return { + type: 'function', + function: { + name: 'mcp', + description: toolData.server_description || '', + parameters: {}, + }, + metadata: toolData, + } +} diff --git a/packages/typescript/ai-openai/src/tools/shell-tool.ts b/packages/typescript/ai-openai/src/tools/shell-tool.ts index 98f76fa07..30a1b57b1 100644 --- a/packages/typescript/ai-openai/src/tools/shell-tool.ts +++ b/packages/typescript/ai-openai/src/tools/shell-tool.ts @@ -1,28 +1,28 @@ -import type { Tool } from "@tanstack/ai"; -import OpenAI from "openai"; - -export
type ShellTool = OpenAI.Responses.FunctionShellTool - -/** - * Converts a standard Tool to OpenAI ShellTool format - */ -export function convertShellToolToAdapterFormat(_tool: Tool): ShellTool { - return { - type: "shell", - }; -} - -/** - * Creates a standard Tool from ShellTool parameters - */ -export function shellTool(): Tool { - return { - type: "function", - function: { - name: "shell", - description: "Execute shell commands", - parameters: {}, - }, - metadata: {}, - }; -} \ No newline at end of file +import type OpenAI from 'openai' +import type { Tool } from '@tanstack/ai' + +export type ShellTool = OpenAI.Responses.FunctionShellTool + +/** + * Converts a standard Tool to OpenAI ShellTool format + */ +export function convertShellToolToAdapterFormat(_tool: Tool): ShellTool { + return { + type: 'shell', + } +} + +/** + * Creates a standard Tool from ShellTool parameters + */ +export function shellTool(): Tool { + return { + type: 'function', + function: { + name: 'shell', + description: 'Execute shell commands', + parameters: {}, + }, + metadata: {}, + } +} diff --git a/packages/typescript/ai-openai/src/tools/tool-choice.ts b/packages/typescript/ai-openai/src/tools/tool-choice.ts index f648c4c40..db6e0b148 100644 --- a/packages/typescript/ai-openai/src/tools/tool-choice.ts +++ b/packages/typescript/ai-openai/src/tools/tool-choice.ts @@ -1,20 +1,31 @@ -export interface MCPToolChoice { - type: "mcp" - server_label: "deepwiki" -} - -export interface FunctionToolChoice { - type: "function" - name: string; -} - -export interface CustomToolChoice { - type: "custom" - name: string; -} - -export interface HostedToolChoice { - type: "file_search" | "web_search_preview" | "computer_use_preview" | "code_interpreter" | "image_generation" | "shell" | "apply_patch" -} - -export type ToolChoice = MCPToolChoice | FunctionToolChoice | CustomToolChoice | HostedToolChoice; \ No newline at end of file +interface MCPToolChoice { + type: 'mcp' + server_label: 'deepwiki' +} + 
+interface FunctionToolChoice { + type: 'function' + name: string +} + +interface CustomToolChoice { + type: 'custom' + name: string +} + +interface HostedToolChoice { + type: + | 'file_search' + | 'web_search_preview' + | 'computer_use_preview' + | 'code_interpreter' + | 'image_generation' + | 'shell' + | 'apply_patch' +} + +export type ToolChoice = + | MCPToolChoice + | FunctionToolChoice + | CustomToolChoice + | HostedToolChoice diff --git a/packages/typescript/ai-openai/src/tools/tool-converter.ts b/packages/typescript/ai-openai/src/tools/tool-converter.ts index 85bae5c0c..9262789c2 100644 --- a/packages/typescript/ai-openai/src/tools/tool-converter.ts +++ b/packages/typescript/ai-openai/src/tools/tool-converter.ts @@ -1,70 +1,72 @@ -import type { Tool } from "@tanstack/ai"; -import type { OpenAITool } from "./index"; -import { convertApplyPatchToolToAdapterFormat } from "./apply-patch-tool"; -import { convertCodeInterpreterToolToAdapterFormat } from "./code-interpreter-tool"; -import { convertComputerUseToolToAdapterFormat } from "./computer-use-tool"; -import { convertCustomToolToAdapterFormat } from "./custom-tool"; -import { convertFileSearchToolToAdapterFormat } from "./file-search-tool"; -import { convertFunctionToolToAdapterFormat } from "./function-tool"; -import { convertImageGenerationToolToAdapterFormat } from "./image-generation-tool"; -import { convertLocalShellToolToAdapterFormat } from "./local-shell-tool"; -import { convertMCPToolToAdapterFormat } from "./mcp-tool"; -import { convertShellToolToAdapterFormat } from "./shell-tool"; -import { convertWebSearchPreviewToolToAdapterFormat } from "./web-search-preview-tool"; -import { convertWebSearchToolToAdapterFormat } from "./web-search-tool"; - -/** - * Converts an array of standard Tools to OpenAI-specific format - */ -export function convertToolsToProviderFormat(tools: Array<Tool>): Array<OpenAITool> { - return tools.map((tool) => { - // Special tool names that map to specific OpenAI tool types - const
specialToolNames = new Set([ - "apply_patch", - "code_interpreter", - "computer_use_preview", - "file_search", - "image_generation", - "local_shell", - "mcp", - "shell", - "web_search_preview", - "web_search", - "custom", - ]); - - const toolName = tool.function.name; - - // If it's a special tool name, route to the appropriate converter - if (specialToolNames.has(toolName)) { - switch (toolName) { - case "apply_patch": - return convertApplyPatchToolToAdapterFormat(tool); - case "code_interpreter": - return convertCodeInterpreterToolToAdapterFormat(tool); - case "computer_use_preview": - return convertComputerUseToolToAdapterFormat(tool); - case "file_search": - return convertFileSearchToolToAdapterFormat(tool); - case "image_generation": - return convertImageGenerationToolToAdapterFormat(tool); - case "local_shell": - return convertLocalShellToolToAdapterFormat(tool); - case "mcp": - return convertMCPToolToAdapterFormat(tool); - case "shell": - return convertShellToolToAdapterFormat(tool); - case "web_search_preview": - return convertWebSearchPreviewToolToAdapterFormat(tool); - case "web_search": - return convertWebSearchToolToAdapterFormat(tool); - case "custom": - return convertCustomToolToAdapterFormat(tool); - } - } - - // For regular function tools (not special names), convert as function tool - // This handles tools like "getGuitars", "recommendGuitar", etc. 
- return convertFunctionToolToAdapterFormat(tool); - }); -} +import { convertApplyPatchToolToAdapterFormat } from './apply-patch-tool' +import { convertCodeInterpreterToolToAdapterFormat } from './code-interpreter-tool' +import { convertComputerUseToolToAdapterFormat } from './computer-use-tool' +import { convertCustomToolToAdapterFormat } from './custom-tool' +import { convertFileSearchToolToAdapterFormat } from './file-search-tool' +import { convertFunctionToolToAdapterFormat } from './function-tool' +import { convertImageGenerationToolToAdapterFormat } from './image-generation-tool' +import { convertLocalShellToolToAdapterFormat } from './local-shell-tool' +import { convertMCPToolToAdapterFormat } from './mcp-tool' +import { convertShellToolToAdapterFormat } from './shell-tool' +import { convertWebSearchPreviewToolToAdapterFormat } from './web-search-preview-tool' +import { convertWebSearchToolToAdapterFormat } from './web-search-tool' +import type { OpenAITool } from './index' +import type { Tool } from '@tanstack/ai' + +/** + * Converts an array of standard Tools to OpenAI-specific format + */ +export function convertToolsToProviderFormat( + tools: Array<Tool>, +): Array<OpenAITool> { + return tools.map((tool) => { + // Special tool names that map to specific OpenAI tool types + const specialToolNames = new Set([ + 'apply_patch', + 'code_interpreter', + 'computer_use_preview', + 'file_search', + 'image_generation', + 'local_shell', + 'mcp', + 'shell', + 'web_search_preview', + 'web_search', + 'custom', + ]) + + const toolName = tool.function.name + + // If it's a special tool name, route to the appropriate converter + if (specialToolNames.has(toolName)) { + switch (toolName) { + case 'apply_patch': + return convertApplyPatchToolToAdapterFormat(tool) + case 'code_interpreter': + return convertCodeInterpreterToolToAdapterFormat(tool) + case 'computer_use_preview': + return convertComputerUseToolToAdapterFormat(tool) + case 'file_search': + return
convertFileSearchToolToAdapterFormat(tool) + case 'image_generation': + return convertImageGenerationToolToAdapterFormat(tool) + case 'local_shell': + return convertLocalShellToolToAdapterFormat(tool) + case 'mcp': + return convertMCPToolToAdapterFormat(tool) + case 'shell': + return convertShellToolToAdapterFormat(tool) + case 'web_search_preview': + return convertWebSearchPreviewToolToAdapterFormat(tool) + case 'web_search': + return convertWebSearchToolToAdapterFormat(tool) + case 'custom': + return convertCustomToolToAdapterFormat(tool) + } + } + + // For regular function tools (not special names), convert as function tool + // This handles tools like "getGuitars", "recommendGuitar", etc. + return convertFunctionToolToAdapterFormat(tool) + }) +} diff --git a/packages/typescript/ai-openai/src/tools/web-search-preview-tool.ts b/packages/typescript/ai-openai/src/tools/web-search-preview-tool.ts index e7d037e57..e68e21e29 100644 --- a/packages/typescript/ai-openai/src/tools/web-search-preview-tool.ts +++ b/packages/typescript/ai-openai/src/tools/web-search-preview-tool.ts @@ -1,34 +1,33 @@ -import type { Tool } from "@tanstack/ai"; -import OpenAI from "openai"; - -export type WebSearchPreviewTool = OpenAI.Responses.WebSearchPreviewTool - -/** - * Converts a standard Tool to OpenAI WebSearchPreviewTool format - */ -export function convertWebSearchPreviewToolToAdapterFormat(tool: Tool): WebSearchPreviewTool { - const metadata = tool.metadata as WebSearchPreviewTool; - return { - type: metadata.type, - search_context_size: metadata.search_context_size, - user_location: metadata.user_location, - }; -} - -/** - * Creates a standard Tool from WebSearchPreviewTool parameters - */ -export function webSearchPreviewTool( - toolData: WebSearchPreviewTool -): Tool { - - return { - type: "function", - function: { - name: "web_search_preview", - description: "Search the web (preview version)", - parameters: {}, - }, - metadata: toolData - }; -} \ No newline at end of file 
+import type OpenAI from 'openai' +import type { Tool } from '@tanstack/ai' + +export type WebSearchPreviewTool = OpenAI.Responses.WebSearchPreviewTool + +/** + * Converts a standard Tool to OpenAI WebSearchPreviewTool format + */ +export function convertWebSearchPreviewToolToAdapterFormat( + tool: Tool, +): WebSearchPreviewTool { + const metadata = tool.metadata as WebSearchPreviewTool + return { + type: metadata.type, + search_context_size: metadata.search_context_size, + user_location: metadata.user_location, + } +} + +/** + * Creates a standard Tool from WebSearchPreviewTool parameters + */ +export function webSearchPreviewTool(toolData: WebSearchPreviewTool): Tool { + return { + type: 'function', + function: { + name: 'web_search_preview', + description: 'Search the web (preview version)', + parameters: {}, + }, + metadata: toolData, + } +} diff --git a/packages/typescript/ai-openai/src/tools/web-search-tool.ts b/packages/typescript/ai-openai/src/tools/web-search-tool.ts index 90b10148d..baa0d7713 100644 --- a/packages/typescript/ai-openai/src/tools/web-search-tool.ts +++ b/packages/typescript/ai-openai/src/tools/web-search-tool.ts @@ -1,29 +1,27 @@ -import type { Tool } from "@tanstack/ai"; -import OpenAI from "openai"; - -export type WebSearchTool = OpenAI.Responses.WebSearchTool - -/** - * Converts a standard Tool to OpenAI WebSearchTool format - */ -export function convertWebSearchToolToAdapterFormat(tool: Tool): WebSearchTool { - const metadata = tool.metadata as WebSearchTool; - return metadata -} - -/** - * Creates a standard Tool from WebSearchTool parameters - */ -export function webSearchTool( - toolData: WebSearchTool -): Tool { - return { - type: "function", - function: { - name: "web_search", - description: "Search the web", - parameters: {}, - }, - metadata: toolData - }; -} \ No newline at end of file +import type OpenAI from 'openai' +import type { Tool } from '@tanstack/ai' + +export type WebSearchTool = OpenAI.Responses.WebSearchTool + +/** + 
* Converts a standard Tool to OpenAI WebSearchTool format + */ +export function convertWebSearchToolToAdapterFormat(tool: Tool): WebSearchTool { + const metadata = tool.metadata as WebSearchTool + return metadata +} + +/** + * Creates a standard Tool from WebSearchTool parameters + */ +export function webSearchTool(toolData: WebSearchTool): Tool { + return { + type: 'function', + function: { + name: 'web_search', + description: 'Search the web', + parameters: {}, + }, + metadata: toolData, + } +} diff --git a/packages/typescript/ai-openai/tests/model-meta.test.ts b/packages/typescript/ai-openai/tests/model-meta.test.ts index ba53c190d..a725764a6 100644 --- a/packages/typescript/ai-openai/tests/model-meta.test.ts +++ b/packages/typescript/ai-openai/tests/model-meta.test.ts @@ -1,620 +1,796 @@ -import { describe, it, expectTypeOf } from "vitest"; -import type { - OpenAIChatModelProviderOptionsByName, -} from "../src/model-meta"; -import type { - OpenAIBaseOptions, - OpenAIReasoningOptions, - OpenAIStructuredOutputOptions, - OpenAIToolsOptions, - OpenAIStreamingOptions, - OpenAIMetadataOptions, -} from "../src/text/text-provider-options"; - -/** - * Type assertion tests for OpenAI model provider options. - * - * These tests verify that: - * 1. Models with reasoning support have OpenAIReasoningOptions in their provider options - * 2. Models without reasoning support do NOT have OpenAIReasoningOptions - * 3. Models with structured output support have OpenAIStructuredOutputOptions - * 4. Models without structured output support do NOT have OpenAIStructuredOutputOptions - * 5. Models with tools support have OpenAIToolsOptions - * 6. 
All chat models have base options (OpenAIBaseOptions, OpenAIMetadataOptions) - */ - -// Base options that ALL chat models should have -type BaseOptions = OpenAIBaseOptions & OpenAIMetadataOptions; - -// Full featured model options (reasoning + structured output + tools + streaming) -type FullFeaturedOptions = BaseOptions & - OpenAIReasoningOptions & - OpenAIStructuredOutputOptions & - OpenAIToolsOptions & - OpenAIStreamingOptions; - -// Standard model options (structured output + tools + streaming, no reasoning) -type StandardOptions = BaseOptions & - OpenAIStructuredOutputOptions & - OpenAIToolsOptions & - OpenAIStreamingOptions; - -// Reasoning-only model options (reasoning but no tools/structured output streaming) -type ReasoningOnlyOptions = BaseOptions & OpenAIReasoningOptions; - -describe("OpenAI Chat Model Provider Options Type Assertions", () => { - describe("Models WITH reasoning AND structured output AND tools support (Full Featured)", () => { - it("gpt-5.1 should support all features", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-5.1"]; - - // Should have reasoning options - expectTypeOf<Options>().toExtend<OpenAIReasoningOptions>(); - - // Should have structured output options - expectTypeOf<Options>().toExtend<OpenAIStructuredOutputOptions>(); - - // Should have tools options - expectTypeOf<Options>().toExtend<OpenAIToolsOptions>(); - - // Should have streaming options - expectTypeOf<Options>().toExtend<OpenAIStreamingOptions>(); - - // Should have base options - expectTypeOf<Options>().toExtend<BaseOptions>(); - - // Verify specific properties exist - expectTypeOf<Options>().toHaveProperty("reasoning"); - expectTypeOf<Options>().toHaveProperty("text"); - expectTypeOf<Options>().toHaveProperty("tool_choice"); - expectTypeOf<Options>().toHaveProperty("stream_options"); - expectTypeOf<Options>().toHaveProperty("metadata"); - expectTypeOf<Options>().toHaveProperty("store"); - }); - - it("gpt-5.1-codex should support all features", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-5.1-codex"]; - - expectTypeOf<Options>().toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStreamingOptions>(); -
expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("gpt-5 should support all features", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-5"]; - - expectTypeOf<Options>().toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("gpt-5-pro should support all features", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-5-pro"]; - - expectTypeOf<Options>().toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - }); - - describe("Models WITH structured output AND tools but WITHOUT reasoning (Standard)", () => { - it("gpt-5-mini should have structured output and tools but NOT reasoning", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-5-mini"]; - - // Should NOT have reasoning options - expectTypeOf<Options>().not.toExtend<OpenAIReasoningOptions>(); - - // Should have structured output options - expectTypeOf<Options>().toExtend<OpenAIStructuredOutputOptions>(); - - // Should have tools options - expectTypeOf<Options>().toExtend<OpenAIToolsOptions>(); - - // Should have streaming options - expectTypeOf<Options>().toExtend<OpenAIStreamingOptions>(); - - // Should have base options - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("gpt-5-nano should have structured output and tools but NOT reasoning", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-5-nano"]; - - expectTypeOf<Options>().not.toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("gpt-5-codex should have structured output and tools but NOT reasoning", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-5-codex"]; - - expectTypeOf<Options>().not.toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("gpt-4.1 should have structured output and tools but NOT reasoning", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-4.1"]; - - expectTypeOf<Options>().not.toExtend<OpenAIReasoningOptions>(); -
expectTypeOf<Options>().toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("gpt-4.1-mini should have structured output and tools but NOT reasoning", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-4.1-mini"]; - - expectTypeOf<Options>().not.toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("gpt-4.1-nano should have structured output and tools but NOT reasoning", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-4.1-nano"]; - - expectTypeOf<Options>().not.toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("gpt-4o should have structured output and tools but NOT reasoning", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-4o"]; - - expectTypeOf<Options>().not.toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("gpt-4o-mini should have structured output and tools but NOT reasoning", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-4o-mini"]; - - expectTypeOf<Options>().not.toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - }); - - describe("Models WITH reasoning but LIMITED other features (Reasoning Models)", () => { - it("o3 should have reasoning but NOT structured output or tools", () => { - type Options = OpenAIChatModelProviderOptionsByName["o3"]; - - // Should have reasoning options - expectTypeOf<Options>().toExtend<OpenAIReasoningOptions>(); - - // Should NOT have structured output options - expectTypeOf<Options>().not.toExtend<OpenAIStructuredOutputOptions>(); - - // Should NOT have tools options - expectTypeOf<Options>().not.toExtend<OpenAIToolsOptions>(); - - // Should NOT have streaming options - expectTypeOf<Options>().not.toExtend<OpenAIStreamingOptions>(); - - // Should have base options - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("o3-pro should have
reasoning but NOT structured output or tools", () => { - type Options = OpenAIChatModelProviderOptionsByName["o3-pro"]; - - expectTypeOf<Options>().toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("o3-mini should have reasoning but NOT structured output or tools", () => { - type Options = OpenAIChatModelProviderOptionsByName["o3-mini"]; - - expectTypeOf<Options>().toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("o4-mini should have reasoning but NOT structured output or tools", () => { - type Options = OpenAIChatModelProviderOptionsByName["o4-mini"]; - - expectTypeOf<Options>().toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("o3-deep-research should have reasoning but NOT structured output or tools", () => { - type Options = OpenAIChatModelProviderOptionsByName["o3-deep-research"]; - - expectTypeOf<Options>().toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("o4-mini-deep-research should have reasoning but NOT structured output or tools", () => { - type Options = OpenAIChatModelProviderOptionsByName["o4-mini-deep-research"]; - - expectTypeOf<Options>().toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("o1 should have reasoning but NOT structured output or tools", () => { - type Options = OpenAIChatModelProviderOptionsByName["o1"]; - - expectTypeOf<Options>().toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("o1-pro should have reasoning but NOT structured output or tools", () => { - type Options =
OpenAIChatModelProviderOptionsByName["o1-pro"]; - - expectTypeOf<Options>().toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - }); - - describe("Models WITH tools but WITHOUT structured output or reasoning (Legacy Models)", () => { - it("gpt-4 should have tools and streaming but NOT reasoning or structured output", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-4"]; - - expectTypeOf<Options>().not.toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("gpt-4-turbo should have tools and streaming but NOT reasoning or structured output", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-4-turbo"]; - - expectTypeOf<Options>().not.toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("gpt-3.5-turbo should have tools and streaming but NOT reasoning or structured output", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-3.5-turbo"]; - - expectTypeOf<Options>().not.toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - }); - - describe("Models WITH minimal features (Basic Models)", () => { - it("chatgpt-4.0 should only have streaming and base options", () => { - type Options = OpenAIChatModelProviderOptionsByName["chatgpt-4.0"]; - - expectTypeOf<Options>().not.toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - - it("gpt-audio should only have streaming and base options", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-audio"]; - - expectTypeOf<Options>().not.toExtend<OpenAIReasoningOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIStructuredOutputOptions>(); - expectTypeOf<Options>().not.toExtend<OpenAIToolsOptions>(); - expectTypeOf<Options>().toExtend<OpenAIStreamingOptions>(); - expectTypeOf<Options>().toExtend<BaseOptions>(); - }); - -
it("gpt-audio-mini should only have streaming and base options", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-audio-mini"]; - - expectTypeOf().not.toExtend(); - expectTypeOf().not.toExtend(); - expectTypeOf().not.toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - }); - - it("gpt-4o-audio should only have streaming and base options", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-4o-audio"]; - - expectTypeOf().not.toExtend(); - expectTypeOf().not.toExtend(); - expectTypeOf().not.toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - }); - - it("gpt-4o-mini-audio should only have streaming and base options", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-4o-mini-audio"]; - - expectTypeOf().not.toExtend(); - expectTypeOf().not.toExtend(); - expectTypeOf().not.toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - }); - }); - - describe("Chat-only models WITH reasoning AND structured output but WITHOUT tools", () => { - it("gpt-5.1-chat should have reasoning and structured output but NOT tools", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-5.1-chat"]; - - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().not.toExtend(); - expectTypeOf().not.toExtend(); - expectTypeOf().toExtend(); - }); - - it("gpt-5-chat should have reasoning and structured output but NOT tools", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-5-chat"]; - - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().not.toExtend(); - expectTypeOf().not.toExtend(); - expectTypeOf().toExtend(); - }); - }); - - describe("Codex/Preview models", () => { - it("gpt-5.1-codex-mini should have structured output and tools", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-5.1-codex-mini"]; - - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - 
expectTypeOf().toExtend(); - }); - - it("codex-mini-latest should have structured output and tools", () => { - type Options = OpenAIChatModelProviderOptionsByName["codex-mini-latest"]; - - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - }); - - it("gpt-4o-search-preview should have structured output and tools", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-4o-search-preview"]; - - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - }); - - it("gpt-4o-mini-search-preview should have structured output and tools", () => { - type Options = OpenAIChatModelProviderOptionsByName["gpt-4o-mini-search-preview"]; - - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - }); - - it("computer-use-preview should have tools but NOT structured output", () => { - type Options = OpenAIChatModelProviderOptionsByName["computer-use-preview"]; - - expectTypeOf().not.toExtend(); - expectTypeOf().not.toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - }); - }); - - describe("Provider options type completeness", () => { - it("OpenAIChatModelProviderOptionsByName should have entries for all chat models", () => { - type Keys = keyof OpenAIChatModelProviderOptionsByName; - - // Full featured models - expectTypeOf<"gpt-5.1">().toExtend(); - expectTypeOf<"gpt-5.1-codex">().toExtend(); - expectTypeOf<"gpt-5">().toExtend(); - expectTypeOf<"gpt-5-pro">().toExtend(); - - // Standard models (structured output + tools, no reasoning) - expectTypeOf<"gpt-5-mini">().toExtend(); - expectTypeOf<"gpt-5-nano">().toExtend(); - expectTypeOf<"gpt-5-codex">().toExtend(); - expectTypeOf<"gpt-4.1">().toExtend(); - expectTypeOf<"gpt-4.1-mini">().toExtend(); - expectTypeOf<"gpt-4.1-nano">().toExtend(); - expectTypeOf<"gpt-4o">().toExtend(); - 
expectTypeOf<"gpt-4o-mini">().toExtend(); - - // Reasoning-only models - expectTypeOf<"o3">().toExtend(); - expectTypeOf<"o3-pro">().toExtend(); - expectTypeOf<"o3-mini">().toExtend(); - expectTypeOf<"o4-mini">().toExtend(); - expectTypeOf<"o3-deep-research">().toExtend(); - expectTypeOf<"o4-mini-deep-research">().toExtend(); - expectTypeOf<"o1">().toExtend(); - expectTypeOf<"o1-pro">().toExtend(); - - // Legacy models - expectTypeOf<"gpt-4">().toExtend(); - expectTypeOf<"gpt-4-turbo">().toExtend(); - expectTypeOf<"gpt-3.5-turbo">().toExtend(); - - // Basic models - expectTypeOf<"chatgpt-4.0">().toExtend(); - expectTypeOf<"gpt-audio">().toExtend(); - expectTypeOf<"gpt-audio-mini">().toExtend(); - expectTypeOf<"gpt-4o-audio">().toExtend(); - expectTypeOf<"gpt-4o-mini-audio">().toExtend(); - - // Chat-only models - expectTypeOf<"gpt-5.1-chat">().toExtend(); - expectTypeOf<"gpt-5-chat">().toExtend(); - - // Codex/Preview models - expectTypeOf<"gpt-5.1-codex-mini">().toExtend(); - expectTypeOf<"codex-mini-latest">().toExtend(); - expectTypeOf<"gpt-4o-search-preview">().toExtend(); - expectTypeOf<"gpt-4o-mini-search-preview">().toExtend(); - expectTypeOf<"computer-use-preview">().toExtend(); - }); - }); - - describe("Detailed property type assertions", () => { - it("all models should have metadata option", () => { - expectTypeOf().toHaveProperty("metadata"); - expectTypeOf().toHaveProperty("metadata"); - expectTypeOf().toHaveProperty("metadata"); - expectTypeOf().toHaveProperty("metadata"); - expectTypeOf().toHaveProperty("metadata"); - expectTypeOf().toHaveProperty("metadata"); - expectTypeOf().toHaveProperty("metadata"); - expectTypeOf().toHaveProperty("metadata"); - expectTypeOf().toHaveProperty("metadata"); - expectTypeOf().toHaveProperty("metadata"); - }); - - it("all models should have store option", () => { - expectTypeOf().toHaveProperty("store"); - expectTypeOf().toHaveProperty("store"); - expectTypeOf().toHaveProperty("store"); - 
expectTypeOf().toHaveProperty("store"); - expectTypeOf().toHaveProperty("store"); - expectTypeOf().toHaveProperty("store"); - expectTypeOf().toHaveProperty("store"); - expectTypeOf().toHaveProperty("store"); - expectTypeOf().toHaveProperty("store"); - expectTypeOf().toHaveProperty("store"); - }); - - it("all models should have service_tier option", () => { - expectTypeOf().toHaveProperty("service_tier"); - expectTypeOf().toHaveProperty("service_tier"); - expectTypeOf().toHaveProperty("service_tier"); - expectTypeOf().toHaveProperty("service_tier"); - expectTypeOf().toHaveProperty("service_tier"); - expectTypeOf().toHaveProperty("service_tier"); - expectTypeOf().toHaveProperty("service_tier"); - expectTypeOf().toHaveProperty("service_tier"); - expectTypeOf().toHaveProperty("service_tier"); - expectTypeOf().toHaveProperty("service_tier"); - }); - - it("models with tools support should have tool_choice and parallel_tool_calls", () => { - expectTypeOf().toHaveProperty("tool_choice"); - expectTypeOf().toHaveProperty("parallel_tool_calls"); - expectTypeOf().toHaveProperty("tool_choice"); - expectTypeOf().toHaveProperty("tool_choice"); - expectTypeOf().toHaveProperty("tool_choice"); - expectTypeOf().toHaveProperty("tool_choice"); - expectTypeOf().toHaveProperty("tool_choice"); - expectTypeOf().toHaveProperty("tool_choice"); - }); - - it("models with structured output should have text option", () => { - expectTypeOf().toHaveProperty("text"); - expectTypeOf().toHaveProperty("text"); - expectTypeOf().toHaveProperty("text"); - expectTypeOf().toHaveProperty("text"); - expectTypeOf().toHaveProperty("text"); - expectTypeOf().toHaveProperty("text"); - expectTypeOf().toHaveProperty("text"); - }); - - it("models with streaming should have stream_options", () => { - expectTypeOf().toHaveProperty("stream_options"); - expectTypeOf().toHaveProperty("stream_options"); - expectTypeOf().toHaveProperty("stream_options"); - expectTypeOf().toHaveProperty("stream_options"); - 
expectTypeOf().toHaveProperty("stream_options"); - expectTypeOf().toHaveProperty("stream_options"); - expectTypeOf().toHaveProperty("stream_options"); - }); - }); - - describe("Type discrimination between model categories", () => { - it("full featured models should extend all options", () => { - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - }); - - it("standard models should NOT extend reasoning options but should extend structured output and tools", () => { - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - - // Verify these do NOT extend reasoning options (discrimination already tested above) - expectTypeOf().not.toExtend(); - expectTypeOf().not.toExtend(); - expectTypeOf().not.toExtend(); - }); - - it("reasoning-only models should extend reasoning options but NOT structured output or tools", () => { - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - - // Verify these do NOT extend structured output or tools options - expectTypeOf().not.toExtend(); - expectTypeOf().not.toExtend(); - expectTypeOf().not.toExtend(); - }); - - it("all models should extend base options", () => { - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - expectTypeOf().toExtend(); - }); - }); -}); +import { describe, it, expectTypeOf } from 'vitest' +import type { OpenAIChatModelProviderOptionsByName } from '../src/model-meta' +import type { + OpenAIBaseOptions, + 
OpenAIReasoningOptions, + OpenAIStructuredOutputOptions, + OpenAIToolsOptions, + OpenAIStreamingOptions, + OpenAIMetadataOptions, +} from '../src/text/text-provider-options' + +/** + * Type assertion tests for OpenAI model provider options. + * + * These tests verify that: + * 1. Models with reasoning support have OpenAIReasoningOptions in their provider options + * 2. Models without reasoning support do NOT have OpenAIReasoningOptions + * 3. Models with structured output support have OpenAIStructuredOutputOptions + * 4. Models without structured output support do NOT have OpenAIStructuredOutputOptions + * 5. Models with tools support have OpenAIToolsOptions + * 6. All chat models have base options (OpenAIBaseOptions, OpenAIMetadataOptions) + */ + +// Base options that ALL chat models should have +type BaseOptions = OpenAIBaseOptions & OpenAIMetadataOptions + +// Full featured model options (reasoning + structured output + tools + streaming) +type FullFeaturedOptions = BaseOptions & + OpenAIReasoningOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions + +// Standard model options (structured output + tools + streaming, no reasoning) +type StandardOptions = BaseOptions & + OpenAIStructuredOutputOptions & + OpenAIToolsOptions & + OpenAIStreamingOptions + +// Reasoning-only model options (reasoning but no tools/structured output streaming) +type ReasoningOnlyOptions = BaseOptions & OpenAIReasoningOptions + +describe('OpenAI Chat Model Provider Options Type Assertions', () => { + describe('Models WITH reasoning AND structured output AND tools support (Full Featured)', () => { + it('gpt-5.1 should support all features', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-5.1'] + + // Should have reasoning options + expectTypeOf().toExtend() + + // Should have structured output options + expectTypeOf().toExtend() + + // Should have tools options + expectTypeOf().toExtend() + + // Should have streaming options + 
expectTypeOf().toExtend() + + // Should have base options + expectTypeOf().toExtend() + + // Verify specific properties exist + expectTypeOf().toHaveProperty('reasoning') + expectTypeOf().toHaveProperty('text') + expectTypeOf().toHaveProperty('tool_choice') + expectTypeOf().toHaveProperty('stream_options') + expectTypeOf().toHaveProperty('metadata') + expectTypeOf().toHaveProperty('store') + }) + + it('gpt-5.1-codex should support all features', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-5.1-codex'] + + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + + it('gpt-5 should support all features', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-5'] + + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + + it('gpt-5-pro should support all features', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-5-pro'] + + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + }) + + describe('Models WITH structured output AND tools but WITHOUT reasoning (Standard)', () => { + it('gpt-5-mini should have structured output and tools but NOT reasoning', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-5-mini'] + + // Should NOT have reasoning options + expectTypeOf().not.toExtend() + + // Should have structured output options + expectTypeOf().toExtend() + + // Should have tools options + expectTypeOf().toExtend() + + // Should have streaming options + expectTypeOf().toExtend() + + // Should have base options + expectTypeOf().toExtend() + }) + + it('gpt-5-nano should have structured output and tools but NOT reasoning', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-5-nano'] + + expectTypeOf().not.toExtend() + 
expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + + it('gpt-5-codex should have structured output and tools but NOT reasoning', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-5-codex'] + + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + + it('gpt-4.1 should have structured output and tools but NOT reasoning', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-4.1'] + + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + + it('gpt-4.1-mini should have structured output and tools but NOT reasoning', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-4.1-mini'] + + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + + it('gpt-4.1-nano should have structured output and tools but NOT reasoning', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-4.1-nano'] + + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + + it('gpt-4o should have structured output and tools but NOT reasoning', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-4o'] + + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + + it('gpt-4o-mini should have structured output and tools but NOT reasoning', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-4o-mini'] + + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + }) + + describe('Models WITH reasoning but 
LIMITED other features (Reasoning Models)', () => { + it('o3 should have reasoning but NOT structured output or tools', () => { + type Options = OpenAIChatModelProviderOptionsByName['o3'] + + // Should have reasoning options + expectTypeOf().toExtend() + + // Should NOT have structured output options + expectTypeOf().not.toExtend() + + // Should NOT have tools options + expectTypeOf().not.toExtend() + + // Should NOT have streaming options + expectTypeOf().not.toExtend() + + // Should have base options + expectTypeOf().toExtend() + }) + + it('o3-pro should have reasoning but NOT structured output or tools', () => { + type Options = OpenAIChatModelProviderOptionsByName['o3-pro'] + + expectTypeOf().toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + }) + + it('o3-mini should have reasoning but NOT structured output or tools', () => { + type Options = OpenAIChatModelProviderOptionsByName['o3-mini'] + + expectTypeOf().toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + }) + + it('o4-mini should have reasoning but NOT structured output or tools', () => { + type Options = OpenAIChatModelProviderOptionsByName['o4-mini'] + + expectTypeOf().toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + }) + + it('o3-deep-research should have reasoning but NOT structured output or tools', () => { + type Options = OpenAIChatModelProviderOptionsByName['o3-deep-research'] + + expectTypeOf().toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + }) + + it('o4-mini-deep-research should have reasoning but NOT structured output or tools', () => { + type Options = + OpenAIChatModelProviderOptionsByName['o4-mini-deep-research'] + + expectTypeOf().toExtend() + 
expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + }) + + it('o1 should have reasoning but NOT structured output or tools', () => { + type Options = OpenAIChatModelProviderOptionsByName['o1'] + + expectTypeOf().toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + }) + + it('o1-pro should have reasoning but NOT structured output or tools', () => { + type Options = OpenAIChatModelProviderOptionsByName['o1-pro'] + + expectTypeOf().toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + }) + }) + + describe('Models WITH tools but WITHOUT structured output or reasoning (Legacy Models)', () => { + it('gpt-4 should have tools and streaming but NOT reasoning or structured output', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-4'] + + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + + it('gpt-4-turbo should have tools and streaming but NOT reasoning or structured output', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-4-turbo'] + + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + + it('gpt-3.5-turbo should have tools and streaming but NOT reasoning or structured output', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-3.5-turbo'] + + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + }) + + describe('Models WITH minimal features (Basic Models)', () => { + it('chatgpt-4.0 should only have streaming and base options', () => { + type Options = 
OpenAIChatModelProviderOptionsByName['chatgpt-4.0'] + + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + + it('gpt-audio should only have streaming and base options', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-audio'] + + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + + it('gpt-audio-mini should only have streaming and base options', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-audio-mini'] + + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + + it('gpt-4o-audio should only have streaming and base options', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-4o-audio'] + + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + + it('gpt-4o-mini-audio should only have streaming and base options', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-4o-mini-audio'] + + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + }) + + describe('Chat-only models WITH reasoning AND structured output but WITHOUT tools', () => { + it('gpt-5.1-chat should have reasoning and structured output but NOT tools', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-5.1-chat'] + + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + }) + + it('gpt-5-chat should have reasoning and structured output but NOT tools', () => { + type Options = 
OpenAIChatModelProviderOptionsByName['gpt-5-chat'] + + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + }) + }) + + describe('Codex/Preview models', () => { + it('gpt-5.1-codex-mini should have structured output and tools', () => { + type Options = OpenAIChatModelProviderOptionsByName['gpt-5.1-codex-mini'] + + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + + it('codex-mini-latest should have structured output and tools', () => { + type Options = OpenAIChatModelProviderOptionsByName['codex-mini-latest'] + + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + + it('gpt-4o-search-preview should have structured output and tools', () => { + type Options = + OpenAIChatModelProviderOptionsByName['gpt-4o-search-preview'] + + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + + it('gpt-4o-mini-search-preview should have structured output and tools', () => { + type Options = + OpenAIChatModelProviderOptionsByName['gpt-4o-mini-search-preview'] + + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + + it('computer-use-preview should have tools but NOT structured output', () => { + type Options = + OpenAIChatModelProviderOptionsByName['computer-use-preview'] + + expectTypeOf().not.toExtend() + expectTypeOf().not.toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + expectTypeOf().toExtend() + }) + }) + + describe('Provider options type completeness', () => { + it('OpenAIChatModelProviderOptionsByName should have entries for all chat models', () => { + type Keys = keyof OpenAIChatModelProviderOptionsByName + + // Full featured models + expectTypeOf<'gpt-5.1'>().toExtend() + 
expectTypeOf<'gpt-5.1-codex'>().toExtend() + expectTypeOf<'gpt-5'>().toExtend() + expectTypeOf<'gpt-5-pro'>().toExtend() + + // Standard models (structured output + tools, no reasoning) + expectTypeOf<'gpt-5-mini'>().toExtend() + expectTypeOf<'gpt-5-nano'>().toExtend() + expectTypeOf<'gpt-5-codex'>().toExtend() + expectTypeOf<'gpt-4.1'>().toExtend() + expectTypeOf<'gpt-4.1-mini'>().toExtend() + expectTypeOf<'gpt-4.1-nano'>().toExtend() + expectTypeOf<'gpt-4o'>().toExtend() + expectTypeOf<'gpt-4o-mini'>().toExtend() + + // Reasoning-only models + expectTypeOf<'o3'>().toExtend() + expectTypeOf<'o3-pro'>().toExtend() + expectTypeOf<'o3-mini'>().toExtend() + expectTypeOf<'o4-mini'>().toExtend() + expectTypeOf<'o3-deep-research'>().toExtend() + expectTypeOf<'o4-mini-deep-research'>().toExtend() + expectTypeOf<'o1'>().toExtend() + expectTypeOf<'o1-pro'>().toExtend() + + // Legacy models + expectTypeOf<'gpt-4'>().toExtend() + expectTypeOf<'gpt-4-turbo'>().toExtend() + expectTypeOf<'gpt-3.5-turbo'>().toExtend() + + // Basic models + expectTypeOf<'chatgpt-4.0'>().toExtend() + expectTypeOf<'gpt-audio'>().toExtend() + expectTypeOf<'gpt-audio-mini'>().toExtend() + expectTypeOf<'gpt-4o-audio'>().toExtend() + expectTypeOf<'gpt-4o-mini-audio'>().toExtend() + + // Chat-only models + expectTypeOf<'gpt-5.1-chat'>().toExtend() + expectTypeOf<'gpt-5-chat'>().toExtend() + + // Codex/Preview models + expectTypeOf<'gpt-5.1-codex-mini'>().toExtend() + expectTypeOf<'codex-mini-latest'>().toExtend() + expectTypeOf<'gpt-4o-search-preview'>().toExtend() + expectTypeOf<'gpt-4o-mini-search-preview'>().toExtend() + expectTypeOf<'computer-use-preview'>().toExtend() + }) + }) + + describe('Detailed property type assertions', () => { + it('all models should have metadata option', () => { + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5.1'] + >().toHaveProperty('metadata') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5'] + >().toHaveProperty('metadata') + expectTypeOf< + 
OpenAIChatModelProviderOptionsByName['gpt-5-mini'] + >().toHaveProperty('metadata') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4.1'] + >().toHaveProperty('metadata') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4o'] + >().toHaveProperty('metadata') + expectTypeOf().toHaveProperty( + 'metadata', + ) + expectTypeOf().toHaveProperty( + 'metadata', + ) + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4'] + >().toHaveProperty('metadata') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-3.5-turbo'] + >().toHaveProperty('metadata') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['chatgpt-4.0'] + >().toHaveProperty('metadata') + }) + + it('all models should have store option', () => { + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5.1'] + >().toHaveProperty('store') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5'] + >().toHaveProperty('store') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5-mini'] + >().toHaveProperty('store') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4.1'] + >().toHaveProperty('store') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4o'] + >().toHaveProperty('store') + expectTypeOf().toHaveProperty( + 'store', + ) + expectTypeOf().toHaveProperty( + 'store', + ) + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4'] + >().toHaveProperty('store') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-3.5-turbo'] + >().toHaveProperty('store') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['chatgpt-4.0'] + >().toHaveProperty('store') + }) + + it('all models should have service_tier option', () => { + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5.1'] + >().toHaveProperty('service_tier') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5'] + >().toHaveProperty('service_tier') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5-mini'] + >().toHaveProperty('service_tier') + 
expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4.1'] + >().toHaveProperty('service_tier') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4o'] + >().toHaveProperty('service_tier') + expectTypeOf().toHaveProperty( + 'service_tier', + ) + expectTypeOf().toHaveProperty( + 'service_tier', + ) + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4'] + >().toHaveProperty('service_tier') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-3.5-turbo'] + >().toHaveProperty('service_tier') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['chatgpt-4.0'] + >().toHaveProperty('service_tier') + }) + + it('models with tools support should have tool_choice and parallel_tool_calls', () => { + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5.1'] + >().toHaveProperty('tool_choice') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5.1'] + >().toHaveProperty('parallel_tool_calls') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5'] + >().toHaveProperty('tool_choice') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5-mini'] + >().toHaveProperty('tool_choice') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4.1'] + >().toHaveProperty('tool_choice') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4o'] + >().toHaveProperty('tool_choice') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4'] + >().toHaveProperty('tool_choice') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-3.5-turbo'] + >().toHaveProperty('tool_choice') + }) + + it('models with structured output should have text option', () => { + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5.1'] + >().toHaveProperty('text') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5'] + >().toHaveProperty('text') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5-mini'] + >().toHaveProperty('text') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4.1'] + 
>().toHaveProperty('text') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4o'] + >().toHaveProperty('text') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5.1-chat'] + >().toHaveProperty('text') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5-chat'] + >().toHaveProperty('text') + }) + + it('models with streaming should have stream_options', () => { + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5.1'] + >().toHaveProperty('stream_options') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5'] + >().toHaveProperty('stream_options') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5-mini'] + >().toHaveProperty('stream_options') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4.1'] + >().toHaveProperty('stream_options') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4o'] + >().toHaveProperty('stream_options') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4'] + >().toHaveProperty('stream_options') + expectTypeOf< + OpenAIChatModelProviderOptionsByName['chatgpt-4.0'] + >().toHaveProperty('stream_options') + }) + }) + + describe('Type discrimination between model categories', () => { + it('full featured models should extend all options', () => { + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5.1'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5.1-codex'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5-pro'] + >().toExtend() + }) + + it('standard models should NOT extend reasoning options but should extend structured output and tools', () => { + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5-mini'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5-nano'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4.1'] + >().toExtend() + expectTypeOf< 
+ OpenAIChatModelProviderOptionsByName['gpt-4.1-mini'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4o'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4o-mini'] + >().toExtend() + + // Verify these do NOT extend reasoning options (discrimination already tested above) + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5-mini'] + >().not.toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4.1'] + >().not.toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4o'] + >().not.toExtend() + }) + + it('reasoning-only models should extend reasoning options but NOT structured output or tools', () => { + expectTypeOf< + OpenAIChatModelProviderOptionsByName['o3'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['o3-pro'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['o3-mini'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['o4-mini'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['o1'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['o1-pro'] + >().toExtend() + + // Verify these do NOT extend structured output or tools options + expectTypeOf< + OpenAIChatModelProviderOptionsByName['o3'] + >().not.toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['o3'] + >().not.toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['o1'] + >().not.toExtend() + }) + + it('all models should extend base options', () => { + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5.1'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5-mini'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4.1'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4o'] + >().toExtend() + expectTypeOf< 
+ OpenAIChatModelProviderOptionsByName['o3'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['o1'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-4'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-3.5-turbo'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['chatgpt-4.0'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5.1-chat'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['gpt-5-chat'] + >().toExtend() + expectTypeOf< + OpenAIChatModelProviderOptionsByName['computer-use-preview'] + >().toExtend() + }) + }) +}) diff --git a/packages/typescript/ai-openai/tests/openai-adapter.test.ts b/packages/typescript/ai-openai/tests/openai-adapter.test.ts index e220161b3..9620438ff 100644 --- a/packages/typescript/ai-openai/tests/openai-adapter.test.ts +++ b/packages/typescript/ai-openai/tests/openai-adapter.test.ts @@ -1,180 +1,142 @@ -import { describe, it, expect, beforeEach, vi } from "vitest"; -import { chat, type Tool, type StreamChunk } from "@tanstack/ai"; -import { OpenAI, type OpenAIProviderOptions } from "../src/openai-adapter"; +import { describe, it, expect, beforeEach, vi } from 'vitest' +import { chat, type Tool, type StreamChunk } from '@tanstack/ai' +import { OpenAI, type OpenAIProviderOptions } from '../src/openai-adapter' -const createAdapter = () => new OpenAI({ apiKey: "test-key" }); +const createAdapter = () => new OpenAI({ apiKey: 'test-key' }) -const toolArguments = JSON.stringify({ location: "Berlin" }); +const toolArguments = JSON.stringify({ location: 'Berlin' }) const weatherTool: Tool = { - type: "function", + type: 'function', function: { - name: "lookup_weather", - description: "Return the forecast for a location", + name: 'lookup_weather', + description: 'Return the forecast for a location', parameters: { - type: "object", + type: 'object', properties: { - location: { type: "string" }, 
+ location: { type: 'string' }, }, - required: ["location"], + required: ['location'], }, }, -}; +} function createMockChatCompletionsStream( - chunks: Array> + chunks: Array>, ): AsyncIterable> { return { async *[Symbol.asyncIterator]() { for (const chunk of chunks) { - yield chunk; + yield chunk } }, - }; + } } -describe("OpenAI adapter option mapping", () => { +describe('OpenAI adapter option mapping', () => { beforeEach(() => { - vi.clearAllMocks(); - }); + vi.clearAllMocks() + }) - it("maps options into the Chat Completions payload", async () => { + it('maps options into the Responses API payload', async () => { + // Mock the Responses API event stream format const mockStream = createMockChatCompletionsStream([ { - id: "chatcmpl-123", - object: "chat.completion.chunk", - created: 1234567890, - model: "gpt-4o-mini", - choices: [ - { - index: 0, - delta: { content: "It is sunny" }, - finish_reason: null, - }, - ], + type: 'response.created', + response: { + id: 'resp-123', + model: 'gpt-4o-mini', + status: 'in_progress', + created_at: 1234567890, + }, }, { - id: "chatcmpl-123", - object: "chat.completion.chunk", - created: 1234567891, - model: "gpt-4o-mini", - choices: [ - { - index: 0, - delta: {}, - finish_reason: "stop", + type: 'response.content_part.added', + part: { + type: 'output_text', + text: 'It is sunny', + }, + }, + { + type: 'response.done', + response: { + id: 'resp-123', + model: 'gpt-4o-mini', + status: 'completed', + created_at: 1234567891, + usage: { + input_tokens: 12, + output_tokens: 4, }, - ], - usage: { - prompt_tokens: 12, - completion_tokens: 4, - total_tokens: 16, }, }, - ]); + ]) - const chatCompletionsCreate = vi.fn().mockResolvedValueOnce(mockStream); + const responsesCreate = vi.fn().mockResolvedValueOnce(mockStream) - const adapter = createAdapter(); + const adapter = createAdapter() // Replace the internal OpenAI SDK client with our mock - (adapter as any).client = { - chat: { - completions: { - create: chatCompletionsCreate, - 
}, + ;(adapter as any).client = { + responses: { + create: responsesCreate, }, - }; + } const providerOptions: OpenAIProviderOptions = { - tool_choice: "required", - }; + tool_choice: 'required', + } - const chunks: StreamChunk[] = []; + const chunks: StreamChunk[] = [] for await (const chunk of chat({ adapter, - model: "gpt-4o-mini", + model: 'gpt-4o-mini', messages: [ - { role: "system", content: "Stay concise" }, - { role: "user", content: "How is the weather?" }, + { role: 'system', content: 'Stay concise' }, + { role: 'user', content: 'How is the weather?' }, { - role: "assistant", - content: "Let me check", + role: 'assistant', + content: 'Let me check', toolCalls: [ { - id: "call_weather", - type: "function", - function: { name: "lookup_weather", arguments: toolArguments }, + id: 'call_weather', + type: 'function', + function: { name: 'lookup_weather', arguments: toolArguments }, }, ], }, - { role: "tool", toolCallId: "call_weather", content: '{"temp":72}' }, + { role: 'tool', toolCallId: 'call_weather', content: '{"temp":72}' }, ], tools: [weatherTool], options: { temperature: 0.25, topP: 0.6, maxTokens: 1024, - metadata: { requestId: "req-42" }, + metadata: { requestId: 'req-42' }, }, providerOptions, })) { - chunks.push(chunk); + chunks.push(chunk) } - expect(chatCompletionsCreate).toHaveBeenCalledTimes(1); - const [payload] = chatCompletionsCreate.mock.calls[0]; + expect(responsesCreate).toHaveBeenCalledTimes(1) + const [payload] = responsesCreate.mock.calls[0] + // Responses API uses different field names and structure expect(payload).toMatchObject({ - model: "gpt-4o-mini", + model: 'gpt-4o-mini', temperature: 0.25, top_p: 0.6, - max_tokens: 1024, + max_output_tokens: 1024, // Responses API uses max_output_tokens instead of max_tokens stream: true, - tools: [ - { - type: "function", - function: { - name: "lookup_weather", - parameters: { - type: "object", - properties: { - location: { type: "string" }, - }, - required: ["location"], - }, - }, - }, - ], 
- }); + tool_choice: 'required', // From providerOptions + }) - expect(payload.messages).toEqual([ - { - role: "system", - content: "Stay concise", - }, - { - role: "user", - content: "How is the weather?", - }, - { - role: "assistant", - content: "Let me check", - tool_calls: [ - { - id: "call_weather", - type: "function", - function: { - name: "lookup_weather", - arguments: toolArguments, - }, - }, - ], - }, - { - role: "tool", - tool_call_id: "call_weather", - content: '{"temp":72}', - }, - ]); - }); -}); + // Responses API uses 'input' instead of 'messages' + expect(payload.input).toBeDefined() + + // Verify tools are included + expect(payload.tools).toBeDefined() + expect(Array.isArray(payload.tools)).toBe(true) + expect(payload.tools.length).toBeGreaterThan(0) + }) +}) diff --git a/packages/typescript/ai-openai/tsconfig.json b/packages/typescript/ai-openai/tsconfig.json index 204ca8d3f..ea11c1096 100644 --- a/packages/typescript/ai-openai/tsconfig.json +++ b/packages/typescript/ai-openai/tsconfig.json @@ -5,6 +5,5 @@ "rootDir": "src" }, "include": ["src/**/*.ts", "src/**/*.tsx"], - "exclude": ["node_modules", "dist", "**/*.config.ts"], - "references": [{ "path": "../ai" }] + "exclude": ["node_modules", "dist", "**/*.config.ts"] } diff --git a/packages/typescript/ai-openai/tsdown.config.ts b/packages/typescript/ai-openai/tsdown.config.ts deleted file mode 100644 index 8008558a5..000000000 --- a/packages/typescript/ai-openai/tsdown.config.ts +++ /dev/null @@ -1,12 +0,0 @@ -import { defineConfig } from "tsdown"; - -export default defineConfig({ - entry: ["./src/index.ts"], - format: ["esm"], - unbundle: true, - dts: true, - sourcemap: true, - clean: true, - minify: false, - external: ["openai"], -}); diff --git a/packages/typescript/ai-openai/vite.config.ts b/packages/typescript/ai-openai/vite.config.ts new file mode 100644 index 000000000..e83c13eb9 --- /dev/null +++ b/packages/typescript/ai-openai/vite.config.ts @@ -0,0 +1,35 @@ +import { defineConfig, 
mergeConfig } from 'vitest/config' +import { tanstackViteConfig } from '@tanstack/config/vite' +import packageJson from './package.json' + +const config = defineConfig({ + test: { + name: packageJson.name, + dir: './', + watch: false, + globals: true, + environment: 'node', + include: ['tests/**/*.test.ts'], + coverage: { + provider: 'v8', + reporter: ['text', 'json', 'html', 'lcov'], + exclude: [ + 'node_modules/', + 'dist/', + 'tests/', + '**/*.test.ts', + '**/*.config.ts', + '**/types.ts', + ], + include: ['src/**/*.ts'], + }, + }, +}) + +export default mergeConfig( + config, + tanstackViteConfig({ + entry: ['./src/index.ts'], + srcDir: './src', + }), +) diff --git a/packages/typescript/ai-openai/vitest.config.ts b/packages/typescript/ai-openai/vitest.config.ts index 8fa8bfb9e..fa2531743 100644 --- a/packages/typescript/ai-openai/vitest.config.ts +++ b/packages/typescript/ai-openai/vitest.config.ts @@ -1,22 +1,22 @@ -import { defineConfig } from "vitest/config"; - -export default defineConfig({ - test: { - globals: true, - environment: "node", - include: ["tests/**/*.test.ts"], - coverage: { - provider: "v8", - reporter: ["text", "json", "html", "lcov"], - exclude: [ - "node_modules/", - "dist/", - "tests/", - "**/*.test.ts", - "**/*.config.ts", - "**/types.ts", - ], - include: ["src/**/*.ts"], - }, - }, -}); +import { defineConfig } from 'vitest/config' + +export default defineConfig({ + test: { + globals: true, + environment: 'node', + include: ['tests/**/*.test.ts'], + coverage: { + provider: 'v8', + reporter: ['text', 'json', 'html', 'lcov'], + exclude: [ + 'node_modules/', + 'dist/', + 'tests/', + '**/*.test.ts', + '**/*.config.ts', + '**/types.ts', + ], + include: ['src/**/*.ts'], + }, + }, +}) diff --git a/packages/typescript/ai-react-ui/README.md b/packages/typescript/ai-react-ui/README.md index 63e6c4835..7c4143074 100644 --- a/packages/typescript/ai-react-ui/README.md +++ b/packages/typescript/ai-react-ui/README.md @@ -1,282 +1,104 @@ -# 
@tanstack/ai-react-ui - -Headless React components for building AI chat interfaces with TanStack AI SDK. - -## Features - -🧩 **Parts-Based Messages** - Native support for TanStack AI's message parts (text, thinking, tool calls, results) -šŸ’­ **Thinking/Reasoning** - Collapsible thinking sections that auto-collapse when complete -šŸ” **Tool Approvals** - Built-in UI for tools that require user approval -šŸ’» **Client-Side Tools** - Execute tools in the browser without server round-trips -šŸŽØ **Headless & Customizable** - Fully unstyled with render props for complete control -⚔ **Type-Safe** - Full TypeScript support with proper inference - -## Installation - -```bash -pnpm add @tanstack/ai-react-ui -``` - -## Quick Start - -```tsx -import { Chat } from "@tanstack/ai-react-ui"; -import { fetchServerSentEvents } from "@tanstack/ai-react"; - -function MyChat() { - return ( - - - {(message) => } - - - - ); -} -``` - -## Core Concepts - -### Parts-Based Messages - -Unlike traditional chat libraries that treat messages as simple strings, TanStack AI uses **parts**: - -```typescript -{ - role: "assistant", - parts: [ - { - type: "thinking", - content: "The user wants a guitar recommendation..." 
- }, - { type: "text", content: "Here's a recommendation:" }, - { - type: "tool-call", - name: "recommendGuitar", - arguments: '{"id":"6"}', - state: "input-complete" - } - ] -} -``` - -This allows: - -- Multiple content types in one message (thinking, text, tool calls, results) -- Proper streaming of thinking/reasoning alongside text -- Collapsible thinking sections that auto-collapse when complete -- Proper streaming of tool calls alongside text -- State tracking for each part independently - -### Tool Approvals - -Tools can require user approval before execution: - -```tsx - { - // Client-side tool execution - if (toolName === "addToWishList") { - const wishList = JSON.parse(localStorage.getItem("wishList") || "[]"); - wishList.push(input.guitarId); - localStorage.setItem("wishList", JSON.stringify(wishList)); - return { success: true }; - } - }} -> - - {(message) => ( - - approval?.needsApproval ? ( - - ) : null, - }} - /> - )} - - -``` - -## API Reference - -### `` - -Root component that provides chat context to all subcomponents. - -**Props:** - -- `connection: ConnectionAdapter` - How to connect to your API -- `onToolCall?: (args) => Promise` - Handler for client-side tools -- `className?: string` - CSS class for root element -- All other `useChat` options - -### `` - -Renders the list of messages. - -**Props:** - -- `children?: (message, index) => ReactNode` - Custom message renderer -- `emptyState?: ReactNode` - Show when no messages -- `loadingState?: ReactNode` - Show while loading -- `autoScroll?: boolean` - Auto-scroll to bottom (default: true) - -### `` - -Renders a single message with all its parts. 
- -**Props:** - -- `message: UIMessage` - The message to render -- `textPartRenderer?: (props: { content: string }) => ReactNode` - Custom renderer for text parts -- `thinkingPartRenderer?: (props: { content: string; isComplete?: boolean }) => ReactNode` - Custom renderer for thinking parts -- `toolsRenderer?: Record ReactNode>` - Named tool renderers -- `defaultToolRenderer?: (props) => ReactNode` - Default tool renderer -- `toolResultRenderer?: (props) => ReactNode` - Custom renderer for tool results - -### `` - -Auto-growing textarea input. - -**Props:** - -- `children?: (renderProps) => ReactNode` - Render prop for full control -- `placeholder?: string` -- `autoGrow?: boolean` - Auto-grow textarea (default: true) -- `maxHeight?: number` - Max height in pixels (default: 200) -- `submitOnEnter?: boolean` - Submit on Enter, new line on Shift+Enter (default: true) - -### `` - -Renders approve/deny buttons for tools requiring approval. - -**Props:** - -- `toolCallId: string` -- `toolName: string` -- `input: any` - Parsed tool arguments -- `approval: { id, needsApproval, approved? }` -- `children?: (renderProps) => ReactNode` - Custom approval UI - -## Examples - -### Custom Message Styling - -```tsx - ( -
- {content} -
- )} - thinkingPartRenderer={({ content, isComplete }) => ( -
-
- šŸ’­ Thinking... -
{content}
-
-
- )} - toolsRenderer={{ - recommendGuitar: ({ name, state }) => ( -
- Tool: {name} ({state}) -
- ), - }} -/> -``` - -### Custom Input with Send Button - -```tsx - - {({ value, onChange, onSubmit, isLoading, inputRef }) => ( -
-