feat: E2E testing infrastructure with llmock #417
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
📝 Walkthrough
Adds a complete Playwright E2E test suite and harness: fixtures, UI components, TanStack router and server API endpoints, LLMock-based mock server (with recording), provider adapters and feature configs, Playwright/Vite/TS configs, test helpers/specs, and a CI workflow to run, cache, and upload artifacts.
Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Dev as Developer / Playwright
  participant Browser as Browser (UI)
  participant Server as App Server
  participant LLMock as LLMock (mock LLM)
  participant Storage as Fixtures

  Dev->>Browser: Open feature route (e.g. /openai/chat)
  Browser->>Server: POST /api/chat { provider, feature, messages }
  Server->>LLMock: ensureLLMock() (start or reuse)
  Server->>LLMock: request model response / tool calls
  LLMock-->>Server: mocked response (or stream events)
  Server-->>Browser: SSE or JSON response
  Browser->>Browser: render ChatUI / ToolCallDisplay / ApprovalPrompt
  Browser->>Server: POST approval result (if needed)
  Server->>LLMock: forward approval -> continue agent loop
  LLMock-->>Storage: write recorded fixtures (if recording)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (2 warnings)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
🚀 Changeset Version Preview: No changeset entries found. Merging this PR will not cause a version bump for any packages.
View your CI Pipeline Execution for commit 281ea65
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-code-mode
@tanstack/ai-code-mode-skills
@tanstack/ai-devtools-core
@tanstack/ai-elevenlabs
@tanstack/ai-event-client
@tanstack/ai-fal
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-groq
@tanstack/ai-isolate-cloudflare
@tanstack/ai-isolate-node
@tanstack/ai-isolate-quickjs
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/preact-ai-devtools
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
Actionable comments posted: 13
🧹 Nitpick comments (18)
testing/e2e/tests/test-matrix.ts (1)
13-123: Avoid duplicating feature-support truth between app and tests.
`supportMatrix` here can drift from `src/lib/feature-support.ts`, which may cause tests to skip/expect the wrong scenarios over time. Prefer reusing the app-side `isSupported` directly.

♻️ Suggested simplification

```diff
 import type { Provider, Feature } from '../src/lib/types'
+import { isSupported as appIsSupported } from '../src/lib/feature-support'
@@
-const supportMatrix: Record<Feature, Set<Provider>> = {
-  // ...
-}
-
 export function isSupported(provider: Provider, feature: Feature): boolean {
-  return supportMatrix[feature]?.has(provider) ?? false
+  return appIsSupported(provider, feature)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/tests/test-matrix.ts` around lines 13 - 123, the test duplicates the supportMatrix which can drift from the app; remove the local supportMatrix and reuse the app's logic by importing and calling the canonical isSupported from the app's feature-support module (replace references to the local supportMatrix and the test's isSupported with the imported isSupported). Ensure you import the same Provider and Feature types (or adapt types) from the app module so signatures match, and update any test calls that used supportMatrix[feature].has(provider) to use isSupported(provider, feature).

testing/e2e/global-teardown.ts (1)
1-7: Consider adding error handling for `mock.stop()`.

If `mock.stop()` throws an error, it could mask the underlying issue and potentially leave test infrastructure in an inconsistent state. Wrapping the stop call in try-catch ensures graceful teardown.

🛡️ Suggested improvement with error handling

```diff
 export default async function globalTeardown() {
   const mock = (globalThis as any).__llmock
   if (mock) {
-    await mock.stop()
-    console.log('[global-teardown] llmock stopped')
+    try {
+      await mock.stop()
+      console.log('[global-teardown] llmock stopped')
+    } catch (error) {
+      console.error('[global-teardown] Failed to stop llmock:', error)
+    }
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/global-teardown.ts` around lines 1 - 7, the globalTeardown function should guard the await mock.stop() call with a try-catch to ensure teardown always proceeds even if stop() throws; locate globalTeardown and the usage of (globalThis as any).__llmock and wrap the await mock.stop() in a try { await mock.stop(); console.log(...) } catch (err) { console.error('[global-teardown] error stopping llmock', err) } so errors are logged with context but do not prevent the rest of teardown.

testing/e2e/global-setup.ts (1)
1-6: Make fixture directory resolution independent of current working directory.

Line 5 currently relies on `./fixtures`, which is cwd-sensitive. Resolving from the setup file location is more robust in CI/script variations.

Proposed refactor

```diff
+import { dirname, resolve } from 'node:path'
+import { fileURLToPath } from 'node:url'
 import { LLMock } from '@copilotkit/llmock'

 export default async function globalSetup() {
+  const __dirname = dirname(fileURLToPath(import.meta.url))
   const mock = new LLMock({ port: 4010, host: '127.0.0.1', logLevel: 'info' })
-  await mock.loadFixtureDir('./fixtures')
+  await mock.loadFixtureDir(resolve(__dirname, './fixtures'))
   await mock.start()
   console.log(`[global-setup] llmock started at ${mock.url}`)
   ;(globalThis as any).__llmock = mock
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/global-setup.ts` around lines 1 - 6, the fixture path in globalSetup uses a cwd-sensitive relative path './fixtures'; change loadFixtureDir to use a path resolved relative to the setup file (e.g., compute fixturesDir = path.resolve(__dirname, 'fixtures') or via new URL if ESM) and pass that resolved absolute path to LLMock.loadFixtureDir; also add the appropriate import for Node's path (or use URL) so the fixture directory resolution is robust in CI and different working directories.

testing/e2e/tests/structured-output.spec.ts (1)
22-25: Validate JSON structure, not just substrings.

For a structured-output test, asserting only `toContain` weakens contract coverage. Parse and verify fields to catch format regressions.

Suggested structured assertion

```diff
 const response = await getLastAssistantMessage(page)
-expect(response).toContain('Fender Stratocaster')
-expect(response).toContain('1299')
+const jsonMatch = response.match(/\{[\s\S]*\}/)
+expect(jsonMatch).toBeTruthy()
+const parsed = JSON.parse(jsonMatch![0]) as { name?: string; price?: number }
+expect(parsed.name).toBe('Fender Stratocaster')
+expect(parsed.price).toBe(1299)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/tests/structured-output.spec.ts` around lines 22 - 25, the test currently checks substrings in the assistant message; instead parse the assistant output returned by getLastAssistantMessage(page) as JSON (or extract the JSON payload) and assert the structured fields explicitly (e.g., check that parsed.name === 'Fender Stratocaster' and parsed.price === 1299 and that required keys like name and price exist and have correct types). Update the assertions in structured-output.spec.ts to parse response into an object and use strict equality/type checks instead of toContain so format/regression errors are caught.

testing/e2e/tests/agentic-structured.spec.ts (1)
23-25: Assert expected tool identity to tighten agentic validation.

Checking only tool-call count can miss wrong-tool regressions. Confirm the expected tool is actually invoked.

Suggested tool assertion improvement

```diff
 const toolCalls = await getToolCalls(page)
 expect(toolCalls.length).toBeGreaterThanOrEqual(1)
+expect(toolCalls.some((toolCall) => toolCall.name === 'getGuitars')).toBeTruthy()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/tests/agentic-structured.spec.ts` around lines 23 - 25, the test currently only checks toolCalls.length; tighten it by asserting the expected tool identity is invoked: after calling getToolCalls(page) and storing toolCalls, assert that at least one entry has the expected identifier (e.g., toolCalls.some(c => c.toolName === EXPECTED_TOOL_NAME) or matching property like c.toolId) and/or explicitly assert the first call (toolCalls[0].toolName) equals the expected tool. Update the test to reference the expected tool constant/name used by this scenario and fail if no tool call matches that identifier.

testing/e2e/tests/summarize-stream.spec.ts (1)
22-23: Strengthen the stream result assertion to reduce false positives.

A non-empty check can pass even when the summarization quality regresses. Prefer asserting expected content semantics for this fixture-driven flow.

Suggested assertion tightening

```diff
-const result = await getSummarizationResult(page)
-expect(result.length).toBeGreaterThan(0)
+const result = await getSummarizationResult(page)
+expect(result.trim().length).toBeGreaterThan(0)
+expect(result).toContain('guitar')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/tests/summarize-stream.spec.ts` around lines 22 - 23, the current test only checks that getSummarizationResult(page) returns a non-empty string; tighten it to assert meaningful content by checking the returned summary (result) contains expected semantic tokens or structure for this fixture-driven flow — e.g., assert that result includes one or more known keywords/phrases from the fixture, meets a minimal word/character count, or matches a simple regex for summary-like sentences. Update the assertion in summarize-stream.spec.ts to replace or augment expect(result.length).toBeGreaterThan(0) with checks against getSummarizationResult's output (result) such as inclusion of expected fixture-specific words/phrases, a minimum word count, or a regex pattern to reduce false positives.

testing/e2e/tests/tool-approval.spec.ts (1)
44-46: Strengthen denial-path assertion to validate behavior, not just message presence.

`count >= 1` can pass even if denial handling regresses. Prefer asserting the final assistant message reflects denial (or at least does not indicate success).

Possible tighter assertion

```diff
-const messages = page.getByTestId('assistant-message')
-const count = await messages.count()
-expect(count).toBeGreaterThanOrEqual(1)
+const response = await getLastAssistantMessage(page)
+expect(response.trim().length).toBeGreaterThan(0)
+expect(response.toLowerCase()).not.toContain('added')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/tests/tool-approval.spec.ts` around lines 44 - 46, the current assertion only checks that messages = page.getByTestId('assistant-message') has count >= 1 which can hide denial regressions; instead, after computing count, read the final assistant message text via the locator for the last message (e.g., messages.nth(count - 1) or messages.last()) and assert its text reflects denial behavior (for example contains a denial phrase or does not contain a success/approval phrase). Update the test in tool-approval.spec.ts to replace or extend the count assertion with a concrete assertion against the final message text using the messages locator and the count variable to index the last message.

testing/e2e/playwright.config.ts (1)
14-19: Consider `video: 'retain-on-failure'` to reduce artifact storage.

With `video: 'on'`, videos are recorded for all tests including passing ones. For a 17-feature × 7-provider matrix, this could generate significant storage overhead. Consider `'retain-on-failure'` to only keep videos for debugging failed tests.

Suggested change

```diff
 use: {
   baseURL: 'http://localhost:3010',
-  video: 'on',
+  video: 'retain-on-failure',
   screenshot: 'only-on-failure',
   trace: 'on-first-retry',
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/playwright.config.ts` around lines 14 - 19, update the Playwright test configuration in the use block to avoid recording videos for passing runs: change the video setting from 'on' to 'retain-on-failure' in the use object inside playwright.config.ts (the same block that contains baseURL, screenshot, and trace) so only failed tests retain video artifacts and overall storage usage is reduced.

testing/e2e/src/lib/providers.ts (1)
31-70: Type assertions bypass model validation.

All factories use type assertions like `model as 'gpt-4o'`, which suppress TypeScript's type checking. If `modelOverride` contains an invalid model string, it will pass compile-time but may fail at runtime.

For E2E testing this is likely acceptable since you control the inputs, but consider adding runtime validation if `modelOverride` is user-provided.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/src/lib/providers.ts` around lines 31 - 70, the factories (openai, anthropic, gemini, etc.) currently coerce model via type assertions (e.g., model as 'gpt-4o') which bypass type checks; update each provider factory (openai, anthropic, gemini, ollama, groq, grok, openrouter) to perform a runtime validation against a small whitelist or mapping of allowed model strings before calling createChatOptions/adapter, and if the provided model (the variable named model) is invalid either throw a clear error or fall back to a safe default; ensure validation logic is centralized (e.g., a getValidatedModel(provider, model) helper) so adapters always receive a verified string rather than using type assertions.
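As a sketch of the centralized check this prompt suggests, a `getValidatedModel` helper could look like the following. The provider keys, model names, and defaults here are illustrative assumptions, not the suite's real configuration:

```typescript
// Hypothetical whitelist of allowed models per provider (illustrative only).
const KNOWN_MODELS: Record<string, ReadonlySet<string>> = {
  openai: new Set(['gpt-4o', 'gpt-4o-mini']),
  anthropic: new Set(['claude-sonnet-4-5']),
}

// Safe fallback used when no override is supplied.
const DEFAULT_MODELS: Record<string, string> = {
  openai: 'gpt-4o',
  anthropic: 'claude-sonnet-4-5',
}

export function getValidatedModel(provider: string, model?: string): string {
  const known = KNOWN_MODELS[provider]
  const fallback = DEFAULT_MODELS[provider]
  if (!known || !fallback) throw new Error(`Unknown provider: ${provider}`)
  // Fall back to a default instead of passing through an arbitrary string.
  if (!model) return fallback
  if (!known.has(model)) {
    throw new Error(`Model "${model}" is not allowed for provider "${provider}"`)
  }
  return model
}
```

Adapters would then receive `getValidatedModel(provider, modelOverride)` rather than a bare type-asserted string, so an invalid override fails loudly at the factory boundary instead of deep inside a request.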
testing/e2e/src/lib/types.ts (2)

1-55: Consider deriving types from arrays to reduce duplication.

The union types and arrays duplicate the same values. Using `as const` with type inference eliminates the risk of drift:

Alternative DRY approach

```ts
export const ALL_PROVIDERS = [
  'openai',
  'anthropic',
  'gemini',
  'ollama',
  'grok',
  'groq',
  'openrouter',
] as const

export type Provider = (typeof ALL_PROVIDERS)[number]

export const ALL_FEATURES = [
  'chat',
  'one-shot-text',
  // ... rest
] as const

export type Feature = (typeof ALL_FEATURES)[number]
```

This ensures the array and type are always in sync.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/src/lib/types.ts` around lines 1 - 55, replace the duplicated union type declarations by deriving Provider and Feature from the canonical arrays: make ALL_PROVIDERS and ALL_FEATURES the single sources of truth (marked as readonly with "as const") and change the Provider and Feature type definitions to infer their unions from (typeof ALL_PROVIDERS)[number] and (typeof ALL_FEATURES)[number] respectively; update any imports/usages of Provider/Feature as needed to compile with the new inferred types while leaving the array constants (ALL_PROVIDERS, ALL_FEATURES) names unchanged.
28-36: Use `Array<T>` syntax per ESLint configuration.

The project's ESLint rules require `Array<Provider>` and `Array<Feature>` instead of `Provider[]` and `Feature[]`.

Proposed fix

```diff
-export const ALL_PROVIDERS: Provider[] = [
+export const ALL_PROVIDERS: Array<Provider> = [
   'openai',
   'anthropic',
   'gemini',
   'ollama',
   'grok',
   'groq',
   'openrouter',
 ]

-export const ALL_FEATURES: Feature[] = [
+export const ALL_FEATURES: Array<Feature> = [
   'chat',
   'one-shot-text',
   ...
 ]
```

Also applies to: 38-55

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/src/lib/types.ts` around lines 28 - 36, replace the shorthand array types with generic Array<T> to satisfy ESLint: change the declaration of ALL_PROVIDERS (and any similar arrays like ALL_FEATURES) from Provider[] to Array<Provider> and Feature[] to Array<Feature>, updating the exported constants' type annotations accordingly (e.g., ALL_PROVIDERS: Array<Provider> = [...] and ALL_FEATURES: Array<Feature> = [...]) so the symbols ALL_PROVIDERS and ALL_FEATURES use Array<T> syntax.

testing/e2e/src/components/ApprovalPrompt.tsx (2)
28-41: Button clicks don't await async `onRespond` or handle errors.

The `onClick` handlers fire-and-forget the async `onRespond`. If the promise rejects, the error is swallowed. For E2E testing infrastructure this is likely acceptable, but consider adding error handling or loading states for robustness.

Optional improvement

```tsx
// If you want to handle errors:
onClick={async () => {
  try {
    await onRespond({ id: part.id, approved: true })
  } catch (error) {
    console.error('Approval failed:', error)
  }
}}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/src/components/ApprovalPrompt.tsx` around lines 28 - 41, the button onClick handlers in ApprovalPrompt currently call the async onRespond without awaiting or catching errors; update the approve/deny handlers (the functions tied to the approve-button-${part.toolName} and deny-button-${part.toolName} buttons) to be async, await onRespond({ id: part.id, approved: true/false }) and wrap the await in try/catch to log or surface errors; additionally consider adding a local loading state in the ApprovalPrompt component to disable the buttons while the request is pending to prevent duplicate clicks.
1-6: Consider exporting the `ApprovalPart` interface.

If other components or tests need to construct approval parts, exporting this interface would be helpful.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/src/components/ApprovalPrompt.tsx` around lines 1 - 6, the ApprovalPart interface is currently module-private; export it so other components/tests can import and construct approval parts. Modify the declaration of ApprovalPart to be exported (e.g., export interface ApprovalPart) and ensure any existing local uses (in the ApprovalPrompt component) continue to reference the exported symbol ApprovalPart.

.github/workflows/e2e.yml (1)
32-39: Cache key may miss Playwright version updates.

The cache key only hashes `testing/e2e/package.json`. If Playwright is updated via a lockfile change (e.g., patch version bump) without modifying `package.json`, stale browser binaries could be used.

Consider including the lockfile in the hash:

Proposed fix

```diff
 - name: Cache Playwright Browsers
   id: playwright-cache
   uses: actions/cache@v4
   with:
     path: ~/.cache/ms-playwright
-    key: playwright-${{ hashFiles('testing/e2e/package.json') }}
+    key: playwright-${{ hashFiles('testing/e2e/package.json', 'pnpm-lock.yaml') }}
     restore-keys: |
       playwright-
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/e2e.yml around lines 32 - 39, the Playwright cache step ("Cache Playwright Browsers", id playwright-cache) currently keys only on testing/e2e/package.json which can miss lockfile-only Playwright version updates; update the cache key to hash both the package.json and the lockfile(s) used in the repo (e.g., testing/e2e/package-lock.json, yarn.lock, or pnpm-lock.yaml as applicable) by passing both paths into hashFiles so the key changes when browser binaries change and avoids restoring stale Playwright caches.

testing/e2e/src/lib/features.ts (1)
3-3: Sort import members alphabetically per ESLint.

Proposed fix

```diff
-import { getGuitars, compareGuitars, addToCart } from '@/lib/tools'
+import { addToCart, compareGuitars, getGuitars } from '@/lib/tools'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/src/lib/features.ts` at line 3, the import members are not alphabetized; reorder the named imports in the import statement so they are sorted alphabetically (addToCart, compareGuitars, getGuitars) to satisfy ESLint — update the import that currently lists getGuitars, compareGuitars, addToCart to the alphabetical order referencing the same module.

testing/e2e/src/components/ChatUI.tsx (1)
122-136: Unintuitive image upload flow: user must type text first, then file selection auto-sends.

The current logic requires the user to fill the text input before selecting an image, and the message is sent immediately on file selection. This is non-standard UX — typically users select a file first, then click send.

For E2E testing purposes this may be acceptable, but consider whether a more conventional flow (file selection → preview → send button) would be clearer.

♻️ Alternative: store selected file in state, send on button click

```diff
 export function ChatUI({ ... }: ChatUIProps) {
   const [input, setInput] = useState('')
+  const [selectedFile, setSelectedFile] = useState<File | null>(null)
   const messagesRef = useRef<HTMLDivElement>(null)

   const handleSubmit = () => {
     if (!input.trim()) return
-    onSendMessage(input.trim())
+    if (selectedFile && onSendMessageWithImage) {
+      onSendMessageWithImage(input.trim(), selectedFile)
+      setSelectedFile(null)
+    } else {
+      onSendMessage(input.trim())
+    }
     setInput('')
   }

 // In the file input onChange:
-onChange={(e) => {
-  const file = e.target.files?.[0]
-  if (file && input.trim() && onSendMessageWithImage) {
-    onSendMessageWithImage(input.trim(), file)
-    setInput('')
-  }
-}}
+onChange={(e) => {
+  const file = e.target.files?.[0]
+  if (file) setSelectedFile(file)
+}}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/src/components/ChatUI.tsx` around lines 122 - 136, the current image input handler in ChatUI (the JSX block using showImageInput and the onChange callback) requires a non-empty text input and auto-sends on file selection; change this to store the selected File in component state (e.g., add selectedImage state) instead of calling onSendMessageWithImage immediately, clear the immediate send logic from the onChange handler, and update the send button handler (the same place that currently uses setInput and input) to call onSendMessageWithImage(input.trim(), selectedImage) so users can choose a file first, preview or edit text, then click send; ensure you still reset selectedImage and input after successful send.

testing/e2e/src/routes/$provider/$feature.tsx (1)
126-140: Missing error handling for fetch failures.

If the `/api/summarize` call fails or returns a non-JSON response, `res.json()` will throw, leaving the UI in an inconsistent state (loading cleared but no result/error shown). This pattern repeats in `ImageGenFeature`, `TTSFeature`, and `TranscriptionFeature`.

♻️ Add error state handling

```diff
 function SummarizeFeature({
   provider,
   stream,
 }: {
   provider: Provider
   stream: boolean
 }) {
   const [result, setResult] = useState<string | null>(null)
   const [isLoading, setIsLoading] = useState(false)
+  const [error, setError] = useState<string | null>(null)

   const handleSubmit = async (text: string) => {
     setIsLoading(true)
     setResult(null)
+    setError(null)
     try {
       const res = await fetch('/api/summarize', {
         method: 'POST',
         headers: { 'Content-Type': 'application/json' },
         body: JSON.stringify({ text, provider, stream }),
       })
+      if (!res.ok) {
+        throw new Error(`Request failed: ${res.status}`)
+      }
       const data = await res.json()
       setResult(data.summary)
+    } catch (e) {
+      setError(e instanceof Error ? e.message : 'An error occurred')
     } finally {
       setIsLoading(false)
     }
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/src/routes/$provider/$feature.tsx` around lines 126 - 140, the handleSubmit function lacks error handling around the fetch and res.json() call which can throw on network errors or non-JSON/non-OK responses; wrap the fetch + res.json() in a try/catch (or check res.ok before calling res.json()), set an explicit error state (e.g., setError) on failure, and ensure setIsLoading(false) is still called in finally while clearing or not overwriting result when an error occurs; apply the same pattern to ImageGenFeature, TTSFeature, and TranscriptionFeature functions (their submit/handle... methods) so all fetches check res.ok, catch JSON/parsing/network errors, and update error/result/loading state appropriately.

testing/e2e/tests/helpers.ts (1)
54-56: Inconsistent pattern: `getStructuredOutput` doesn't wait for visibility.

Unlike `getSummarizationResult` (lines 79-82) and `getTranscriptionResult` (lines 90-92), this helper calls `innerText()` immediately without waiting for the element to be visible. This could cause flaky tests if the structured output takes time to render.

♻️ Add waitFor for consistency

```diff
 export async function getStructuredOutput(page: Page): Promise<string> {
+  await page
+    .getByTestId('structured-output')
+    .waitFor({ state: 'visible', timeout: 15_000 })
   return page.getByTestId('structured-output').innerText()
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/tests/helpers.ts` around lines 54 - 56, getStructuredOutput currently calls innerText() immediately and can be flaky; update it to wait for the element to be visible first (mirroring getSummarizationResult and getTranscriptionResult) by using the same test-id locator (page.getByTestId('structured-output')) and calling waitFor with state: 'visible' before returning innerText(), ensuring the helper reliably waits for the element to render.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@testing/e2e/package.json`:
- Line 14: The package.json dependency for "@copilotkit/llmock" is pinned to
"latest", which makes test runs non-reproducible; update the dependency entry
for "@copilotkit/llmock" to an explicit version (e.g., "1.4.0") instead of
"latest" so CI and local runs use a fixed, deterministic package version.
In `@testing/e2e/src/components/SummarizeUI.tsx`:
- Around line 23-35: Pass a trimmed input to the submit handler and render
empty-string results: change the button onClick to call onSubmit(input.trim())
(use the existing onClick, input, isLoading symbols) so submitted text is
normalized, and replace the truthy check {result && (...)} with an explicit
existence check such as {result !== undefined && result !== null && (...)}
(referencing result and the render block) so valid empty-string summaries are
displayed.
In `@testing/e2e/src/lib/llmock-server.ts`:
- Around line 20-25: The upstreamProviders map is missing entries for groq,
grok, and openrouter which prevents the E2E suite (see providers.ts) from
exercising those backends; update the upstreamProviders object (symbol:
upstreamProviders in llmock-server.ts) to include keys for "groq", "grok", and
"openrouter" wired to the appropriate environment variables (e.g.,
process.env.GROQ_API_KEY, process.env.GROK_API_KEY,
process.env.OPENROUTER_API_KEY) or sensible localhost/default endpoints for
recording mode so tests can run and produce complete fixtures.
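A rough sketch of the wiring this prompt describes. The actual LLMock upstream-provider config shape is not shown in the review, so the `UpstreamProvider` interface below is an assumption; the base URLs are the providers' publicly documented OpenAI-compatible endpoints:

```typescript
// Assumed shape for an upstream provider entry (illustrative only).
interface UpstreamProvider {
  apiKey?: string
  baseURL: string
}

// The three entries the review flags as missing, keyed to env vars so
// recording mode can hit the real backends when keys are present.
export const extraUpstreamProviders: Record<string, UpstreamProvider> = {
  groq: {
    apiKey: process.env.GROQ_API_KEY,
    baseURL: 'https://api.groq.com/openai/v1',
  },
  grok: {
    apiKey: process.env.GROK_API_KEY,
    baseURL: 'https://api.x.ai/v1',
  },
  openrouter: {
    apiKey: process.env.OPENROUTER_API_KEY,
    baseURL: 'https://openrouter.ai/api/v1',
  },
}
```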
In `@testing/e2e/src/lib/tools.ts`:
- Around line 47-62: The compareGuitars server handler (compareGuitars) assumes
selected and prices are non-empty and uses Math.min/Math.max and non-null
assertions which will throw if no guitars match; update the function to
defensively handle empty or partial matches by checking if selected.length === 0
(either throw a clear error or return a result with cheapest/mostExpensive set
to null or an empty string), and guard the cheapest/mostExpensive computation by
only calling Math.min/Math.max when prices.length > 0 and using a safe lookup
(avoid `!`) such as finding the item only if the computed min/max is finite;
ensure the return shape is consistent with consumers.
- Around line 64-85: The addToCartToolDef currently types guitarId as
z.string(); change both the inputSchema.guitarId and outputSchema.guitarId to
z.number() so the tool aligns with getGuitarsToolDef and compareGuitarsToolDef.
Update any dependent usage in the addToCart server implementation (the addToCart
function) to treat args.guitarId as a number (no string-specific operations)
while leaving cartId generation as-is.
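The defensive `compareGuitars` handling described above can be sketched as a standalone function. The `Guitar` shape and return type are stand-ins for the suite's real tool types:

```typescript
// Illustrative catalog item type (the real tool's schema may differ).
interface Guitar {
  id: number
  name: string
  price: number
}

export function compareGuitarsSafe(
  catalog: Array<Guitar>,
  ids: Array<number>,
): { cheapest: string | null; mostExpensive: string | null } {
  const selected = catalog.filter((g) => ids.includes(g.id))
  // Guard the empty case: Math.min/Math.max on [] yield Infinity/-Infinity,
  // and a non-null-asserted find would then throw.
  if (selected.length === 0) {
    return { cheapest: null, mostExpensive: null }
  }
  const prices = selected.map((g) => g.price)
  const min = Math.min(...prices)
  const max = Math.max(...prices)
  // Safe lookups instead of `!` assertions; fall back to null if absent.
  const cheapest = selected.find((g) => g.price === min)
  const mostExpensive = selected.find((g) => g.price === max)
  return {
    cheapest: cheapest?.name ?? null,
    mostExpensive: mostExpensive?.name ?? null,
  }
}
```

Returning `null` (rather than throwing) keeps the return shape consistent for consumers that render the comparison result directly.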
In `@testing/e2e/src/routes/__root.tsx`:
- Line 16: The type React.ReactNode is referenced in the RootDocument function
signature but React is not imported; fix by importing the type and updating the
signature: add an import like `import type { ReactNode } from "react"` at the
top and change the parameter type to use `ReactNode` (e.g., `function
RootDocument({ children }: { children: ReactNode })`) so the RootDocument
function's children type resolves correctly.
In `@testing/e2e/src/routes/$provider/$feature.tsx`:
- Around line 88-105: The FileReader onload handler (reader.onload) unsafely
assumes reader.result is a data URL and does split(',')[1]; instead, validate
reader.result is a string and contains a comma before extracting base64 and only
call sendMessage when base64 is defined; in the reader.onload inside the
anonymous file handler, check type and format of reader.result (e.g., typeof ===
'string' && reader.result.includes(',')), extract the part after the comma into
base64, and handle the fallback (log/abort/notify) when base64 is missing so
sendMessage never receives undefined for the image part; keep this logic around
the FileReader usage (readAsDataURL) and sendMessage call to ensure safety.
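The `reader.result` validation can be factored into a small pure helper, which also makes it unit-testable; the helper name is hypothetical and the FileReader/`sendMessage` wiring stays in the component:

```typescript
// Returns the base64 payload of a data URL, or null if the input is not a
// well-formed data-URL string. reader.result can legitimately be a string,
// an ArrayBuffer, or null, so the type is checked first.
export function extractBase64FromDataUrl(result: unknown): string | null {
  if (typeof result !== 'string') return null
  const commaIndex = result.indexOf(',')
  if (commaIndex === -1) return null
  const base64 = result.slice(commaIndex + 1)
  return base64.length > 0 ? base64 : null
}
```

In the `onload` handler the component would call this helper and only invoke `sendMessage` when the result is non-null, logging or aborting otherwise.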
In `@testing/e2e/src/routes/$provider/index.tsx`:
- Around line 11-15: The code calls getSupportedFeatures(provider) immediately
after extracting provider from Route.useParams, which can throw for invalid
params; move the ALL_PROVIDERS inclusion check to run before calling
getSupportedFeatures and only call getSupportedFeatures(provider) after
confirming provider is included in ALL_PROVIDERS (use the same Route.useParams
cast and the existing ALL_PROVIDERS, provider, and getSupportedFeatures symbols
to locate and reorder the logic).
In `@testing/e2e/src/routes/api.chat.ts`:
- Around line 12-16: The new AbortController created as abortController is not
wired to the incoming request.signal, so client disconnects won't cancel
downstream LLM work; hook them up by adding a listener: when request.signal
aborts, call abortController.abort(), and then pass abortController.signal into
the LLM/streaming invocation in this handler (the call to the LLM/streaming
function in api.chat.ts) so the model request is cancelled on client disconnect.
In `@testing/e2e/src/routes/api.image.ts`:
- Around line 22-48: Validate incoming body in the POST handler: ensure prompt
is a non-empty string and provider is a valid string (return a 400 JSON Response
like the existing adapter error if invalid) before calling createImageAdapter;
then call createImageAdapter(provider) and keep the same 400 return for
unsupported providers. Treat generateImage as potentially async/erroring during
iteration: await it if it returns a Promise, and ensure errors from the returned
stream are propagated into a catch (either by wrapping the
async-generator/stream iteration inside a try/catch or by registering an 'error'
handler that rejects so the surrounding catch can return a 500 JSON). Update
references in this handler (POST, createImageAdapter, generateImage,
toServerSentEventsResponse) accordingly so runtime and async stream errors
result in proper JSON error responses.
In `@testing/e2e/src/routes/api.tts.ts`:
- Around line 8-13: createTTSAdapter currently only returns an OpenAI TTS
factory causing Gemini-marked TTS to fail; add a "gemini" entry to the factories
map that returns the gemini TTS adapter (following the same pattern used in
api.image.ts) so createTTSAdapter supports both providers. Locate the
createTTSAdapter function and add a factories["gemini"] = () => geminiSpeech({
baseURL: LLMOCK_URL }) (or the equivalent geminiSpeech factory used elsewhere)
ensuring the Provider type string matches "gemini".
In `@testing/e2e/tests/multimodal-image.spec.ts`:
- Around line 9-12: Replace the ESM-unsafe use of __dirname when building
testImagePath: import path remains, but compute the current file directory via
fileURLToPath(import.meta.url) (from 'url') and use that result instead of
__dirname to resolve '../test-assets/guitar-meme.jpg'; update the top-level
imports to include fileURLToPath (and URL if needed) and change the
testImagePath construction to use the derived directory value so the file path
works under "type":"module".
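Under `"type": "module"` the replacement looks like the following sketch; the relative asset path is taken from the review comment, everything else is the standard Node pattern.

```typescript
import path from 'node:path'
import { fileURLToPath } from 'node:url'

// ESM-safe equivalent of __dirname: derive the directory from import.meta.url.
const testDir = path.dirname(fileURLToPath(import.meta.url))
const testImagePath = path.resolve(testDir, '../test-assets/guitar-meme.jpg')
```

The same two lines drop into `multimodal-structured.spec.ts` with its own asset filename.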
In `@testing/e2e/tests/multimodal-structured.spec.ts`:
- Around line 9-12: The tests use __dirname with path.resolve to build
testImagePath which breaks in ESM; replace that with an import.meta.url-based
resolution: either import { fileURLToPath } from 'url' and derive __filename =
fileURLToPath(import.meta.url) and __dirname = path.dirname(__filename) and keep
const testImagePath = path.resolve(__dirname, '../test-assets/guitar-shop.png'),
or simply compute testImagePath as new URL('../test-assets/guitar-shop.png',
import.meta.url).pathname (or fileURLToPath(new URL(..., import.meta.url))) and
remove __dirname; apply the same change in the multimodal-image.spec test as
well so testImagePath no longer references __dirname.
---
Nitpick comments:
In @.github/workflows/e2e.yml:
- Around line 32-39: The Playwright cache step ("Cache Playwright Browsers", id
playwright-cache) currently keys only on testing/e2e/package.json which can miss
lockfile-only Playwright version updates; update the cache key to hash both the
package.json and the lockfile(s) used in the repo (e.g.,
testing/e2e/package-lock.json, yarn.lock, or pnpm-lock.yaml as applicable) by
passing both paths into hashFiles so the key changes when browser binaries
change and avoids restoring stale Playwright caches.
In `@testing/e2e/global-setup.ts`:
- Around line 1-6: The fixture path in globalSetup uses a cwd-sensitive relative
path './fixtures'; change loadFixtureDir to use a path resolved relative to the
setup file (e.g., compute fixturesDir = path.resolve(__dirname, 'fixtures') or
via new URL if ESM) and pass that resolved absolute path to
LLMock.loadFixtureDir; also add the appropriate import for Node's path (or use
URL) so the fixture directory resolution is robust in CI and different working
directories.
In `@testing/e2e/global-teardown.ts`:
- Around line 1-7: The globalTeardown function should guard the await
mock.stop() call with a try-catch to ensure teardown always proceeds even if
stop() throws; locate globalTeardown and the usage of (globalThis as
any).__llmock and wrap the await mock.stop() in a try { await mock.stop();
console.log(...) } catch (err) { console.error('[global-teardown] error stopping
llmock', err) } so errors are logged with context but do not prevent the rest of
teardown.
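A minimal sketch of the guarded teardown, assuming the `__llmock` global holds an object with an async `stop()` as the comment describes:

```typescript
async function globalTeardown(): Promise<void> {
  const mock = (globalThis as any).__llmock
  if (!mock) return
  try {
    await mock.stop()
    console.log('[global-teardown] llmock stopped')
  } catch (err) {
    // Log with context, but never let a failed stop() abort teardown.
    console.error('[global-teardown] error stopping llmock', err)
  }
}
```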
In `@testing/e2e/playwright.config.ts`:
- Around line 14-19: Update the Playwright test configuration in the use block
to avoid recording videos for passing runs: change the video setting from 'on'
to 'retain-on-failure' in the use object inside playwright.config.ts (the same
block that contains baseURL, screenshot, and trace) so only failed tests retain
video artifacts and overall storage usage is reduced.
In `@testing/e2e/src/components/ApprovalPrompt.tsx`:
- Around line 28-41: The button onClick handlers in ApprovalPrompt currently
call the async onRespond without awaiting or catching errors; update the
approve/deny handlers (the functions tied to the approve-button-${part.toolName}
and deny-button-${part.toolName} buttons) to be async, await onRespond({ id:
part.id, approved: true/false }) and wrap the await in try/catch to log or
surface errors; additionally consider adding a local loading state in the
ApprovalPrompt component to disable the buttons while the request is pending to
prevent duplicate clicks.
- Around line 1-6: The ApprovalPart interface is currently module-private;
export it so other components/tests can import and construct approval parts.
Modify the declaration of ApprovalPart to be exported (e.g., export interface
ApprovalPart) and ensure any existing local uses (in ApprovalPrompt component)
continue to reference the exported symbol ApprovalPart.
In `@testing/e2e/src/components/ChatUI.tsx`:
- Around line 122-136: The current image input handler in ChatUI (the JSX block
using showImageInput and the onChange callback) requires a non-empty text input
and auto-sends on file selection; change this to store the selected File in
component state (e.g., add selectedImage state) instead of calling
onSendMessageWithImage immediately, clear the immediate send logic from the
onChange handler, and update the send button handler (the same place that
currently uses setInput and input) to call onSendMessageWithImage(input.trim(),
selectedImage) so users can choose a file first, preview or edit text, then
click send; ensure you still reset selectedImage and input after successful
send.
In `@testing/e2e/src/lib/features.ts`:
- Line 3: The import members are not alphabetized; reorder the named imports in
the import statement so they are sorted alphabetically (addToCart,
compareGuitars, getGuitars) to satisfy ESLint — update the import that currently
lists getGuitars, compareGuitars, addToCart to the alphabetical order
referencing the same module.
In `@testing/e2e/src/lib/providers.ts`:
- Around line 31-70: The factories (openai, anthropic, gemini, etc.) currently
coerce model via type assertions (e.g., model as 'gpt-4o') which bypasses type
checks; update each provider factory (openai, anthropic, gemini, ollama, groq,
grok, openrouter) to perform a runtime validation against a small whitelist or
mapping of allowed model strings before calling createChatOptions/adapter, and
if the provided model (the variable named model) is invalid either throw a clear
error or fall back to a safe default; ensure validation logic is centralized
(e.g., a getValidatedModel(provider, model) helper) so adapters always receive a
verified string rather than using type assertions.
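One possible shape for that centralized helper; the allowed-model lists below are illustrative placeholders, not the adapters' real catalogs, and `getValidatedModel` is a hypothetical name suggested by the comment.

```typescript
// Hypothetical whitelist keyed by provider; replace with the real model lists.
const ALLOWED_MODELS: Record<string, ReadonlyArray<string>> = {
  openai: ['gpt-4o', 'gpt-4o-mini'],
  anthropic: ['claude-sonnet-4-5'],
}

function getValidatedModel(
  provider: string,
  model: string,
  fallback?: string,
): string {
  const allowed = ALLOWED_MODELS[provider] ?? []
  if (allowed.includes(model)) return model
  if (fallback !== undefined && allowed.includes(fallback)) return fallback
  throw new Error(`Unknown model "${model}" for provider "${provider}"`)
}
```

Each factory then calls `getValidatedModel(provider, model)` and passes the verified string to the adapter, with no `as 'gpt-4o'` assertions.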
In `@testing/e2e/src/lib/types.ts`:
- Around line 1-55: Replace the duplicated union type declarations by deriving
Provider and Feature from the canonical arrays: make ALL_PROVIDERS and
ALL_FEATURES the single sources of truth (marked as readonly with "as const")
and change the Provider and Feature type definitions to infer their unions from
(typeof ALL_PROVIDERS)[number] and (typeof ALL_FEATURES)[number] respectively;
update any imports/usages of Provider/Feature as needed to compile with the new
inferred types while leaving the array constants (ALL_PROVIDERS, ALL_FEATURES)
names unchanged.
- Around line 28-36: Replace the shorthand array types with generic Array<T> to
satisfy ESLint: change the declaration of ALL_PROVIDERS (and any similar arrays
like ALL_FEATURES) from Provider[] to Array<Provider> and Feature[] to
Array<Feature>, updating the exported constants' type annotations accordingly
(e.g., ALL_PROVIDERS: Array<Provider> = [...] and ALL_FEATURES: Array<Feature> =
[...]) so the symbols ALL_PROVIDERS and ALL_FEATURES use Array<T> syntax.
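The first suggestion collapses to a few lines, shown here as a sketch with an illustrative subset of the PR's provider list; note that once the array carries `as const`, the derived union makes an explicit `Array<Provider>` annotation on the constant unnecessary.

```typescript
// Arrays are the single source of truth; `as const` lets TS infer the unions.
export const ALL_PROVIDERS = [
  'openai',
  'anthropic',
  'gemini',
] as const // illustrative subset of the PR's full list

export type Provider = (typeof ALL_PROVIDERS)[number]

// The derived type rejects invalid literals at compile time:
const ok: Provider = 'openai'
// const bad: Provider = 'azure' // would fail to typecheck
```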
In `@testing/e2e/src/routes/`$provider/$feature.tsx:
- Around line 126-140: The handleSubmit function lacks error handling around the
fetch and res.json() call which can throw on network errors or non-JSON/ non-OK
responses; wrap the fetch + res.json() in a try/catch (or check res.ok before
calling res.json()), set an explicit error state (e.g., setError) on failure,
and ensure setIsLoading(false) is still called in finally while clearing or not
overwriting result when an error occurs; apply the same pattern to
ImageGenFeature, TTSFeature, and TranscriptionFeature functions (their
submit/handle... methods) so all fetches check res.ok, catch
JSON/parsing/network errors, and update error/result/loading state
appropriately.
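The shared pattern can be factored into one helper. This is a sketch: `fetchJson` is a hypothetical name, and the injectable `fetchImpl` parameter exists only to keep the sketch testable.

```typescript
async function fetchJson<T>(
  url: string,
  init?: RequestInit,
  fetchImpl: typeof fetch = fetch,
): Promise<T> {
  // Surface non-2xx responses as errors instead of blindly calling res.json().
  const res = await fetchImpl(url, init)
  if (!res.ok) {
    throw new Error(`Request failed: HTTP ${res.status}`)
  }
  return (await res.json()) as T
}
```

Each feature's submit handler would then wrap `fetchJson` in try/catch, call `setError(message)` in the catch, and put `setIsLoading(false)` in `finally` so the loading flag always clears.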
In `@testing/e2e/tests/agentic-structured.spec.ts`:
- Around line 23-25: The test currently only checks toolCalls.length; tighten it
by asserting the expected tool identity is invoked: after calling
getToolCalls(page) and storing toolCalls, assert that at least one entry has the
expected identifier (e.g., toolCalls.some(c => c.toolName ===
EXPECTED_TOOL_NAME) or matching property like c.toolId) and/or explicitly assert
the first call (toolCalls[0].toolName) equals the expected tool. Update the test
to reference the expected tool constant/name used by this scenario and fail if
no tool call matches that identifier.
In `@testing/e2e/tests/helpers.ts`:
- Around line 54-56: getStructuredOutput currently calls innerText() immediately
and can be flaky; update it to wait for the element to be visible first
(mirroring getSummarizationResult and getTranscriptionResult) by using the same
test-id locator (page.getByTestId('structured-output')) and calling waitFor with
state: 'visible' before returning innerText(), ensuring the helper reliably
waits for the element to render.
In `@testing/e2e/tests/structured-output.spec.ts`:
- Around line 22-25: The test currently checks substrings in the assistant
message; instead parse the assistant output returned by
getLastAssistantMessage(page) as JSON (or extract the JSON payload) and assert
the structured fields explicitly (e.g., check that parsed.name === 'Fender
Stratocaster' and parsed.price === 1299 and that required keys like name and
price exist and have correct types). Update the assertions in
structured-output.spec.ts to parse response into an object and use strict
equality/type checks instead of toContain so format/regression errors are
caught.
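That assertion style can be captured in a tiny parser; this sketch assumes the `name`/`price` fields from the review's example fixture.

```typescript
interface GuitarResult {
  name: string
  price: number
}

// Parse the assistant output and enforce field presence and types, so a
// format regression fails loudly instead of slipping past toContain().
function parseStructuredOutput(raw: string): GuitarResult {
  const parsed = JSON.parse(raw) as Record<string, unknown>
  if (typeof parsed.name !== 'string' || typeof parsed.price !== 'number') {
    throw new Error('structured output missing required name/price fields')
  }
  return { name: parsed.name, price: parsed.price }
}
```

The spec then asserts `expect(parsed.name).toBe('Fender Stratocaster')` and `expect(parsed.price).toBe(1299)` with strict equality.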
In `@testing/e2e/tests/summarize-stream.spec.ts`:
- Around line 22-23: The current test only checks that
getSummarizationResult(page) returns a non-empty string; tighten it to assert
meaningful content by checking the returned summary (result) contains expected
semantic tokens or structure for this fixture-driven flow — e.g., assert that
result includes one or more known keywords/phrases from the fixture, meets a
minimal word/character count, or matches a simple regex for summary-like
sentences. Update the assertion in summarize-stream.spec.ts to replace or
augment expect(result.length).toBeGreaterThan(0) with checks against
getSummarizationResult's output (result) such as inclusion of expected
fixture-specific words/phrases, a minimum word count, or a regex pattern to
reduce false positives.
In `@testing/e2e/tests/test-matrix.ts`:
- Around line 13-123: The test duplicates the supportMatrix which can drift from
the app; remove the local supportMatrix and reuse the app's logic by importing
and calling the canonical isSupported from the app's feature-support module
(replace references to the local supportMatrix and the test's isSupported with
the imported isSupported). Ensure you import the same Provider and Feature types
(or adapt types) from the app module so signatures match, and update any test
calls that used supportMatrix[feature].has(provider) to use
isSupported(provider, feature).
In `@testing/e2e/tests/tool-approval.spec.ts`:
- Around line 44-46: The current assertion only checks that messages =
page.getByTestId('assistant-message') has count >= 1 which can hide denial
regressions; instead, after computing count, read the final assistant message
text via the locator for the last message (e.g., messages.nth(count - 1) or
messages.last()) and assert its text reflects denial behavior (for example
contains a denial phrase or does not contain a success/approval phrase). Update
the test in tool-approval.spec.ts to replace or extend the count assertion with
a concrete assertion against the final message text using the messages locator
and the count variable to index the last message.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4497af8f-80dd-4e2e-bfd1-ac83325b8f75
⛔ Files ignored due to path filters (3)
- pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
- testing/e2e/test-assets/guitar-meme.jpg is excluded by !**/*.jpg
- testing/e2e/test-assets/guitar-shop.png is excluded by !**/*.png
📒 Files selected for processing (70)
.github/workflows/e2e.yml, testing/e2e/.env, testing/e2e/README.md,
testing/e2e/fixtures/agentic-structured/basic.json, testing/e2e/fixtures/chat/basic.json, testing/e2e/fixtures/image-gen/basic.json, testing/e2e/fixtures/multi-turn/conversation.json, testing/e2e/fixtures/multimodal-image/basic.json, testing/e2e/fixtures/multimodal-structured/basic.json, testing/e2e/fixtures/one-shot-text/basic.json, testing/e2e/fixtures/reasoning/basic.json, testing/e2e/fixtures/recorded/.gitkeep, testing/e2e/fixtures/structured-output/basic.json, testing/e2e/fixtures/summarize/basic.json, testing/e2e/fixtures/tool-approval/approval.json, testing/e2e/fixtures/tool-calling/parallel.json, testing/e2e/fixtures/tool-calling/single.json, testing/e2e/fixtures/transcription/basic.json, testing/e2e/fixtures/tts/basic.json,
testing/e2e/global-setup.ts, testing/e2e/global-teardown.ts, testing/e2e/package.json, testing/e2e/playwright.config.ts,
testing/e2e/src/components/ApprovalPrompt.tsx, testing/e2e/src/components/AudioPlayer.tsx, testing/e2e/src/components/ChatUI.tsx, testing/e2e/src/components/ImageDisplay.tsx, testing/e2e/src/components/NotSupported.tsx, testing/e2e/src/components/SummarizeUI.tsx, testing/e2e/src/components/ToolCallDisplay.tsx, testing/e2e/src/components/TranscriptionDisplay.tsx,
testing/e2e/src/lib/feature-support.ts, testing/e2e/src/lib/features.ts, testing/e2e/src/lib/guitar-data.ts, testing/e2e/src/lib/llmock-server.ts, testing/e2e/src/lib/providers.ts, testing/e2e/src/lib/schemas.ts, testing/e2e/src/lib/tools.ts, testing/e2e/src/lib/types.ts,
testing/e2e/src/router.tsx, testing/e2e/src/routes/$provider/$feature.tsx, testing/e2e/src/routes/$provider/index.tsx, testing/e2e/src/routes/__root.tsx, testing/e2e/src/routes/api.chat.ts, testing/e2e/src/routes/api.image.ts, testing/e2e/src/routes/api.summarize.ts, testing/e2e/src/routes/api.transcription.ts, testing/e2e/src/routes/api.tts.ts, testing/e2e/src/routes/index.tsx, testing/e2e/src/styles.css,
testing/e2e/tests/agentic-structured.spec.ts, testing/e2e/tests/chat.spec.ts, testing/e2e/tests/helpers.ts, testing/e2e/tests/image-gen.spec.ts, testing/e2e/tests/multi-turn.spec.ts, testing/e2e/tests/multimodal-image.spec.ts, testing/e2e/tests/multimodal-structured.spec.ts, testing/e2e/tests/one-shot-text.spec.ts, testing/e2e/tests/parallel-tool-calls.spec.ts, testing/e2e/tests/reasoning.spec.ts, testing/e2e/tests/structured-output.spec.ts, testing/e2e/tests/summarize-stream.spec.ts, testing/e2e/tests/summarize.spec.ts, testing/e2e/tests/test-matrix.ts, testing/e2e/tests/tool-approval.spec.ts, testing/e2e/tests/tool-calling.spec.ts, testing/e2e/tests/transcription.spec.ts, testing/e2e/tests/tts.spec.ts,
testing/e2e/tsconfig.json, testing/e2e/vite.config.ts
| "postinstall": "playwright install chromium" | ||
| }, | ||
| "dependencies": { | ||
| "@copilotkit/llmock": "latest", |
🧩 Analysis chain
🌐 Web query:
What is the current stable npm version of @copilotkit/llmock, and what are the reproducibility risks of using "latest" in package.json for CI?
💡 Result:
The current stable npm version of @copilotkit/llmock is v1.4.0. This is the latest release as of 2026-03-16, per the official GitHub repository, which serves as the primary source for the package's development and releases. The README explicitly instructs installation via npm install @copilotkit/llmock, confirming its availability on npm, though direct npm registry page not surfaced in searches. Using "latest" in package.json for CI carries reproducibility risks including: non-deterministic builds due to potential version changes between CI runs; breaking changes from untested updates causing test failures or production issues; supply chain vulnerabilities from unexpected package alterations; and violation of reproducible builds best practices, as pinned versions (e.g., semver ranges like ^1.4.0 or exact 1.4.0) ensure consistent dependency resolution across environments.
Citations:
- 1: https://github.com/CopilotKit/llmock
- 2: https://llmock.copilotkit.dev/
- 3: https://www.npmjs.com
- 4: https://www.greenkeeper.io
Pin @copilotkit/llmock to an explicit version instead of latest.
Using latest makes test runs non-reproducible and can silently change fixture behavior in CI. Use version 1.4.0 or another explicit pinned version.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/package.json` at line 14, The package.json dependency for
"@copilotkit/llmock" is pinned to "latest", which makes test runs
non-reproducible; update the dependency entry for "@copilotkit/llmock" to an
explicit version (e.g., "1.4.0") instead of "latest" so CI and local runs use a
fixed, deterministic package version.
onClick={() => onSubmit(input)}
disabled={!input.trim() || isLoading}
className="px-4 py-2 bg-orange-500 text-white rounded text-sm font-medium disabled:opacity-50 self-start"
>
{isLoading ? 'Summarizing...' : 'Summarize'}
</button>
{result && (
  <div
    data-testid="summarize-result"
    className="p-3 bg-gray-800/50 border border-gray-700 rounded text-sm"
  >
    {result}
  </div>
Handle empty-string summaries and normalize submitted input.
onSubmit gets untrimmed input, and {result && ...} hides valid empty-string responses. This can cause subtle fixture mismatches and ambiguous UI state.
✅ Minimal fix
- onClick={() => onSubmit(input)}
+ onClick={() => onSubmit(input.trim())}
@@
- {result && (
+ {result !== null && (
<div
data-testid="summarize-result"
className="p-3 bg-gray-800/50 border border-gray-700 rounded text-sm"
>
{result}
</div>
)}

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

onClick={() => onSubmit(input.trim())}
disabled={!input.trim() || isLoading}
className="px-4 py-2 bg-orange-500 text-white rounded text-sm font-medium disabled:opacity-50 self-start"
>
{isLoading ? 'Summarizing...' : 'Summarize'}
</button>
{result !== null && (
  <div
    data-testid="summarize-result"
    className="p-3 bg-gray-800/50 border border-gray-700 rounded text-sm"
  >
    {result}
  </div>
)}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/src/components/SummarizeUI.tsx` around lines 23 - 35, Pass a
trimmed input to the submit handler and render empty-string results: change the
button onClick to call onSubmit(input.trim()) (use the existing onClick, input,
isLoading symbols) so submitted text is normalized, and replace the truthy check
{result && (...)} with an explicit existence check such as {result !== undefined
&& result !== null && (...)} (referencing result and the render block) so valid
empty-string summaries are displayed.
export const compareGuitars = compareGuitarsToolDef.server((args) => {
  const selected = args.guitarIds
    .map((id) => guitars.find((g) => g.id === id))
    .filter(Boolean) as typeof guitars

  const prices = selected.map((g) => g.price)
  return {
    comparison: selected.map((g) => ({
      id: g.id,
      name: g.name,
      price: g.price,
    })),
    cheapest: selected.find((g) => g.price === Math.min(...prices))!.name,
    mostExpensive: selected.find((g) => g.price === Math.max(...prices))!.name,
  }
})
Potential runtime error when no matching guitars are found.
If guitarIds contains IDs that don't exist in the guitars array, selected could be empty or have fewer items than expected. The Math.min(...prices) / Math.max(...prices) calls on an empty array return Infinity / -Infinity, and the subsequent .find() will return undefined, causing the non-null assertion (!) to throw.
🛡️ Proposed defensive fix
export const compareGuitars = compareGuitarsToolDef.server((args) => {
const selected = args.guitarIds
.map((id) => guitars.find((g) => g.id === id))
.filter(Boolean) as typeof guitars
+ if (selected.length < 2) {
+ throw new Error('At least 2 valid guitar IDs are required for comparison')
+ }
+
const prices = selected.map((g) => g.price)
return {
comparison: selected.map((g) => ({
id: g.id,
name: g.name,
price: g.price,
})),
- cheapest: selected.find((g) => g.price === Math.min(...prices))!.name,
- mostExpensive: selected.find((g) => g.price === Math.max(...prices))!.name,
+ cheapest: selected.reduce((a, b) => (a.price < b.price ? a : b)).name,
+ mostExpensive: selected.reduce((a, b) => (a.price > b.price ? a : b)).name,
}
})🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/src/lib/tools.ts` around lines 47 - 62, The compareGuitars server
handler (compareGuitars) assumes selected and prices are non-empty and uses
Math.min/Math.max and non-null assertions which will throw if no guitars match;
update the function to defensively handle empty or partial matches by checking
if selected.length === 0 (either throw a clear error or return a result with
cheapest/mostExpensive set to null or an empty string), and guard the
cheapest/mostExpensive computation by only calling Math.min/Math.max when
prices.length > 0 and using a safe lookup (avoid `!`) such as finding the item
only if the computed min/max is finite; ensure the return shape is consistent
with consumers.
export const addToCartToolDef = toolDefinition({
  name: 'addToCart',
  description: 'Add a guitar to the shopping cart',
  inputSchema: z.object({
    guitarId: z.string(),
    quantity: z.number(),
  }),
  outputSchema: z.object({
    success: z.boolean(),
    cartId: z.string(),
    guitarId: z.string(),
    quantity: z.number(),
  }),
  needsApproval: true,
})

export const addToCart = addToCartToolDef.server((args) => ({
  success: true,
  cartId: 'CART_' + Date.now(),
  guitarId: args.guitarId,
  quantity: args.quantity,
}))
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify guitar ID types in guitar-data.ts
fd 'guitar-data' --extension ts --exec cat {}

Repository: TanStack/ai

🏁 Script executed:
#!/bin/bash
# Read the tools.ts file to verify guitar ID types across all tool definitions
fd 'tools\.ts' testing/e2e/src/lib --exec cat -n {} | head -100
Change guitarId from z.string() to z.number() in addToCartToolDef.
The addToCartToolDef defines guitarId as a string (line 68), but all guitar IDs throughout the codebase are numeric: getGuitarsToolDef returns id: z.number() (line 11), compareGuitarsToolDef accepts guitarIds: z.array(z.number()) (line 32), and the source data in guitar-data.ts defines IDs as number. This inconsistency breaks compatibility with other tools and the data model.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/src/lib/tools.ts` around lines 64 - 85, The addToCartToolDef
currently types guitarId as z.string(); change both the inputSchema.guitarId and
outputSchema.guitarId to z.number() so the tool aligns with getGuitarsToolDef
and compareGuitarsToolDef. Update any dependent usage in the addToCart server
implementation (the addToCart function) to treat args.guitarId as a number (no
string-specific operations) while leaving cartId generation as-is.
if (request.signal.aborted) {
  return new Response(null, { status: 499 })
}

const abortController = new AbortController()
abortController is not connected to request.signal.
A new AbortController is created but never linked to the incoming request's signal. If the client disconnects mid-stream, the LLM call continues running, wasting resources.
Proposed fix
if (request.signal.aborted) {
return new Response(null, { status: 499 })
}
const abortController = new AbortController()
+ request.signal.addEventListener('abort', () => {
+ abortController.abort()
+ })
const body = await request.json()

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/src/routes/api.chat.ts` around lines 12 - 16, The new
AbortController created as abortController is not wired to the incoming
request.signal, so client disconnects won't cancel downstream LLM work; hook
them up by adding a listener: when request.signal aborts, call
abortController.abort(), and then pass abortController.signal into the
LLM/streaming invocation in this handler (the call to the LLM/streaming function
in api.chat.ts) so the model request is cancelled on client disconnect.
POST: async ({ request }) => {
  await import('@/lib/llmock-server').then((m) => m.ensureLLMock())
  const body = await request.json()
  const { prompt, provider } = body

  const adapter = createImageAdapter(provider)
  if (!adapter) {
    return new Response(
      JSON.stringify({
        error: 'Provider does not support image generation',
      }),
      {
        status: 400,
        headers: { 'Content-Type': 'application/json' },
      },
    )
  }

  try {
    const stream = generateImage({ adapter, prompt, stream: true })
    return toServerSentEventsResponse(stream)
  } catch (error: any) {
    return new Response(JSON.stringify({ error: error.message }), {
      status: 500,
      headers: { 'Content-Type': 'application/json' },
    })
  }
Missing input validation and async stream errors may escape the catch block.
- `prompt` and `provider` are destructured without validation; missing or malformed fields will cause undefined behavior.
- The `try`/`catch` only captures synchronous exceptions. If `generateImage` returns a stream that errors during iteration, those errors won't be caught here.
For a testing harness this may be acceptable, but consider adding minimal validation:
Proposed input validation
const body = await request.json()
const { prompt, provider } = body
+
+ if (!prompt || typeof prompt !== 'string') {
+ return new Response(
+ JSON.stringify({ error: 'Missing or invalid prompt' }),
+ { status: 400, headers: { 'Content-Type': 'application/json' } },
+ )
+ }
const adapter = createImageAdapter(provider)

📝 Committable suggestion
POST: async ({ request }) => {
  await import('@/lib/llmock-server').then((m) => m.ensureLLMock())
  const body = await request.json()
  const { prompt, provider } = body

  if (!prompt || typeof prompt !== 'string') {
    return new Response(
      JSON.stringify({ error: 'Missing or invalid prompt' }),
      { status: 400, headers: { 'Content-Type': 'application/json' } },
    )
  }

  const adapter = createImageAdapter(provider)
  if (!adapter) {
    return new Response(
      JSON.stringify({
        error: 'Provider does not support image generation',
      }),
      {
        status: 400,
        headers: { 'Content-Type': 'application/json' },
      },
    )
  }

  try {
    const stream = generateImage({ adapter, prompt, stream: true })
    return toServerSentEventsResponse(stream)
  } catch (error: any) {
    return new Response(JSON.stringify({ error: error.message }), {
      status: 500,
      headers: { 'Content-Type': 'application/json' },
    })
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/src/routes/api.image.ts` around lines 22 - 48, Validate incoming
body in the POST handler: ensure prompt is a non-empty string and provider is a
valid string (return a 400 JSON Response like the existing adapter error if
invalid) before calling createImageAdapter; then call
createImageAdapter(provider) and keep the same 400 return for unsupported
providers. Treat generateImage as potentially async/erroring during iteration:
await it if it returns a Promise, and ensure errors from the returned stream are
propagated into a catch (either by wrapping the async-generator/stream iteration
inside a try/catch or by registering an 'error' handler that rejects so the
surrounding catch can return a 500 JSON). Update references in this handler
(POST, createImageAdapter, generateImage, toServerSentEventsResponse)
accordingly so runtime and async stream errors result in proper JSON error
responses.
function createTTSAdapter(provider: Provider) {
  const factories: Record<string, () => any> = {
    openai: () => openaiSpeech({ baseURL: LLMOCK_URL }),
  }
  return factories[provider]?.()
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify that geminiSpeech adapter exists in `@tanstack/ai-gemini`
rg -l "geminiSpeech|gemini.*Speech" --type tsRepository: TanStack/ai
Length of output: 210
🏁 Script executed:
# Check the geminiSpeech export from `@tanstack/ai-gemini`
cat -n packages/typescript/ai-gemini/src/index.tsRepository: TanStack/ai
Length of output: 2924
🏁 Script executed:
# Check the geminiSpeech TTS adapter implementation
cat -n packages/typescript/ai-gemini/src/adapters/tts.tsRepository: TanStack/ai
Length of output: 8252
🏁 Script executed:
# Verify feature-support.ts content
cat -n testing/e2e/src/lib/feature-support.tsRepository: TanStack/ai
Length of output: 3112
🏁 Script executed:
# Check the current api.tts.ts file
cat -n testing/e2e/src/routes/api.tts.ts
Length of output: 1837
🏁 Script executed:
# Compare with api.image.ts pattern to understand expected multi-provider handling
cat -n testing/e2e/src/routes/api.image.ts
Length of output: 2129
Add missing Gemini TTS adapter implementation.
feature-support.ts declares `tts: new Set(['openai', 'gemini'])`, but createTTSAdapter only supports openai. Gemini TTS requests will fail with a 400 error despite the feature being marked as supported. Implement a gemini adapter following the same pattern as api.image.ts.
Suggested fix
+import { geminiSpeech } from '@tanstack/ai-gemini'
import { openaiSpeech } from '@tanstack/ai-openai'
import type { Provider } from '@/lib/types'
const LLMOCK_URL = process.env.LLMOCK_URL || 'http://127.0.0.1:4010'
function createTTSAdapter(provider: Provider) {
const factories: Record<string, () => any> = {
openai: () => openaiSpeech({ baseURL: LLMOCK_URL }),
+ gemini: () => geminiSpeech({ baseURL: LLMOCK_URL }),
}
return factories[provider]?.()
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/src/routes/api.tts.ts` around lines 8 - 13, createTTSAdapter
currently only returns an OpenAI TTS factory causing Gemini-marked TTS to fail;
add a "gemini" entry to the factories map that returns the gemini TTS adapter
(following the same pattern used in api.image.ts) so createTTSAdapter supports
both providers. Locate the createTTSAdapter function and add a
factories["gemini"] = () => geminiSpeech({ baseURL: LLMOCK_URL }) (or the
equivalent geminiSpeech factory used elsewhere) ensuring the Provider type
string matches "gemini".
import path from 'path'

const testImagePath = path.resolve(__dirname, '../test-assets/guitar-meme.jpg')
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Confirm ESM mode and locate __dirname usage in test files
rg -n '"type"\s*:\s*"module"' testing/e2e/package.json
rg -n '\b__dirname\b' testing/e2e/tests
Length of output: 333
🏁 Script executed:
head -25 testing/e2e/tests/multimodal-image.spec.ts
Length of output: 832
Replace __dirname with ESM-safe file URL resolution.
This file is configured for ESM ("type": "module" in package.json), where __dirname is undefined and will cause a runtime error. Use fileURLToPath with import.meta.url instead.
Suggested fix
-import path from 'path'
+import { fileURLToPath } from 'node:url'
@@
-const testImagePath = path.resolve(__dirname, '../test-assets/guitar-meme.jpg')
+const testImagePath = fileURLToPath(
+ new URL('../test-assets/guitar-meme.jpg', import.meta.url),
+)

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-import path from 'path'
-const testImagePath = path.resolve(__dirname, '../test-assets/guitar-meme.jpg')
+import { fileURLToPath } from 'node:url'
+const testImagePath = fileURLToPath(
+  new URL('../test-assets/guitar-meme.jpg', import.meta.url),
+)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/tests/multimodal-image.spec.ts` around lines 9 - 12, Replace the
ESM-unsafe use of __dirname when building testImagePath: import path remains,
but compute the current file directory via fileURLToPath(import.meta.url) (from
'url') and use that result instead of __dirname to resolve
'../test-assets/guitar-meme.jpg'; update the top-level imports to include
fileURLToPath (and URL if needed) and change the testImagePath construction to
use the derived directory value so the file path works under "type":"module".
import path from 'path'

const testImagePath = path.resolve(__dirname, '../test-assets/guitar-shop.png')
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== package.json module type =="
jq -r '.type // "not-set"' package.json
echo
echo "== Playwright config files =="
fd -i 'playwright.config.*' | while read -r f; do
echo "--- $f ---"
sed -n '1,140p' "$f"
done
echo
echo "== TS module settings =="
fd -i 'tsconfig*.json' | while read -r f; do
echo "--- $f ---"
jq -r '.compilerOptions.module // "unset", .compilerOptions.moduleResolution // "unset"' "$f" 2>/dev/null || true
done
echo
echo "== __dirname usage in e2e tests =="
rg -n --type=ts -C2 '\b__dirname\b|\bimport\.meta\.url\b' testing/e2e/tests
Length of output: 6449
Replace __dirname with ESM-compatible path resolution.
File uses __dirname which is undefined in ESM. The project is configured with "type": "module" and TypeScript targets ESNext. This will fail at runtime with ReferenceError: __dirname is not defined.
Apply the suggested pattern:
ESM-safe fix
-import path from 'path'
+import path from 'node:path'
+import { fileURLToPath } from 'node:url'
-const testImagePath = path.resolve(__dirname, '../test-assets/guitar-shop.png')
+const __filename = fileURLToPath(import.meta.url)
+const __dirname = path.dirname(__filename)
+const testImagePath = path.resolve(__dirname, '../test-assets/guitar-shop.png')

Note: testing/e2e/tests/multimodal-image.spec.ts has the same issue and should be fixed identically.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-import path from 'path'
-const testImagePath = path.resolve(__dirname, '../test-assets/guitar-shop.png')
+import path from 'node:path'
+import { fileURLToPath } from 'node:url'
+const __filename = fileURLToPath(import.meta.url)
+const __dirname = path.dirname(__filename)
+const testImagePath = path.resolve(__dirname, '../test-assets/guitar-shop.png')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/tests/multimodal-structured.spec.ts` around lines 9 - 12, The
tests use __dirname with path.resolve to build testImagePath which breaks in
ESM; replace that with an import.meta.url-based resolution: either import {
fileURLToPath } from 'url' and derive __filename =
fileURLToPath(import.meta.url) and __dirname = path.dirname(__filename) and keep
const testImagePath = path.resolve(__dirname, '../test-assets/guitar-shop.png'),
or simply compute testImagePath as new URL('../test-assets/guitar-shop.png',
import.meta.url).pathname (or fileURLToPath(new URL(..., import.meta.url))) and
remove __dirname; apply the same change in the multimodal-image.spec test as
well so testImagePath no longer references __dirname.
…irectories individually
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@testing/e2e/global-setup.ts`:
- Line 6: The test setup currently creates two LLMock instances causing port
collisions: the direct instantiation new LLMock(...) in global-setup.ts and the
separate ensureLLMock() in llmock-server.ts. Fix by consolidating ownership —
either remove the direct new LLMock(...) from global-setup.ts and let
ensureLLMock() be the sole starter, or modify ensureLLMock() to first check and
reuse globalThis.__llmock (validate it is an LLMock instance and still
listening) before creating a new LLMock; update any teardown to only stop the
single owner instance. Ensure you reference and protect the LLMock constructor,
ensureLLMock(), and globalThis.__llmock when implementing the chosen approach.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6e6da5a2-18c0-4d6e-96d1-4d005c0ebe4e
📒 Files selected for processing (3)
- testing/e2e/global-setup.ts
- testing/e2e/tests/multimodal-image.spec.ts
- testing/e2e/tests/multimodal-structured.spec.ts
✅ Files skipped from review due to trivial changes (1)
- testing/e2e/tests/multimodal-image.spec.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- testing/e2e/tests/multimodal-structured.spec.ts
import path from 'path'

export default async function globalSetup() {
  const mock = new LLMock({ port: 4010, host: '127.0.0.1', logLevel: 'info' })
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== LLMock construction/start points =="
rg -n --type=ts "new LLMock|\\.start\\(" testing/e2e -C2
echo
echo "== globalThis llmock usage =="
rg -n --type=ts "__llmock" testing/e2e -C2
echo
echo "== ensureLLMock call sites in routes =="
rg -n --type=ts "\\bensureLLMock\\s*\\(" testing/e2e/src/routes -C2
Length of output: 4112
🏁 Script executed:
cat -n testing/e2e/src/lib/llmock-server.ts
Length of output: 1187
Fix dual LLMock ownership to prevent port collision.
global-setup.ts line 6 creates and starts an LLMock instance on 127.0.0.1:4010, but ensureLLMock() in llmock-server.ts independently creates and starts a second instance on the same port when LLMOCK_RECORD is enabled. Although ensureLLMock() has idempotency logic within its own module scope, it does not coordinate with the global-setup instance, leading to EADDRINUSE and orphaned instances. The globalThis.__llmock reference is not reliably shared across Playwright worker processes.
Consolidate to a single startup path: either let ensureLLMock() be the sole LLMock owner (removing the global-setup instance), or have ensureLLMock() reuse the global-setup instance via globalThis.__llmock with proper instance checks.
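The reuse branch of that second option can be sketched as follows, using a hypothetical FakeLLMock in place of the real LLMock class; note this only deduplicates within one process, so it does not by itself fix the cross-worker sharing caveat above:

```typescript
// Hypothetical stand-in for the real LLMock server class.
class FakeLLMock {
  listening = false
  async start(): Promise<void> {
    this.listening = true
  }
  async stop(): Promise<void> {
    this.listening = false
  }
}

const g = globalThis as { __llmock?: FakeLLMock }

// Reuse the instance stashed on globalThis while it is still listening;
// otherwise start a fresh one and register it as the single owner.
async function ensureLLMock(): Promise<FakeLLMock> {
  const existing = g.__llmock
  if (existing instanceof FakeLLMock && existing.listening) {
    return existing
  }
  const mock = new FakeLLMock()
  await mock.start()
  g.__llmock = mock
  return mock
}
```

With this shape, teardown stops only g.__llmock, so no second instance ever tries to bind port 4010.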
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/global-setup.ts` at line 6, The test setup currently creates two
LLMock instances causing port collisions: the direct instantiation new
LLMock(...) in global-setup.ts and the separate ensureLLMock() in
llmock-server.ts. Fix by consolidating ownership — either remove the direct new
LLMock(...) from global-setup.ts and let ensureLLMock() be the sole starter, or
modify ensureLLMock() to first check and reuse globalThis.__llmock (validate it
is an LLMock instance and still listening) before creating a new LLMock; update
any teardown to only stop the single owner instance. Ensure you reference and
protect the LLMock constructor, ensureLLMock(), and globalThis.__llmock when
implementing the chosen approach.
Actionable comments posted: 4
🧹 Nitpick comments (7)
testing/e2e/src/routes/api.tts.ts (1)
20-34: Consider adding request body validation.

The request body is parsed directly without validation. If `text` is missing/invalid or `provider` is not a string, the code may behave unexpectedly. Per coding guidelines, Zod should be used for runtime schema validation.

♻️ Suggested validation pattern

+import { z } from 'zod'
 import { createFileRoute } from '@tanstack/react-router'
 import { generateSpeech, toServerSentEventsResponse } from '@tanstack/ai'
 import { createOpenaiSpeech } from '@tanstack/ai-openai'
 import type { Provider } from '@/lib/types'
+
+const ttsRequestSchema = z.object({
+  text: z.string().min(1),
+  provider: z.string(),
+})

 // ... in POST handler:
-  const body = await request.json()
-  const { text, provider } = body
+  const body = await request.json()
+  const parsed = ttsRequestSchema.safeParse(body)
+  if (!parsed.success) {
+    return new Response(
+      JSON.stringify({ error: 'Invalid request body' }),
+      { status: 400, headers: { 'Content-Type': 'application/json' } },
+    )
+  }
+  const { text, provider } = parsed.data

As per coding guidelines: "Use Zod for runtime schema validation and type inference."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/src/routes/api.tts.ts` around lines 20 - 34, Add Zod-based runtime validation for the POST request body: import z from 'zod', declare a schema (e.g., text: z.string().min(1), provider: z.string()), and run schema.safeParse or schema.parse on the parsed request.json() before using text/provider. If validation fails, return a 400 JSON Response with the validation errors and Content-Type header. Use the validated values when calling createTTSAdapter(provider) so provider is guaranteed to be a string and text is non-empty.

testing/e2e/src/routes/api.transcription.ts (1)
9-17: Tighten adapter factory typing (`any` and a string-key map are too loose).

`Record<string, () => any>` drops provider/type safety and makes it easier to ship unsupported configs unnoticed. Prefer `Partial<Record<Provider, ...>>` with a concrete return type.

Suggested refactor

-function createTranscriptionAdapter(provider: Provider) {
-  const factories: Record<string, () => any> = {
+type TranscriptionAdapter = ReturnType<typeof createOpenaiTranscription>
+
+function createTranscriptionAdapter(provider: Provider): TranscriptionAdapter | undefined {
+  const factories: Partial<Record<Provider, () => TranscriptionAdapter>> = {
     openai: () =>
       createOpenaiTranscription('whisper-1', DUMMY_KEY, {
         baseURL: LLMOCK_URL,
       }),
   }
   return factories[provider]?.()
 }

As per coding guidelines "**/*.{ts,tsx}: Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/src/routes/api.transcription.ts` around lines 9 - 17, The factories map in createTranscriptionAdapter is too loose (Record<string, () => any>) which loses Provider- and model-specific typing; change the factories type to Partial<Record<Provider, () => ReturnType<typeof createOpenaiTranscription>>> (or a concrete shared transcription adapter interface) so only keys of the Provider union are allowed and the return type is explicit, then update the function signature to return that concrete adapter type instead of any; ensure the factories variable (and its openai entry that calls createOpenaiTranscription with DUMMY_KEY and LLMOCK_URL) uses the strengthened type so unsupported provider keys are a compile-time error.

testing/e2e/src/routes/api.summarize.ts (2)
2-7: Switch summarize imports to `/adapters` subpaths.

These provider imports should use adapter subpath exports to keep the implementation tree-shakeable.

As per coding guidelines: "Use tree-shakeable adapter architecture for provider implementations - export specialized adapters ... from `/adapters` subpath rather than monolithic adapters."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/src/routes/api.summarize.ts` around lines 2 - 7, Update the provider imports to use the tree-shakeable adapter subpaths: replace imports of createOpenaiSummarize, createAnthropicSummarize, createGeminiSummarize, createOllamaSummarize, and createGrokSummarize from their package roots to their respective /adapters subpath exports (e.g., import createOpenaiSummarize from '@tanstack/ai-openai/adapters', likewise for anthropic/gemini/ollama/grok) while keeping summarize and toServerSentEventsResponse imports unchanged so the module remains tree-shakeable.
39-41: Validate request payload with Zod before destructuring.
`request.json()` is used without runtime validation; malformed payloads can flow into adapter creation and produce ambiguous 500s. Parse/validate and return 400 on invalid input.

As per coding guidelines: "Use Zod for runtime schema validation and type inference..."
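The guideline calls for Zod; as a dependency-free sketch of the same safeParse-style contract (the body shape here mirrors the suggested SummarizeBody schema and is an assumption, not the route's actual types):

```typescript
type SummarizeBody = { text: string; provider?: string; stream?: boolean }

// Hand-rolled stand-in for SummarizeBody.safeParse; the real route would
// call z.object({...}).safeParse and return 400 on failure.
function parseSummarizeBody(
  input: unknown,
):
  | { success: true; data: SummarizeBody }
  | { success: false; error: string } {
  const body = input as Record<string, unknown> | null
  if (!body || typeof body.text !== 'string' || body.text.length === 0) {
    return { success: false, error: 'text must be a non-empty string' }
  }
  if (body.provider !== undefined && typeof body.provider !== 'string') {
    return { success: false, error: 'provider must be a string' }
  }
  if (body.stream !== undefined && typeof body.stream !== 'boolean') {
    return { success: false, error: 'stream must be a boolean' }
  }
  return {
    success: true,
    data: {
      text: body.text,
      provider: body.provider as string | undefined,
      stream: body.stream as boolean | undefined,
    },
  }
}
```

Either way, the handler branches on `success` and returns a 400 with the error details before any adapter is created.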
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/src/routes/api.summarize.ts` around lines 39 - 41, Validate the incoming JSON using a Zod schema before destructuring: define a schema (e.g., const SummarizeBody = z.object({ text: z.string(), provider: z.string().optional(), stream: z.boolean().optional() })) and call SummarizeBody.safeParse(await request.json()); if parse fails return a 400 response with the parse error, otherwise extract { text, provider, stream: shouldStream } from the parsed.data and continue; replace the direct use of request.json() and body variable in the route handler with this validated parsed data to prevent malformed payloads reaching adapter creation.

testing/e2e/src/lib/providers.ts (3)
12-13: Rename constants to camelCase to match repository naming rules.

`LLMOCK_URL` and `DUMMY_KEY` violate the project's camelCase variable naming convention.

As per coding guidelines: "**/*.{ts,tsx,js,jsx}: Use camelCase for function and variable names throughout the codebase."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/src/lib/providers.ts` around lines 12 - 13, Constants LLMOCK_URL and DUMMY_KEY violate camelCase naming; rename them to camelCase (e.g., llmockUrl and dummyKey) and update every usage. Specifically, change const LLMOCK_URL to const llmockUrl and const DUMMY_KEY to const dummyKey, then replace all references to LLMOCK_URL and DUMMY_KEY in this module (and any imports/exports) to the new identifiers to comply with the camelCase rule.
1-9: Use adapter subpath imports for tree-shaking.

These imports should come from provider `/adapters` subpaths rather than package roots to preserve the intended tree-shakeable architecture.

As per coding guidelines: "Use tree-shakeable adapter architecture for provider implementations - export specialized adapters ... from `/adapters` subpath rather than monolithic adapters."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/src/lib/providers.ts` around lines 1 - 9, Replace the current package-root imports with their adapter subpath equivalents so the build can tree-shake the providers: import AnyTextAdapter and createChatOptions from '@tanstack/ai' stays the same, but change createOpenaiChat, createAnthropicChat, createGeminiChat, createOllamaChat, createGroqText, createGrokText, and createOpenRouterText to import from their packages' '/adapters' subpaths (e.g., '@tanstack/ai-openai/adapters', '@tanstack/ai-anthropic/adapters', '@tanstack/ai-gemini/adapters', '@tanstack/ai-ollama/adapters', '@tanstack/ai-groq/adapters', '@tanstack/ai-grok/adapters', '@tanstack/ai-openrouter/adapters') so the adapter implementations (createOpenaiChat, createAnthropicChat, createGeminiChat, createOllamaChat, createGroqText, createGrokText, createOpenRouterText) are imported from their tree-shakeable adapter modules.
31-72: Consider refactoring to a discriminated union or provider-specific factory pattern to avoid model casts.

The casts (e.g., `model as 'gpt-4o'`) bypass type-safe model inference. Since adapter functions like `createOpenaiChat<TModel extends (typeof OPENAI_CHAT_MODELS)[number]>` expect literal model types, a discriminated union pattern would preserve compile-time safety:

// Instead of casting, use a discriminated union
type ProviderFactory =
  | { provider: 'openai'; model: OpenAIChatModel }
  | { provider: 'anthropic'; model: AnthropicChatModel }
  // ...

Alternatively, use provider-specific factory functions that handle model resolution internally, ensuring no unsupported provider/model combinations reach runtime.
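A minimal, self-contained sketch of that discriminated-union shape (the model unions and describeAdapter here are illustrative placeholders, not the packages' real exports):

```typescript
// Illustrative model unions; the real ones would come from each provider package.
type OpenAIChatModel = 'gpt-4o' | 'gpt-4o-mini'
type AnthropicChatModel = 'claude-sonnet-4-5'

type ProviderChoice =
  | { provider: 'openai'; model: OpenAIChatModel }
  | { provider: 'anthropic'; model: AnthropicChatModel }

// Checking the discriminant narrows `choice.model` to the matching
// provider's literal union, so no `model as 'gpt-4o'` cast is needed.
function describeAdapter(choice: ProviderChoice): string {
  if (choice.provider === 'openai') {
    // choice.model is narrowed to OpenAIChatModel here
    return `openai adapter for ${choice.model}`
  }
  // choice.model is narrowed to AnthropicChatModel here
  return `anthropic adapter for ${choice.model}`
}
```

An invalid pairing such as `{ provider: 'openai', model: 'claude-sonnet-4-5' }` then fails to typecheck instead of failing at runtime.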
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/src/lib/providers.ts` around lines 31 - 72, The factories map currently casts a shared `model` to provider-specific literal types (e.g., `model as 'gpt-4o'`), which defeats compile-time model safety; replace this with a discriminated-union or provider-specific factory pattern so each branch enforces the correct model type. Concretely, introduce a union type or per-provider factory functions that pair a provider tag with its strongly typed model (e.g., use types like OpenAIChatModel, AnthropicChatModel) and change the `factories: Record<Provider, () => { adapter: AnyTextAdapter }>` implementation to use those provider-specific factories (or a discriminated union resolver) and call createOpenaiChat, createAnthropicChat, createGeminiChat, createOllamaChat, createGroqText, createGrokText, createOpenRouterText with their correctly typed model parameters instead of casting.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@testing/e2e/src/lib/providers.ts`:
- Line 1: Reorder the imports in testing/e2e/src/lib/providers.ts so the
type-only import of AnyTextAdapter from '@tanstack/ai' conforms to the project's
import/order groups — move the "import type { AnyTextAdapter } from
'@tanstack/ai'" statement to the correct position among external imports (or
alongside other type imports) so ESLint's import/order rule is satisfied; keep
it as a type-only import and ensure it sits with other third-party module
imports rather than before/after internal/project imports.
In `@testing/e2e/src/routes/api.summarize.ts`:
- Around line 13-32: The createSummarizeAdapter function currently falls back to
the OpenAI factory when a provider key is missing, hiding unsupported providers;
instead update createSummarizeAdapter to explicitly fail for unknown providers
by checking factories[provider] and throwing a clear error (including the
provider value) when absent, or alternatively add the missing provider entry
(e.g., "groq") to the factories map; reference the createSummarizeAdapter
function and the factories constant to implement the explicit error path rather
than returning factories.openai by default.
In `@testing/e2e/src/routes/api.transcription.ts`:
- Around line 6-7: Rename the top-level constants LLMOCK_URL and DUMMY_KEY to
camelCase per project convention (e.g., llmMockUrl and dummyKey) and update all
references in the file (and tests if any) to use the new names; ensure the
values remain the same and export/usage sites still work by updating any
occurrences of LLMOCK_URL and DUMMY_KEY to the new identifiers (check const
declarations and any places they are imported/used within
testing/e2e/src/routes/api.transcription.ts).
- Around line 24-27: Validate and type the incoming request body before using
it: replace the raw use of request.json() and direct destructure of { audio,
provider } with a Zod schema (use z.enum for Provider values and validate audio
as a string or appropriate type) to parse and infer types, then call
createTranscriptionAdapter(provider) only after successful parse; on
parse/validation failure return a 400 response with the validation error details
instead of letting the try-catch produce a 500. Ensure you reference the
request.json() call, the provider/audio fields, and
createTranscriptionAdapter(provider) when making the change so the runtime
validation and error handling are applied in this route handler.
---
Nitpick comments:
In `@testing/e2e/src/lib/providers.ts`:
- Around line 12-13: Constants LLMOCK_URL and DUMMY_KEY violate camelCase
naming; rename them to camelCase (e.g., llmockUrl and dummyKey) and update
every usage. Specifically, change
const LLMOCK_URL to const llmockUrl (or llMockUrl if preferred repo style) and
const DUMMY_KEY to const dummyKey, then replace all references to LLMOCK_URL and
DUMMY_KEY in this module (and any imports/exports) to the new identifiers to
comply with the camelCase rule.
- Around line 1-9: Replace the current package-root imports with their adapter
subpath equivalents so the build can tree-shake the providers: import
AnyTextAdapter and createChatOptions from '@tanstack/ai' stays the same, but
change createOpenaiChat, createAnthropicChat, createGeminiChat,
createOllamaChat, createGroqText, createGrokText, and createOpenRouterText to
import from their packages' '/adapters' subpaths (e.g.,
'@tanstack/ai-openai/adapters', '@tanstack/ai-anthropic/adapters',
'@tanstack/ai-gemini/adapters', '@tanstack/ai-ollama/adapters',
'@tanstack/ai-groq/adapters', '@tanstack/ai-grok/adapters',
'@tanstack/ai-openrouter/adapters') so the adapter implementations
(createOpenaiChat, createAnthropicChat, createGeminiChat, createOllamaChat,
createGroqText, createGrokText, createOpenRouterText) are imported from their
tree-shakeable adapter modules.
- Around line 31-72: The factories map currently casts a shared `model` to
provider-specific literal types (e.g., `model as 'gpt-4o'`), which defeats
compile-time model safety; replace this with a discriminated-union or
provider-specific factory pattern so each branch enforces the correct model
type. Concretely, introduce a union type or per-provider factory functions that
pair a provider tag with its strongly typed model (e.g., use types like
OpenAIChatModel, AnthropicChatModel) and change the `factories: Record<Provider,
() => { adapter: AnyTextAdapter }>` implementation to use those
provider-specific factories (or a discriminated union resolver) and call
createOpenaiChat, createAnthropicChat, createGeminiChat, createOllamaChat,
createGroqText, createGrokText, createOpenRouterText with their correctly typed
model parameters instead of casting.
In `@testing/e2e/src/routes/api.summarize.ts`:
- Around line 2-7: Update the provider imports to use the tree-shakeable adapter
subpaths: replace imports of createOpenaiSummarize, createAnthropicSummarize,
createGeminiSummarize, createOllamaSummarize, and createGrokSummarize from their
package roots to their respective /adapters subpath exports (e.g., import
createOpenaiSummarize from '@tanstack/ai-openai/adapters', likewise for
anthropic/gemini/ollama/grok) while keeping summarize and
toServerSentEventsResponse imports unchanged so the module remains
tree-shakeable.
- Around line 39-41: Validate the incoming JSON using a Zod schema before
destructuring: define a schema (e.g., const SummarizeBody = z.object({ text:
z.string(), provider: z.string().optional(), stream: z.boolean().optional() }))
and call SummarizeBody.safeParse(await request.json()); if parse fails return a
400 response with the parse error, otherwise extract { text, provider, stream:
shouldStream } from the parsed.data and continue; replace the direct use of
request.json() and body variable in the route handler with this validated parsed
data to prevent malformed payloads reaching adapter creation.
In `@testing/e2e/src/routes/api.transcription.ts`:
- Around line 9-17: The factories map in createTranscriptionAdapter is too loose
(Record<string, () => any>) which loses Provider- and model-specific typing;
change the factories type to Partial<Record<Provider, () => ReturnType<typeof
createOpenaiTranscription>>> (or a concrete shared transcription adapter
interface) so only keys of the Provider union are allowed and the return type is
explicit, then update the function signature to return that concrete adapter
type instead of any; ensure the factories variable (and its openai entry that
calls createOpenaiTranscription with DUMMY_KEY and LLMOCK_URL) uses the
strengthened type so unsupported provider keys are a compile-time error.
In `@testing/e2e/src/routes/api.tts.ts`:
- Around line 20-34: Add Zod-based runtime validation for the POST request body:
import z from 'zod', declare a schema (e.g., text: z.string().min(1), provider:
z.string()), and run schema.safeParse or schema.parse on the parsed
request.json() before using text/provider. If validation fails, return a 400
JSON Response with the validation errors and Content-Type header. Use the
validated values when calling createTTSAdapter(provider) so provider is
guaranteed to be a string and text is non-empty.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 62846cfd-a35e-46a6-ad7f-428d16c56684
📒 Files selected for processing (5)
testing/e2e/src/lib/providers.tstesting/e2e/src/routes/api.image.tstesting/e2e/src/routes/api.summarize.tstesting/e2e/src/routes/api.transcription.tstesting/e2e/src/routes/api.tts.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- testing/e2e/src/routes/api.image.ts
@@ -0,0 +1,75 @@
import type { AnyTextAdapter } from '@tanstack/ai'
Fix import order lint error on Line 1.
ESLint reports import/order: the type import from @tanstack/ai is out of the configured order.
🧰 Tools
🪛 ESLint
[error] 1-1: @tanstack/ai type import should occur after import of @tanstack/ai-openrouter
(import/order)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/src/lib/providers.ts` at line 1, Reorder the imports in
testing/e2e/src/lib/providers.ts so the type-only import of AnyTextAdapter from
'@tanstack/ai' conforms to the project's import/order groups — move the "import
type { AnyTextAdapter } from '@tanstack/ai'" statement to the correct position
among external imports (or alongside other type imports) so ESLint's
import/order rule is satisfied; keep it as a type-only import and ensure it sits
with other third-party module imports rather than before/after internal/project
imports.
function createSummarizeAdapter(provider: Provider) {
  const factories: Record<string, () => any> = {
    openai: () =>
      createOpenaiSummarize('gpt-4o', DUMMY_KEY, { baseURL: LLMOCK_URL }),
    anthropic: () =>
      createAnthropicSummarize('claude-sonnet-4-5', DUMMY_KEY, {
        baseURL: LLMOCK_URL,
      }),
    gemini: () =>
      createGeminiSummarize(DUMMY_KEY, 'gemini-2.0-flash', {
        baseURL: LLMOCK_URL,
      }),
    ollama: () => createOllamaSummarize('mistral', LLMOCK_URL),
    grok: () =>
      createGrokSummarize('grok-3', DUMMY_KEY, { baseURL: LLMOCK_URL }),
    openrouter: () =>
      createOpenaiSummarize('gpt-4o', DUMMY_KEY, { baseURL: LLMOCK_URL }),
  }
  return factories[provider]?.() ?? factories.openai!()
}
Do not silently fall back to OpenAI for unsupported providers.
Line 31 masks provider mismatches (groq currently has no factory) by routing to OpenAI, which can hide test coverage gaps and produce incorrect fixture matching. Fail explicitly for unsupported providers (or add the missing provider adapter).
Suggested fix
function createSummarizeAdapter(provider: Provider) {
- const factories: Record<string, () => any> = {
+ const factories: Partial<Record<Provider, () => any>> = {
openai: () =>
createOpenaiSummarize('gpt-4o', DUMMY_KEY, { baseURL: LLMOCK_URL }),
@@
openrouter: () =>
createOpenaiSummarize('gpt-4o', DUMMY_KEY, { baseURL: LLMOCK_URL }),
}
- return factories[provider]?.() ?? factories.openai!()
+ const factory = factories[provider]
+ if (!factory) {
+ throw new Error(`Provider '${provider}' is not supported for summarize`)
+ }
+ return factory()
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
 function createSummarizeAdapter(provider: Provider) {
-  const factories: Record<string, () => any> = {
+  const factories: Partial<Record<Provider, () => any>> = {
     openai: () =>
       createOpenaiSummarize('gpt-4o', DUMMY_KEY, { baseURL: LLMOCK_URL }),
     anthropic: () =>
       createAnthropicSummarize('claude-sonnet-4-5', DUMMY_KEY, {
         baseURL: LLMOCK_URL,
       }),
     gemini: () =>
       createGeminiSummarize(DUMMY_KEY, 'gemini-2.0-flash', {
         baseURL: LLMOCK_URL,
       }),
     ollama: () => createOllamaSummarize('mistral', LLMOCK_URL),
     grok: () =>
       createGrokSummarize('grok-3', DUMMY_KEY, { baseURL: LLMOCK_URL }),
     openrouter: () =>
       createOpenaiSummarize('gpt-4o', DUMMY_KEY, { baseURL: LLMOCK_URL }),
   }
-  return factories[provider]?.() ?? factories.openai!()
+  const factory = factories[provider]
+  if (!factory) {
+    throw new Error(`Provider '${provider}' is not supported for summarize`)
+  }
+  return factory()
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/src/routes/api.summarize.ts` around lines 13 - 32, The
createSummarizeAdapter function currently falls back to the OpenAI factory when
a provider key is missing, hiding unsupported providers; instead update
createSummarizeAdapter to explicitly fail for unknown providers by checking
factories[provider] and throwing a clear error (including the provider value)
when absent, or alternatively add the missing provider entry (e.g., "groq") to
the factories map; reference the createSummarizeAdapter function and the
factories constant to implement the explicit error path rather than returning
factories.openai by default.
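Beyond throwing at runtime, the missing-provider class of bug can also be caught at compile time by making the factory map provably exhaustive. A minimal sketch, assuming a seven-provider union and placeholder factory bodies (not the harness's real createXxxSummarize calls):

```typescript
type Provider =
  | 'openai' | 'anthropic' | 'gemini' | 'ollama' | 'groq' | 'grok' | 'openrouter'

// Placeholder factories stand in for the real createXxxSummarize calls.
// `satisfies Record<Provider, ...>` makes the map exhaustive: omitting
// any Provider key is a type error instead of a silent runtime fallback.
const factories = {
  openai: () => 'openai-adapter',
  anthropic: () => 'anthropic-adapter',
  gemini: () => 'gemini-adapter',
  ollama: () => 'ollama-adapter',
  groq: () => 'groq-adapter',
  grok: () => 'grok-adapter',
  openrouter: () => 'openrouter-adapter',
} satisfies Record<Provider, () => string>

function createSummarizeAdapter(provider: Provider): string {
  // No `?? factories.openai!()` fallback needed: every key is present.
  return factories[provider]()
}
```

With this shape, adding a new member to `Provider` fails the build until a factory is added, which is the coverage guarantee the review is asking for.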
```ts
const LLMOCK_URL = process.env.LLMOCK_URL || 'http://127.0.0.1:4010'
const DUMMY_KEY = 'sk-e2e-test-dummy-key'
```
Rename constants to camelCase to match repo convention.
LLMOCK_URL and DUMMY_KEY violate the project’s camelCase naming rule for variables.
As per coding guidelines "**/*.{ts,tsx,js,jsx}: Use camelCase for function and variable names throughout the codebase".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/src/routes/api.transcription.ts` around lines 6 - 7, Rename the
top-level constants LLMOCK_URL and DUMMY_KEY to camelCase per project convention
(e.g., llmMockUrl and dummyKey) and update all references in the file (and tests
if any) to use the new names; ensure the values remain the same and export/usage
sites still work by updating any occurrences of LLMOCK_URL and DUMMY_KEY to the
new identifiers (check const declarations and any places they are imported/used
within testing/e2e/src/routes/api.transcription.ts).
```ts
const body = await request.json()
const { audio, provider } = body

const adapter = createTranscriptionAdapter(provider)
```
🧩 Analysis chain

🏁 Script executed:

```shell
# First, let's examine the file
cat -n testing/e2e/src/routes/api.transcription.ts
```

Repository: TanStack/ai
Length of output: 2150

🏁 Script executed:

```shell
# Check if Zod is in dependencies
fd package.json | head -5 | xargs cat | grep -A 20 '"dependencies"'
```

Repository: TanStack/ai
Length of output: 2545

🏁 Script executed:

```shell
# Find the Provider type definition
rg "type Provider|interface Provider" --type ts --type tsx
```

Repository: TanStack/ai
Length of output: 82

🏁 Script executed:

```shell
# Find Provider type definition
fd types.ts --type f
```

Repository: TanStack/ai
Length of output: 1528

🏁 Script executed:

```shell
# Check the main package.json for Zod
find . -maxdepth 3 -name 'package.json' -type f -exec grep -l 'zod' {} \;
```

Repository: TanStack/ai
Length of output: 302

🏁 Script executed:

```shell
# Look for similar validation patterns in API route files
rg "request\.json\(\)" --type ts -A 3 -B 1
```

Repository: TanStack/ai
Length of output: 18082

🏁 Script executed:

```shell
# Check the Provider type definition
cat -n testing/e2e/src/lib/types.ts
```

Repository: TanStack/ai
Length of output: 1368

🏁 Script executed:

```shell
# Check if Zod is in the e2e package dependencies
cat testing/e2e/package.json | grep -A 30 '"dependencies"'
```

Repository: TanStack/ai
Length of output: 1127

🏁 Script executed:

```shell
# Check if there are any existing validation schemas in the e2e testing directory
find testing/e2e -name "*.ts" -type f -exec grep -l "z\.object\|z\.enum" {} \;
```

Repository: TanStack/ai
Length of output: 114

🏁 Script executed:

```shell
# Look at one of the similar files to see if they also need validation
cat -n testing/e2e/src/routes/api.image.ts | head -40
```

Repository: TanStack/ai
Length of output: 1761

🏁 Script executed:

```shell
# Check existing schemas to understand the pattern used in this project
cat -n testing/e2e/src/lib/schemas.ts
```

Repository: TanStack/ai
Length of output: 461
Validate and type the request body before use.
request.json() can fail with malformed payloads, causing parse errors to be caught by the try-catch at line 40 and returned as 500 errors instead of 400. Validate provider and audio upfront using Zod to return a structured 400 on parse/validation failure.
Use z.enum() for the Provider type validation instead of z.custom(), as it provides better type inference and compile-time safety:
Suggested fix

```diff
+import { z } from 'zod'
+import { ALL_PROVIDERS } from '@/lib/types'
+
+const transcriptionRequestSchema = z.object({
+  provider: z.enum(ALL_PROVIDERS),
+  audio: z.any(),
+})
 ...
   POST: async ({ request }) => {
     await import('@/lib/llmock-server').then((m) => m.ensureLLMock())
-    const body = await request.json()
-    const { audio, provider } = body
+    const rawBody = await request.json().catch(() => null)
+    const parsed = transcriptionRequestSchema.safeParse(rawBody)
+    if (!parsed.success) {
+      return new Response(
+        JSON.stringify({ error: 'Invalid transcription request body' }),
+        { status: 400, headers: { 'Content-Type': 'application/json' } },
+      )
+    }
+    const { audio, provider } = parsed.data
```

Per coding guidelines: "**/*.{ts,tsx}: Use Zod for runtime schema validation and type inference".
📝 Committable suggestion

```ts
import { z } from 'zod'
import { ALL_PROVIDERS } from '@/lib/types'

const transcriptionRequestSchema = z.object({
  provider: z.enum(ALL_PROVIDERS),
  audio: z.any(),
})

POST: async ({ request }) => {
  await import('@/lib/llmock-server').then((m) => m.ensureLLMock())
  const rawBody = await request.json().catch(() => null)
  const parsed = transcriptionRequestSchema.safeParse(rawBody)
  if (!parsed.success) {
    return new Response(
      JSON.stringify({ error: 'Invalid transcription request body' }),
      { status: 400, headers: { 'Content-Type': 'application/json' } },
    )
  }
  const { audio, provider } = parsed.data
  const adapter = createTranscriptionAdapter(provider)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/src/routes/api.transcription.ts` around lines 24 - 27, Validate
and type the incoming request body before using it: replace the raw use of
request.json() and direct destructure of { audio, provider } with a Zod schema
(use z.enum for Provider values and validate audio as a string or appropriate
type) to parse and infer types, then call createTranscriptionAdapter(provider)
only after successful parse; on parse/validation failure return a 400 response
with the validation error details instead of letting the try-catch produce a
500. Ensure you reference the request.json() call, the provider/audio fields,
and createTranscriptionAdapter(provider) when making the change so the runtime
validation and error handling are applied in this route handler.
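The 400-vs-500 split the review asks for can also be sketched without the Zod dependency; this is an illustrative stand-in (the field names and provider list mirror the review, but `parseTranscriptionRequest` and `statusFor` are invented for the example):

```typescript
// Hedged sketch: the review recommends Zod; this dependency-free guard
// demonstrates the same behavior. Provider list mirrors the e2e harness
// but is an assumption here.
const allProviders = [
  'openai', 'anthropic', 'gemini', 'ollama', 'groq', 'grok', 'openrouter',
] as const
type Provider = (typeof allProviders)[number]

interface TranscriptionRequest {
  provider: Provider
  audio: string
}

// Returns the typed body on success, null on any shape violation.
function parseTranscriptionRequest(raw: unknown): TranscriptionRequest | null {
  if (typeof raw !== 'object' || raw === null) return null
  const body = raw as Record<string, unknown>
  if (!allProviders.includes(body.provider as Provider)) return null
  if (typeof body.audio !== 'string') return null
  return { provider: body.provider as Provider, audio: body.audio }
}

// A malformed body yields a 400 status instead of bubbling into the
// route's catch-all 500 handler.
function statusFor(raw: unknown): number {
  return parseTranscriptionRequest(raw) ? 200 : 400
}
```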
1. sendMessage API: use sendMessage(string) not sendMessage({role, parts})
- useChat's sendMessage accepts a plain string or {content} object
2. TextPart uses `content` not `text`: part.content instead of part.text
- ThinkingPart also uses content, not thinking
3. baseURL wiring: OpenAI-compatible SDKs need /v1 suffix
- OpenAI, Groq, Grok, OpenRouter: baseURL/v1
- Anthropic, Gemini, Ollama: bare baseURL
4. helpers: use pressSequentially + button click (fill() doesn't trigger React onChange)
5. global-setup: fix __dirname for ESM, use absolute paths for fixture loading
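Point 3's baseURL rule can be captured in one helper; `resolveBaseUrl` and its grouping set are illustrative names, not code from the PR:

```typescript
type Provider =
  | 'openai' | 'anthropic' | 'gemini' | 'ollama' | 'groq' | 'grok' | 'openrouter'

// OpenAI-compatible SDKs expect the /v1 prefix on the mock base URL;
// Anthropic, Gemini, and Ollama SDKs append their own path prefixes.
const openaiCompatible: ReadonlySet<Provider> = new Set<Provider>([
  'openai', 'groq', 'grok', 'openrouter',
])

// Illustrative helper: resolve the base URL a provider's SDK should be
// pointed at when talking to the llmock server.
function resolveBaseUrl(provider: Provider, llmockBase: string): string {
  return openaiCompatible.has(provider) ? `${llmockBase}/v1` : llmockBase
}
```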
Actionable comments posted: 7
♻️ Duplicate comments (3)
testing/e2e/src/lib/providers.ts (1)
1-2: ⚠️ Potential issue | 🟡 Minor — Reorder imports to satisfy `import/order`.

Line 1's `@tanstack/ai` type import is still out of the configured order and will keep lint failing.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `testing/e2e/src/lib/providers.ts` around lines 1-2, reorder the two imports from '@tanstack/ai' so they satisfy the project's import/order rule: either combine into a single import that includes createChatOptions and the type AnyTextAdapter (e.g., `import { createChatOptions, type AnyTextAdapter } from '@tanstack/ai'`) or ensure the runtime import of createChatOptions appears before the type-only import of AnyTextAdapter; update the lines referencing createChatOptions and AnyTextAdapter in providers.ts accordingly.

testing/e2e/src/routes/$provider/$feature.tsx (1)
89-105: ⚠️ Potential issue | 🟡 Minor — Guard `reader.result` before extracting base64.

`split(',')[1]` assumes a valid data URL and can produce `undefined`. That builds an invalid image part and sends a malformed multimodal request.

Defensive extraction:

```diff
 const reader = new FileReader()
 reader.onload = () => {
-  const base64 = (reader.result as string).split(',')[1]
+  const result = reader.result
+  if (typeof result !== 'string') return
+  const [, base64] = result.split(',', 2)
+  if (!base64) return
   sendMessage({
     content: [
       { type: 'text', content: text },
```

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `testing/e2e/src/routes/$provider/$feature.tsx` around lines 89-105, guard extraction of base64 from the FileReader in the reader.onload callback: validate that reader.result is a string and contains a comma-separated data URL before calling .split(',')[1], and abort or skip calling sendMessage if the extracted base64 is undefined or empty; update the reader.onload handler to perform this defensive check and only send the image part when the mime/payload is present and valid.

testing/e2e/src/routes/api.tts.ts (1)
10-15: ⚠️ Potential issue | 🔴 Critical — `gemini` is marked supported but rejected by `createTTSAdapter`.

`testing/e2e/src/lib/feature-support.ts` lists `tts` for both `openai` and `gemini`, but this factory only creates OpenAI. Every `/$provider/tts` run with `provider === 'gemini'` falls into the 400 path unless you either add the Gemini branch here or remove Gemini from the support matrix.

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `testing/e2e/src/routes/api.tts.ts` around lines 10-15, createTTSAdapter currently only handles 'openai' so requests with provider === 'gemini' are rejected; update the factories map in createTTSAdapter to include a 'gemini' entry that returns the correct Gemini TTS adapter (e.g., call the appropriate factory like createGeminiSpeech/GeminiTTS with required args such as DUMMY_KEY or the Gemini mock base URL) and ensure any needed import is added, or alternatively remove 'gemini' from the tts support matrix in testing/e2e/src/lib/feature-support.ts so the two lists stay consistent.
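A drift check along the lines this comment suggests can be sketched as a plain function; `featureSupport`, `ttsFactories`, and `unsupportedProviders` are illustrative stand-ins for the harness's real exports:

```typescript
// Illustrative stand-ins for feature-support.ts and the TTS factory map.
const featureSupport: Record<string, Array<string>> = {
  tts: ['openai', 'gemini'],
}
const ttsFactories: Record<string, () => string> = {
  openai: () => 'openai-tts',
  // 'gemini' missing: exactly the mismatch the review flags.
}

// Returns providers a feature claims to support but has no factory for;
// a test asserting this is empty would catch the gemini/tts drift.
function unsupportedProviders(
  feature: string,
  factories: Record<string, unknown>,
): Array<string> {
  return (featureSupport[feature] ?? []).filter((p) => !(p in factories))
}
```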
🧹 Nitpick comments (1)
testing/e2e/src/routes/api.image.ts (1)
3-5: Import adapters from the provider `/adapters` entrypoints.

These provider imports come from package roots instead of the `/adapters` subpaths the repo uses for tree-shakeable provider implementations.

As per coding guidelines, "Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters".

🤖 Prompt for AI Agents: Verify each finding against the current code and only fix it if needed. In `testing/e2e/src/routes/api.image.ts` around lines 3-5, the imports are using package root adapters instead of the tree-shakeable adapters; update the three imports (createOpenaiImage, createGeminiImage, createGrokImage) to import from each provider's /adapters entrypoint (e.g., import createOpenaiImage from '@tanstack/ai-openai/adapters', and similarly for '@tanstack/ai-gemini/adapters' and '@tanstack/ai-grok/adapters') so the image providers use the specialized adapter entrypoints.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@testing/e2e/src/components/ChatUI.tsx`:
- Around line 39-43: handleSubmit currently allows submitting via Enter even
when a response is in flight; add a guard checking the component's isLoading
state at the top of handleSubmit (and the duplicate submit handler around the
other occurrence) so it returns early when isLoading is true, preventing
onSendMessage and setInput from being called while loading; reference the
handleSubmit function and the other submit handler (the one at the later
occurrence) and ensure both check isLoading before calling
onSendMessage(input.trim()) and setInput('').
- Around line 1-6: The import order violates ESLint rules because the type
import UIMessage from '@tanstack/ai-react' is placed before markdown-related
imports; fix by reordering the imports so that ReactMarkdown, rehypeRaw,
rehypeSanitize, and remarkGfm come before the type import of UIMessage (keep
React hooks useEffect/useRef/useState at the top), ensuring the import list is:
React hooks, markdown-related imports (ReactMarkdown, rehypeRaw, rehypeSanitize,
remarkGfm), then the type import UIMessage from '@tanstack/ai-react' to satisfy
the linter.
In `@testing/e2e/src/lib/providers.ts`:
- Around line 12-17: Rename the constants to camelCase: change LLMOCK_BASE ->
llmockBase, LLMOCK_OPENAI -> llmockOpenai (or llmockOpenAI if preferring acronym
casing per repo style), and DUMMY_KEY -> dummyKey; update their declarations in
providers.ts and replace all usages across the codebase (imports/exports, tests,
and any reference in e2e helpers) to use the new names so no references remain
to the old UPPER_SNAKE identifiers; ensure any exported names or re-exports keep
consistent casing and run tests/lint to confirm naming rule compliance.
- Around line 18-75: The createTextAdapter currently accepts modelOverride:
string and uses unsafe "as '...'" casts, allowing invalid provider/model combos
to compile; change to a discriminated mapping and generic signature so
modelOverride is typed per provider: define a ProviderModelMap mapping each
provider to its model type (e.g., openai: OpenAIChatModel, anthropic:
AnthropicChatModel, etc.) and update createTextAdapter to be generic: function
createTextAdapter<P extends Provider>(provider: P, modelOverride?:
ProviderModelMap[P]) so callers can only pass valid models; remove all "as
'...'" casts in the factories (e.g., in createOpenaiChat, createAnthropicChat,
createGeminiChat, createOllamaChat, createGroqText, createGrokText,
createOpenRouterText) and make the local model variable typed from
ProviderModelMap[P] (fall back to the defaultModels lookup cast/convert safely
to that union) so the factory calls use correctly typed model values at compile
time.
In `testing/e2e/src/routes/$provider/$feature.tsx`:
- Around line 20-25: The addToCartClient stub uses Date.now() which makes cartId
non-deterministic; update the addToCartToolDef.client implementation (the
addToCartClient factory) to produce a deterministic cartId (e.g., use a constant
prefix plus a fixed timestamp, a deterministic counter, or a
seeded/fixture-based value) instead of Date.now(); ensure the returned object
still provides success, cartId, guitarId and quantity but with a stable cartId
for replayable llmock fixtures.
In `@testing/e2e/src/routes/api.image.ts`:
- Around line 47-49: The route currently returns an SSE stream from
generateImage via toServerSentEventsResponse, but ImageGenFeature is calling
res.json(); change the client features to parse SSE instead of JSON: update
ImageGenFeature to consume the text/event-stream using an SSE parser/reader,
listen for the "generation:result" event and extract the image URL from
event.payload.value.images[0].url, and resolve once that URL is received; apply
the same SSE parsing approach to TTSFeature and TranscriptionFeature (consume
the SSE stream, handle their corresponding event types, and extract the expected
payload fields) so clients no longer call res.json() on SSE responses.
In `@testing/e2e/src/routes/api.tts.ts`:
- Around line 37-39: The route currently returns SSE via
toServerSentEventsResponse(generateSpeech({ adapter, text, stream: true })), but
the frontend TTSFeature expects JSON (it calls res.json()); change the endpoint
to return a JSON response instead: call generateSpeech with stream: false (or
await the non-streaming result) and return a JSON Response containing the
generation result with Content-Type: application/json; update the use of
generateSpeech and remove toServerSentEventsResponse so the client can
successfully call res.json().
---
Duplicate comments:
In `@testing/e2e/src/lib/providers.ts`:
- Around line 1-2: Reorder the two imports from '@tanstack/ai' so they satisfy
the project's import/order rule: either combine into a single import that
includes createChatOptions and the type AnyTextAdapter (e.g., import {
createChatOptions, type AnyTextAdapter } from '@tanstack/ai') or ensure the
runtime import of createChatOptions appears before the type-only import of
AnyTextAdapter; update the lines referencing createChatOptions and
AnyTextAdapter in providers.ts accordingly.
In `testing/e2e/src/routes/$provider/$feature.tsx`:
- Around line 89-105: Guard extraction of base64 from the FileReader in the
reader.onload callback: validate that reader.result is a string and contains a
comma-separated data URL before calling .split(',')[1], and abort or skip
calling sendMessage if the extracted base64 is undefined or empty; update the
reader.onload handler (the FileReader instance named reader and the sendMessage
call) to perform this defensive check and only send the image part when the
mime/payload is present and valid.
In `@testing/e2e/src/routes/api.tts.ts`:
- Around line 10-15: createTTSAdapter currently only handles 'openai' so
requests with provider === 'gemini' are rejected; update the factories map in
createTTSAdapter to include a 'gemini' entry that returns the correct Gemini TTS
adapter (e.g., call the appropriate factory like createGeminiSpeech/GeminiTTS
with required args such as DUMMY_KEY or the Gemini mock base URL) and ensure any
needed import is added, or alternatively remove 'gemini' from the tts support
matrix in testing/e2e/src/lib/feature-support.ts so the two lists stay
consistent.
---
Nitpick comments:
In `@testing/e2e/src/routes/api.image.ts`:
- Around line 3-5: The imports are using package root adapters instead of the
tree-shakeable adapters; update the three imports (createOpenaiImage,
createGeminiImage, createGrokImage) to import from each provider's /adapters
entrypoint (e.g., import createOpenaiImage from '@tanstack/ai-openai/adapters',
and similarly for '@tanstack/ai-gemini/adapters' and
'@tanstack/ai-grok/adapters') so the image providers use the specialized adapter
entrypoints.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 87e8cf0f-1bdf-47ab-870d-9473d2a796d7
📒 Files selected for processing (10)
- testing/e2e/global-setup.ts
- testing/e2e/src/components/ChatUI.tsx
- testing/e2e/src/lib/providers.ts
- testing/e2e/src/routeTree.gen.ts
- testing/e2e/src/routes/$provider/$feature.tsx
- testing/e2e/src/routes/api.image.ts
- testing/e2e/src/routes/api.summarize.ts
- testing/e2e/src/routes/api.transcription.ts
- testing/e2e/src/routes/api.tts.ts
- testing/e2e/tests/helpers.ts
✅ Files skipped from review due to trivial changes (3)
- testing/e2e/global-setup.ts
- testing/e2e/src/routeTree.gen.ts
- testing/e2e/src/routes/api.transcription.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- testing/e2e/src/routes/api.summarize.ts
- testing/e2e/tests/helpers.ts
```ts
import { useEffect, useRef, useState } from 'react'
import type { UIMessage } from '@tanstack/ai-react'
import ReactMarkdown from 'react-markdown'
import rehypeRaw from 'rehype-raw'
import rehypeSanitize from 'rehype-sanitize'
import remarkGfm from 'remark-gfm'
```
Fix the import-order lint failure.
ESLint is already flagging this import block, so the file stays red until the @tanstack/ai-react type import is moved after the markdown-related imports.
🧰 Tools
🪛 ESLint
[error] 2-2: @tanstack/ai-react type import should occur after import of remark-gfm
(import/order)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/src/components/ChatUI.tsx` around lines 1 - 6, The import order
violates ESLint rules because the type import UIMessage from
'@tanstack/ai-react' is placed before markdown-related imports; fix by
reordering the imports so that ReactMarkdown, rehypeRaw, rehypeSanitize, and
remarkGfm come before the type import of UIMessage (keep React hooks
useEffect/useRef/useState at the top), ensuring the import list is: React hooks,
markdown-related imports (ReactMarkdown, rehypeRaw, rehypeSanitize, remarkGfm),
then the type import UIMessage from '@tanstack/ai-react' to satisfy the linter.
```ts
const handleSubmit = () => {
  if (!input.trim()) return
  onSendMessage(input.trim())
  setInput('')
}
```
Enter bypasses the loading state.
The button is disabled while generating, but the keyboard path still calls handleSubmit() because neither path checks isLoading. That makes the UI inconsistent and can dispatch a second send while a response is already in flight.
Minimal guard:

```diff
 const handleSubmit = () => {
-  if (!input.trim()) return
+  if (isLoading || !input.trim()) return
   onSendMessage(input.trim())
   setInput('')
 }
```

Also applies to: 142-146
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/src/components/ChatUI.tsx` around lines 39 - 43, handleSubmit
currently allows submitting via Enter even when a response is in flight; add a
guard checking the component's isLoading state at the top of handleSubmit (and
the duplicate submit handler around the other occurrence) so it returns early
when isLoading is true, preventing onSendMessage and setInput from being called
while loading; reference the handleSubmit function and the other submit handler
(the one at the later occurrence) and ensure both check isLoading before calling
onSendMessage(input.trim()) and setInput('').
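The guard's behavior can be modeled without React; `ChatState` and this `handleSubmit` are a hypothetical reduction of the component's handler, not its actual code:

```typescript
// Minimal model of the submit path: the guard must reject both empty
// input and submissions while a response is already in flight.
interface ChatState {
  input: string
  isLoading: boolean
  sent: Array<string>
}

function handleSubmit(state: ChatState): ChatState {
  // Combined guard: no-op when loading or when input is blank.
  if (state.isLoading || !state.input.trim()) return state
  return {
    input: '',
    isLoading: true,
    sent: [...state.sent, state.input.trim()],
  }
}
```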
```ts
const LLMOCK_BASE = process.env.LLMOCK_URL || 'http://127.0.0.1:4010'
// OpenAI-compatible SDKs (OpenAI, Groq, Grok, OpenRouter) need /v1 in baseURL
// Anthropic, Gemini, Ollama SDKs include their path prefixes internally
const LLMOCK_OPENAI = `${LLMOCK_BASE}/v1`
const DUMMY_KEY = 'sk-e2e-test-dummy-key'
```
Rename constants to camelCase to match repository naming rules.
LLMOCK_BASE, LLMOCK_OPENAI, and DUMMY_KEY violate the project’s camelCase naming rule for variables.
As per coding guidelines "Use camelCase for function and variable names throughout the codebase".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/src/lib/providers.ts` around lines 12 - 17, Rename the constants
to camelCase: change LLMOCK_BASE -> llmockBase, LLMOCK_OPENAI -> llmockOpenai
(or llmockOpenAI if preferring acronym casing per repo style), and DUMMY_KEY ->
dummyKey; update their declarations in providers.ts and replace all usages
across the codebase (imports/exports, tests, and any reference in e2e helpers)
to use the new names so no references remain to the old UPPER_SNAKE identifiers;
ensure any exported names or re-exports keep consistent casing and run
tests/lint to confirm naming rule compliance.
```ts
const defaultModels: Record<Provider, string> = {
  openai: 'gpt-4o',
  anthropic: 'claude-sonnet-4-5',
  gemini: 'gemini-2.0-flash',
  ollama: 'mistral',
  groq: 'llama-3.3-70b-versatile',
  grok: 'grok-3',
  openrouter: 'openai/gpt-4o',
}

export function createTextAdapter(
  provider: Provider,
  modelOverride?: string,
): { adapter: AnyTextAdapter } {
  const model = modelOverride ?? defaultModels[provider]

  const factories: Record<Provider, () => { adapter: AnyTextAdapter }> = {
    openai: () =>
      createChatOptions({
        adapter: createOpenaiChat(model as 'gpt-4o', DUMMY_KEY, {
          baseURL: LLMOCK_OPENAI,
        }),
      }),
    anthropic: () =>
      createChatOptions({
        adapter: createAnthropicChat(model as 'claude-sonnet-4-5', DUMMY_KEY, {
          baseURL: LLMOCK_BASE,
        }),
      }),
    gemini: () =>
      createChatOptions({
        adapter: createGeminiChat(model as 'gemini-2.0-flash', DUMMY_KEY, {
          baseURL: LLMOCK_BASE,
        }),
      }),
    ollama: () =>
      createChatOptions({
        adapter: createOllamaChat(model as 'mistral', LLMOCK_BASE),
      }),
    groq: () =>
      createChatOptions({
        adapter: createGroqText(model as 'llama-3.3-70b-versatile', DUMMY_KEY, {
          baseURL: LLMOCK_OPENAI,
        }),
      }),
    grok: () =>
      createChatOptions({
        adapter: createGrokText(model as 'grok-3', DUMMY_KEY, {
          baseURL: LLMOCK_OPENAI,
        }),
      }),
    openrouter: () =>
      createChatOptions({
        adapter: createOpenRouterText(model as 'openai/gpt-4o', DUMMY_KEY, {
          baseURL: LLMOCK_OPENAI,
        }),
      }),
  }
```
🧩 Analysis chain

🏁 Script executed:

```shell
# First, let's explore the file structure and locate the providers.ts file
fd -t f "providers.ts" | head -20
```

Repository: TanStack/ai
Length of output: 87

🏁 Script executed:

```shell
# Let's read the actual file to verify the issue
cat -n testing/e2e/src/lib/providers.ts
```

Repository: TanStack/ai
Length of output: 3137

🏁 Script executed:

```shell
# Check for type definitions related to Provider and models
rg -t ts "type.*Provider|interface.*Provider" --max-count 20
```

Repository: TanStack/ai
Length of output: 32754

🏁 Script executed:

```shell
# Look for per-model type patterns in the codebase, especially in model-meta files
fd -t f "model-meta.ts" | head -10
```

Repository: TanStack/ai
Length of output: 438

🏁 Script executed:

```shell
# Check the Provider type definition
cat -n testing/e2e/src/lib/types.ts | head -50
```

Repository: TanStack/ai
Length of output: 1266

🏁 Script executed:

```shell
# Examine a model-meta.ts file to see the per-model typing pattern
head -60 packages/typescript/ai-openai/src/model-meta.ts
```

Repository: TanStack/ai
Length of output: 1353

🏁 Script executed:

```shell
# Check how model metadata exports provider options by model name
rg "ModelProviderOptionsByName" packages/typescript/ai-openai/src/model-meta.ts -A 5 | head -40
```

Repository: TanStack/ai
Length of output: 263

🏁 Script executed:

```shell
# Let's see the full OpenAIChatModelProviderOptionsByName type
rg "export type OpenAIChatModelProviderOptionsByName" packages/typescript/ai-openai/src/model-meta.ts -A 20
```

Repository: TanStack/ai
Length of output: 728

🏁 Script executed:

```shell
# Check if there are examples of type-safe model/provider pair handling elsewhere
rg -t ts "type.*ByProvider|type.*ModelMap" packages/typescript/ai*/src --max-count 10
```

Repository: TanStack/ai
Length of output: 37

🏁 Script executed:

```shell
# Check the createOpenaiChat signature to see what it expects
rg -t ts "export.*createOpenaiChat|function createOpenaiChat" packages/typescript/ai-openai/src -B 2 -A 5
```

Repository: TanStack/ai
Length of output: 646

🏁 Script executed:

```shell
# Check if OPENAI_CHAT_MODELS constant exists and what models it contains
rg "export.*OPENAI_CHAT_MODELS" packages/typescript/ai-openai/src -A 2
```

Repository: TanStack/ai
Length of output: 866

🏁 Script executed:

```shell
# Look for similar multi-provider adapter patterns in the codebase
rg -t ts "Record<Provider" testing/e2e/src/lib -B 2 -A 5
```

Repository: TanStack/ai
Length of output: 1557

🏁 Script executed:

```shell
# Check if there are conditional types or discriminated unions for per-provider models
rg -t ts "type.*Model.*Provider|Provider.*Model" packages/typescript/ai/src --max-count 5
```

Repository: TanStack/ai
Length of output: 1978
Remove modelOverride: string and as '...' type casts; use discriminated per-provider model typing for compile-time safety.
Currently, passing invalid model/provider pairs like createTextAdapter('openai', 'mistral') compiles successfully but fails at runtime. The codebase already defines per-model types in each provider's model-meta.ts (e.g., OpenAIChatModelProviderOptionsByName). Use a discriminated union to map each provider to its valid models and remove the as casts:
```ts
type ProviderModelMap = {
  openai: OpenAIChatModel
  anthropic: AnthropicChatModel
  gemini: GeminiChatModel
  ollama: OllamaChatModel
  groq: GroqChatModel
  grok: GrokChatModel
  openrouter: OpenRouterTextModel
}

export function createTextAdapter<P extends Provider>(
  provider: P,
  modelOverride?: ProviderModelMap[P],
): { adapter: AnyTextAdapter }
```

Per coding guidelines: "Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/src/lib/providers.ts` around lines 18 - 75, The createTextAdapter
currently accepts modelOverride: string and uses unsafe "as '...'" casts,
allowing invalid provider/model combos to compile; change to a discriminated
mapping and generic signature so modelOverride is typed per provider: define a
ProviderModelMap mapping each provider to its model type (e.g., openai:
OpenAIChatModel, anthropic: AnthropicChatModel, etc.) and update
createTextAdapter to be generic: function createTextAdapter<P extends
Provider>(provider: P, modelOverride?: ProviderModelMap[P]) so callers can only
pass valid models; remove all "as '...'" casts in the factories (e.g., in
createOpenaiChat, createAnthropicChat, createGeminiChat, createOllamaChat,
createGroqText, createGrokText, createOpenRouterText) and make the local model
variable typed from ProviderModelMap[P] (fall back to the defaultModels lookup
cast/convert safely to that union) so the factory calls use correctly typed
model values at compile time.
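A self-contained sketch of the proposed `ProviderModelMap` approach, with the model unions trimmed to a few illustrative literals (the real code would import each provider package's model types):

```typescript
// Illustrative per-provider model map; real code would use the model
// unions exported by each provider package (OpenAIChatModel, etc.).
type ProviderModelMap = {
  openai: 'gpt-4o' | 'gpt-4o-mini'
  anthropic: 'claude-sonnet-4-5'
  ollama: 'mistral'
}
type Provider = keyof ProviderModelMap

// Mapped type keeps each default in its provider's model union.
const defaultModels: { [P in Provider]: ProviderModelMap[P] } = {
  openai: 'gpt-4o',
  anthropic: 'claude-sonnet-4-5',
  ollama: 'mistral',
}

// Generic signature: modelOverride is constrained per provider, so the
// `as '...'` casts from the original factories become unnecessary.
function createTextAdapter<P extends Provider>(
  provider: P,
  modelOverride?: ProviderModelMap[P],
): string {
  const model = modelOverride ?? defaultModels[provider]
  return `${provider}:${model}`
}

// createTextAdapter('openai', 'mistral') would now fail to compile,
// which is exactly the mismatch the review wants caught.
```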
```ts
const addToCartClient = addToCartToolDef.client((args) => ({
  success: true,
  cartId: 'CART_' + Date.now(),
  guitarId: args.guitarId,
  quantity: args.quantity,
}))
```
Keep the client tool stub deterministic.
Date.now() makes the approval result change on every run. That fights the goal of replayable llmock fixtures and can bleed unstable tool output into later chat steps.
Possible deterministic alternative:

```diff
+let nextCartId = 1
+
 const addToCartClient = addToCartToolDef.client((args) => ({
   success: true,
-  cartId: 'CART_' + Date.now(),
+  cartId: `CART_${nextCartId++}`,
   guitarId: args.guitarId,
   quantity: args.quantity,
 }))
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
let nextCartId = 1

const addToCartClient = addToCartToolDef.client((args) => ({
  success: true,
  cartId: `CART_${nextCartId++}`,
  guitarId: args.guitarId,
  quantity: args.quantity,
}))
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/src/routes/`$provider/$feature.tsx around lines 20 - 25, The
addToCartClient stub uses Date.now() which makes cartId non-deterministic;
update the addToCartToolDef.client implementation (the addToCartClient factory)
to produce a deterministic cartId (e.g., use a constant prefix plus a fixed
timestamp, a deterministic counter, or a seeded/fixture-based value) instead of
Date.now(); ensure the returned object still provides success, cartId, guitarId
and quantity but with a stable cartId for replayable llmock fixtures.
```ts
try {
  const stream = generateImage({ adapter, prompt, stream: true })
  return toServerSentEventsResponse(stream)
```
🧩 Analysis chain

🏁 Scripts executed against the TanStack/ai repository (outputs elided):

```shell
cat -n testing/e2e/src/routes/api.image.ts | head -70
fd -t f '$provider' testing/e2e/src/routes/
rg "ImageGenFeature" testing/e2e/src -A 10 -B 2
rg "toServerSentEventsResponse" -A 5 -B 2
rg "function ImageGenFeature" testing/e2e/src/routes/$provider/$feature.tsx -A 30
rg "ImageGenFeature" testing/e2e/src/routes/$provider/$feature.tsx -A 50 | grep -A 50 "handleGenerate"
fd "feature" testing/e2e/src/routes/ -type f
find testing/e2e/src/routes -name "*.tsx" -type f | head -20
cat -n 'testing/e2e/src/routes/$provider/$feature.tsx' | grep -A 50 "function ImageGenFeature"
rg "generation:result" packages/typescript/ai -A 5 -B 5
cat -n 'testing/e2e/src/routes/$provider/$feature.tsx' | grep -A 50 "function TTSFeature"
rg "parseSSEResponse" testing/e2e/src -B 2 -A 2
```
Client parsing mismatch: ImageGenFeature expects JSON but receives SSE.
The /api/image route returns a Server-Sent Events stream (SSE with text/event-stream), but ImageGenFeature attempts to parse it with await res.json() on line 165. This will fail to extract the image URL.
The image URL arrives in the streamed generation:result event under value.images[0].url, not at the top level as data.url or data.image. The component needs to parse the SSE stream using an SSE parser instead of JSON deserialization. This same issue affects TTSFeature and TranscriptionFeature as well.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/src/routes/api.image.ts` around lines 47 - 49, The route
currently returns an SSE stream from generateImage via
toServerSentEventsResponse, but ImageGenFeature is calling res.json(); change
the client features to parse SSE instead of JSON: update ImageGenFeature to
consume the text/event-stream using an SSE parser/reader, listen for the
"generation:result" event and extract the image URL from
event.payload.value.images[0].url, and resolve once that URL is received; apply
the same SSE parsing approach to TTSFeature and TranscriptionFeature (consume
the SSE stream, handle their corresponding event types, and extract the expected
payload fields) so clients no longer call res.json() on SSE responses.
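A minimal client-side SSE parse along these lines could replace `res.json()`. This is a sketch: the `generation:result` event name and `value.images[0].url` payload path are taken from the finding above, and the parser assumes complete `data:` frames.

```typescript
// Assumed event shape, per the finding above.
interface SSEEvent {
  type?: string
  value?: { images?: Array<{ url: string }> }
}

// Walks double-newline-delimited SSE frames, joins their `data:` lines,
// and returns the image URL from the first `generation:result` event.
export function extractImageUrl(sseText: string): string | undefined {
  for (const frame of sseText.split('\n\n')) {
    const data = frame
      .split('\n')
      .filter((line) => line.startsWith('data:'))
      .map((line) => line.slice(5).trim())
      .join('\n')
    if (!data || data === '[DONE]') continue
    try {
      const event = JSON.parse(data) as SSEEvent
      if (event.type === 'generation:result') {
        return event.value?.images?.[0]?.url
      }
    } catch {
      // Skip keep-alive or non-JSON frames.
    }
  }
  return undefined
}
```

In the browser this would be fed incrementally from a `ReadableStream` reader over `res.body` rather than a complete string, but the frame handling is the same.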
```ts
try {
  const stream = generateSpeech({ adapter, text, stream: true })
  return toServerSentEventsResponse(stream)
```
🧩 Analysis chain

🏁 Scripts executed against the TanStack/ai repository (outputs elided):

```shell
find . -type f -name "api.tts.ts" | head -20
find . -type f -name "*.tsx" -path "*\$provider/*" -name "*feature*" | head -20
rg "toServerSentEventsResponse" -t ts -t tsx -A 3 -B 3 | head -50
rg "generateSpeech" -t ts -t tsx -A 2 -B 2 | head -100
cat -n ./testing/e2e/src/routes/api.tts.ts
cat -n ./testing/e2e/src/routes/$provider/$feature.tsx
fd "feature.tsx" ./testing/e2e/src/routes/
rg "TTSFeature" -t ts -B 5 -A 10
cat -n ./testing/e2e/src/routes/\$provider/\$feature.tsx | sed -n '500,650p'
rg "handleGenerate" ./testing/e2e/src/routes/\$provider/\$feature.tsx -A 20
```
/api/tts returns Server-Sent Events but client expects JSON.
This route returns toServerSentEventsResponse(generateSpeech(..., { stream: true })), but TTSFeature in testing/e2e/src/routes/$provider/$feature.tsx calls await res.json() on the response. The endpoint sends SSE format (text/event-stream) while the client attempts to parse it as JSON, causing the feature to fail at runtime. Either switch the endpoint to return JSON or update the client to consume the streamed generation:result event from the SSE response.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@testing/e2e/src/routes/api.tts.ts` around lines 37 - 39, The route currently
returns SSE via toServerSentEventsResponse(generateSpeech({ adapter, text,
stream: true })), but the frontend TTSFeature expects JSON (it calls
res.json()); change the endpoint to return a JSON response instead: call
generateSpeech with stream: false (or await the non-streaming result) and return
a JSON Response containing the generation result with Content-Type:
application/json; update the use of generateSpeech and remove
toServerSentEventsResponse so the client can successfully call res.json().
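The non-streaming JSON variant the prompt describes could look roughly like this. It is a sketch: `generateSpeech` is replaced with a stub, and the result shape is an assumption, since only the handler's response handling matters here.

```typescript
// Stub standing in for the real generateSpeech; the actual result
// shape from the AI package may differ.
type SpeechResult = { audio: string; format: string }

async function generateSpeechStub(opts: {
  text: string
  stream: false
}): Promise<SpeechResult> {
  return { audio: `base64(${opts.text})`, format: 'mp3' }
}

// JSON variant of the /api/tts handler: await the full result instead
// of streaming, then return application/json so res.json() works.
export async function handleTTS(text: string): Promise<Response> {
  const result = await generateSpeechStub({ text, stream: false })
  return new Response(JSON.stringify(result), {
    headers: { 'Content-Type': 'application/json' },
  })
}
```

The alternative, keeping the endpoint streaming and teaching the client to parse SSE, is equally valid; either way the content type and the parser must agree.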
…maining issues
- Structured output/agentic features now use stream:false and await directly,
preventing unhandled rejections that crash the Node process
- Fix baseURL: OpenAI-compat SDKs need /v1 suffix, others use bare base
- Fix TextPart.content (was .text), ThinkingPart.content (was .thinking)
- Fix sendMessage: use plain string, not {role, parts} object
- Fix helpers: use pressSequentially + button click for React compat
- Fix global-setup: use fileURLToPath for ESM __dirname
…ning

Provider SDK fixes:
- OpenRouter: use serverURL instead of baseURL
- Gemini: use httpOptions.baseUrl (nested) instead of baseURL
- Apply same fixes to summarize and image routes

Fixture collision fix:
- Prefix all fixture userMessages with [feature] tags to prevent llmock substring matching collisions (e.g. [chat], [toolcall])
- Update all test specs with matching prefixed messages

Feature config fixes:
- Remove outputSchema from structured-output/agentic-structured (test JSON content via streaming instead of non-streaming path)
- Remove stream:false from one-shot-text (useChat requires SSE)

ChatUI rendering fixes:
- Tool approval: check part.type==='tool-call' with state==='approval-requested' (was checking non-existent 'approval-requested' part type)
- ToolCallDisplay: use part.name and part.arguments (not toolName/args)
- ApprovalPrompt: use part.approval.id for response (not part.id)
- Add tool-result part rendering

Skip unsupported features:
- image-gen, tts, transcription (llmock has no endpoints for these)
- multimodal-structured (needs provider-specific tuning)
- reasoning (llmock may not map reasoning to thinking chunks)
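The tool-approval rendering fix in that commit reduces to a small predicate: approval UI keys off a tool-call part in the approval-requested state, not a dedicated part type. The part shapes below are assumptions reconstructed from the commit message, not the package's real types.

```typescript
// Assumed message-part shapes, reconstructed from the commit message.
type Part =
  | { type: 'text'; content: string }
  | {
      type: 'tool-call'
      name: string
      arguments: unknown
      state: 'pending' | 'approval-requested' | 'done'
    }
  | { type: 'tool-result'; result: unknown }

// There is no 'approval-requested' *part type*; approval is a state
// carried on a tool-call part.
export function needsApproval(part: Part): boolean {
  return part.type === 'tool-call' && part.state === 'approval-requested'
}
```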
Summary

- testing/e2e/ — E2E suite using llmock for deterministic LLM mocking
- $provider/$feature routing covers 17 features across 7 providers (openai, anthropic, gemini, ollama, groq, grok, openrouter)
- Recording mode (pnpm record) to capture real LLM responses as fixtures for future replay

Features tested

chat, one-shot-text, reasoning, multi-turn, tool-calling, parallel-tool-calls, tool-approval, structured-output, agentic-structured, multimodal-image, multimodal-structured, summarize, summarize-stream, image-gen, tts, transcription
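The provider/feature matrix expands into routes mechanically. A hypothetical sketch of that expansion (provider list from this summary, feature list abbreviated; the real suite enumerates all 17 features and skips some per provider):

```typescript
// Providers from the PR summary; features abbreviated for brevity.
const providers = [
  'openai', 'anthropic', 'gemini', 'ollama', 'groq', 'grok', 'openrouter',
] as const
const features = ['chat', 'one-shot-text', 'tool-calling'] as const

// Cross product of providers and features as /$provider/$feature routes.
export function routesFor(
  providers: readonly string[],
  features: readonly string[],
): string[] {
  return providers.flatMap((p) => features.map((f) => `/${p}/${f}`))
}

export const routes = routesFor(providers, features)
```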
How it works
- globalSetup starts llmock once for the run
- Server routes build adapters with baseUrl pointed at llmock
- Tests navigate to /$provider/$feature, interact with the UI, and assert on responses
- .github/workflows/e2e.yml runs on every PR
Test plan

- Run pnpm --filter @tanstack/ai-e2e dev and verify routes render correctly
- Run pnpm --filter @tanstack/ai-e2e test:e2e and verify tests pass with llmock
- Record fixtures with OPENAI_API_KEY=sk-... pnpm --filter @tanstack/ai-e2e record

Summary by CodeRabbit

Tests
New Features
Documentation
Chores