14 changes: 0 additions & 14 deletions packages/sdk/ts/src/supported-models/chat/gemini.ts
@@ -10,13 +10,11 @@ export type GeminiModel =
| 'gemini-2.0-flash-lite-001'
| 'gemini-2.0-flash-lite-preview'
| 'gemini-2.0-flash-lite-preview-02-05'
| 'gemini-2.0-flash-preview-image-generation'
| 'gemini-2.0-flash-thinking-exp'
| 'gemini-2.0-flash-thinking-exp-01-21'
| 'gemini-2.0-flash-thinking-exp-1219'
| 'gemini-2.5-flash'
| 'gemini-2.5-flash-image'
| 'gemini-2.5-flash-image-preview'
| 'gemini-2.5-flash-lite'
| 'gemini-2.5-flash-lite-preview-06-17'
| 'gemini-2.5-flash-lite-preview-09-2025'
@@ -78,12 +76,6 @@ export const GeminiModels: SupportedModel[] = [
output_cost_per_token: 3e-7,
provider: 'Gemini',
},
{
model_id: 'gemini-2.0-flash-preview-image-generation',
input_cost_per_token: 1e-7,
output_cost_per_token: 4e-7,
provider: 'Gemini',
},
{
model_id: 'gemini-2.0-flash-thinking-exp',
input_cost_per_token: 1e-7,
@@ -114,12 +106,6 @@ export const GeminiModels: SupportedModel[] = [
output_cost_per_token: 0.0000025,
provider: 'Gemini',
},
{
model_id: 'gemini-2.5-flash-image-preview',
input_cost_per_token: 3e-7,
output_cost_per_token: 0.0000025,
provider: 'Gemini',
},
{
model_id: 'gemini-2.5-flash-lite',
input_cost_per_token: 1e-7,
1 change: 0 additions & 1 deletion packages/tests/provider-smoke/gemini-generate-text.test.ts
@@ -15,7 +15,6 @@ import {
beforeAll(assertEnv);

export const BLACKLISTED_MODELS = new Set([
'gemini-2.0-flash-preview-image-generation',
'veo-3.0-fast-generate',
'gemini-2.0-flash-exp',
'gemini-2.0-flash-thinking-exp-1219',
40 changes: 40 additions & 0 deletions templates/assistant-ui/.cursor/rules/echo_rules.mdc
@@ -0,0 +1,40 @@
---
description: Guidelines for building with Echo and assistant-ui components
globs: ["**/*.ts", "**/*.tsx"]
alwaysApply: true
---

# Echo + assistant-ui Guidelines

## What Echo does
Echo is a user-pays AI layer. Users fund their own AI calls. `<EchoTokens />` handles credit top-ups.

## assistant-ui integration
This template uses [assistant-ui](https://www.assistant-ui.com/) for the chat interface and Echo for the AI backend.

- The `AssistantRuntimeProvider` wraps the chat interface with a runtime that calls your Echo-backed route.
- Keep the Echo route handler pattern the same as in `next-chat` — `streamText` + `toDataStreamResponse()`.

## Pattern
```tsx
// src/app/layout.tsx (or a dedicated client providers file)
'use client'; // hooks like useChat can only run in a client component

import { AssistantRuntimeProvider } from '@assistant-ui/react';
import { useVercelUseChatRuntime } from '@assistant-ui/react-ai-sdk';
import { useChat } from 'ai/react';
import type { ReactNode } from 'react';

function Providers({ children }: { children: ReactNode }) {
const chat = useChat({ api: '/api/chat' });
const runtime = useVercelUseChatRuntime(chat);
return (
<AssistantRuntimeProvider runtime={runtime}>
{children}
</AssistantRuntimeProvider>
);
}
```

## Rules
- The Echo route handler is the same as `next-chat` — don't duplicate model logic in the runtime config.
- Place `<EchoTokens />` in the layout or sidebar, not inside the thread component.
- Don't override assistant-ui's default streaming behaviour — it's compatible with `toDataStreamResponse()`.
- Handle `402` errors in the `onError` callback of `useChat` to show a credits prompt.
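The 402 rule above can be sketched as follows. `isOutOfCredits` is a hypothetical helper: whether the 402 status survives into the `Error` message that reaches `onError` depends on your transport setup, so adjust the check to match what you actually see.

```ts
// Hypothetical helper: detect an out-of-credits failure from the chat
// transport. The status-code-in-message check is an assumption about
// how your fetch layer surfaces HTTP errors.
function isOutOfCredits(err: unknown): boolean {
  return err instanceof Error && /\b402\b/.test(err.message);
}

// Usage sketch inside the Providers component:
// const chat = useChat({
//   api: '/api/chat',
//   onError: (err) => {
//     if (isOutOfCredits(err)) setShowCreditsPrompt(true); // surface <EchoTokens /> flow
//   },
// });
```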
49 changes: 49 additions & 0 deletions templates/authjs/.cursor/rules/echo_rules.mdc
@@ -0,0 +1,49 @@
---
description: Guidelines for building with Echo and Auth.js (NextAuth) in Next.js
globs: ["**/*.ts", "**/*.tsx"]
alwaysApply: true
---

# Echo + Auth.js (NextAuth) Guidelines

## What Echo does
Echo is a user-pays AI layer. In this template, Auth.js handles user identity and Echo handles AI billing. The user's Echo API key is stored in the session after sign-in.

## Auth flow
1. User signs in via Auth.js (e.g., GitHub OAuth).
2. During the `jwt` callback, attach the user's Echo API key to the token.
3. Route Handlers read the key from the session and pass it to `EchoClient`.

## Session extension
```ts
// auth.ts — store Echo key in JWT
callbacks: {
jwt({ token, account }) {
if (account?.echoApiKey) token.echoApiKey = account.echoApiKey;
return token;
},
session({ session, token }) {
session.echoApiKey = token.echoApiKey as string;
return session;
}
}
```

## Route Handler pattern
```ts
import { auth } from '@/auth';
import { EchoClient } from '@merit-systems/echo-typescript-sdk';

export async function POST(req: Request) {
const session = await auth();
if (!session) return new Response('Unauthorized', { status: 401 });
const client = new EchoClient({ apiKey: session.echoApiKey });
// ...
}
```

## Rules
- Never expose `echoApiKey` to the client — access it only in Route Handlers and Server Actions.
- Protect all AI routes with `auth()` — return `401` before attempting any Echo call.
- Handle `402` (no credits) separately from `401` (not authenticated) in client error handling.
- Extend `Session` and `JWT` TypeScript types in `next-auth.d.ts` to include `echoApiKey`.
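The last rule can be sketched as a minimal `next-auth.d.ts`, assuming the `echoApiKey` fields shown in the callbacks above:

```ts
// next-auth.d.ts — module augmentation so `session.echoApiKey` type-checks
import 'next-auth';
import 'next-auth/jwt';

declare module 'next-auth' {
  interface Session {
    echoApiKey?: string;
  }
}

declare module 'next-auth/jwt' {
  interface JWT {
    echoApiKey?: string;
  }
}
```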
34 changes: 34 additions & 0 deletions templates/echo-cli/.cursor/rules/echo_rules.mdc
@@ -0,0 +1,34 @@
---
description: Guidelines for building with the Echo CLI SDK
globs: ["**/*.ts"]
alwaysApply: true
---

# Echo CLI SDK Guidelines

## What Echo does
Echo is a user-pays AI infrastructure layer. The CLI SDK (`@merit-systems/echo-typescript-sdk`) lets you call AI models from Node.js scripts and CLIs, billed to the authenticated user's Echo account.

## Setup
```ts
import { EchoClient } from '@merit-systems/echo-typescript-sdk';

// Uses stored credentials from `echo login`, or pass apiKey explicitly
const client = new EchoClient();
```

## Common patterns
```ts
// Check balance
const { balance } = await client.getBalance();

// Create a payment/top-up link
const { paymentLink } = await client.createPaymentLink({ amount: 10.0 });
```

## Rules
- Never hardcode API keys — use `echo login` for interactive auth or `ECHO_API_KEY` env var for CI.
- Wrap calls in try/catch and handle `402` (no credits) explicitly: print a message with the top-up URL.
- For long-running CLI tools, check balance before starting a batch job to avoid mid-run failures.
- Log token usage after each call when in verbose mode so users can track spend.
- Keep `EchoClient` instantiation at the top of the script/command, not inside loops.
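The batch-job rule above can be sketched as a pre-flight check. `canAffordBatch` and the per-call cost figure are hypothetical; real costs depend on the models and token counts involved.

```ts
// Hypothetical pre-flight check: refuse to start a batch the balance
// can't cover. Balance and per-call cost are assumed to be in dollars.
function canAffordBatch(
  balance: number,
  estimatedCostPerCall: number,
  calls: number
): boolean {
  return balance >= estimatedCostPerCall * calls;
}

// Usage sketch before a batch run:
// const { balance } = await client.getBalance();
// if (!canAffordBatch(balance, 0.002, jobs.length)) {
//   const { paymentLink } = await client.createPaymentLink({ amount: 10.0 });
//   console.error(`Insufficient credits. Top up: ${paymentLink}`);
//   process.exit(1);
// }
```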
40 changes: 40 additions & 0 deletions templates/next-chat/.cursor/rules/echo_rules.mdc
@@ -0,0 +1,40 @@
---
description: Guidelines for building with Echo in a Next.js AI chat app
globs: ["**/*.ts", "**/*.tsx"]
alwaysApply: true
---

# Echo + Next.js Chat Guidelines

## What Echo does
Echo is a user-pays AI layer. Users buy their own credits — you never pay for their usage. The `<EchoTokens />` component handles top-ups.

## Setup
- Echo providers live in `src/echo.ts`. Always import `openai`, `anthropic`, etc. from `@/echo`, not directly from provider packages.
- `ECHO_SECRET_KEY` goes in `.env.local`. Never expose it client-side.

## Chat route pattern
```ts
// src/app/api/chat/route.ts
import { openai } from '@/echo';
import { convertToModelMessages, streamText } from 'ai';

export const maxDuration = 30;

export async function POST(req: Request) {
const { messages, model } = await req.json();
const result = streamText({
model: openai(model ?? 'gpt-4o-mini'),
messages: convertToModelMessages(messages),
});
return result.toDataStreamResponse();
}
```

## Rules
- Validate `model` and `messages` before calling `streamText` — return 400 for missing params.
- Always use `convertToModelMessages` to normalise `UIMessage[]` before passing to the model.
- Stream with `toDataStreamResponse()` — don't buffer.
- Catch `402` errors and surface a "top up credits" message in the UI.
- The `<EchoTokens />` component must be reachable from the chat page (typically in the layout or nav).
- Let users select the model in the UI — pass it as a request body field, not hardcoded in the route.
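The validation rule above can be sketched as a guard used at the top of the route. `validateChatBody` is a hypothetical helper, not part of the Echo SDK:

```ts
// Hypothetical request-body guard for the chat route. Returns an error
// string (to send back as a 400) or null when the body is usable.
function validateChatBody(body: unknown): string | null {
  if (typeof body !== 'object' || body === null) return 'Invalid JSON body';
  const { messages, model } = body as { messages?: unknown; model?: unknown };
  if (!Array.isArray(messages) || messages.length === 0) {
    return 'messages must be a non-empty array';
  }
  if (model !== undefined && typeof model !== 'string') {
    return 'model must be a string';
  }
  return null;
}

// In the route handler:
// const body = await req.json();
// const err = validateChatBody(body);
// if (err) return new Response(err, { status: 400 });
```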
38 changes: 38 additions & 0 deletions templates/next-image/.cursor/rules/echo_rules.mdc
@@ -0,0 +1,38 @@
---
description: Guidelines for building with Echo in a Next.js image generation app
globs: ["**/*.ts", "**/*.tsx"]
alwaysApply: true
---

# Echo + Next.js Image Generation Guidelines

## What Echo does
Echo is a user-pays AI layer. Users spend their own credits on image generation — you pay nothing per image. `<EchoTokens />` handles top-ups.

## Supported image models (current)
- OpenAI: `dall-e-3`, `dall-e-2`, `gpt-image-1`
- Google: `gemini-2.5-flash-image` (use this, not the `-preview` variant which is deprecated)
- Stability AI: `stable-diffusion-v3-*`

## Route pattern
```ts
// src/app/api/generate-image/route.ts
import { openai } from '@/echo';
import { experimental_generateImage as generateImage } from 'ai';

export async function POST(req: Request) {
const { prompt, model } = await req.json();
const { image } = await generateImage({
model: openai.image(model ?? 'dall-e-3'),
prompt,
});
return Response.json({ base64: image.base64 });
}
```

## Rules
- Never use deprecated preview model IDs like `gemini-2.5-flash-image-preview` or `gemini-2.0-flash-preview-image-generation` — use `gemini-2.5-flash-image`.
- Image routes are non-streaming — use `generateImage`, not `streamText`.
Review comment (P2): Allow `generateText` for Gemini image routes

In this same next-image template, the Google generation and edit handlers intentionally call `generateText` with `google('gemini-2.5-flash-image')` and read image data from `result.files`; the `generateImage` path shown here only matches the OpenAI image provider. Since this rule is `alwaysApply`, Cursor will tell future edits to replace the working Gemini path with `generateImage`, which would break Google image generation/editing for users following the template guidance.


- Validate `prompt` length server-side (DALL-E 3 max is 4000 chars).
- Return base64 from the route; let the client convert to a blob URL if needed.
- Handle `402` (no credits) and `400` (content policy) distinctly in the UI.
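The prompt-length rule can be sketched as a server-side guard. `validatePrompt` is a hypothetical helper; the 4000-character ceiling is the DALL-E 3 limit cited above, and other models may allow more or less.

```ts
// Hypothetical server-side prompt guard. Returns an error string
// (to send back as a 400) or null when the prompt is acceptable.
const MAX_PROMPT_CHARS = 4000; // DALL-E 3 limit; adjust per model

function validatePrompt(prompt: unknown): string | null {
  if (typeof prompt !== 'string' || prompt.trim().length === 0) {
    return 'prompt is required';
  }
  if (prompt.length > MAX_PROMPT_CHARS) {
    return `prompt exceeds ${MAX_PROMPT_CHARS} characters`;
  }
  return null;
}
```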
2 changes: 1 addition & 1 deletion templates/next-image/src/app/api/edit-image/google.ts
@@ -28,7 +28,7 @@ export async function handleGoogleEdit(
];

const result = await generateText({
model: google('gemini-2.5-flash-image-preview'),
model: google('gemini-2.5-flash-image'),
prompt: [
{
role: 'user',
2 changes: 1 addition & 1 deletion templates/next-image/src/app/api/generate-image/google.ts
@@ -12,7 +12,7 @@ import { ERROR_MESSAGES } from '@/lib/constants';
export async function handleGoogleGenerate(prompt: string): Promise<Response> {
try {
const result = await generateText({
model: google('gemini-2.5-flash-image-preview'),
model: google('gemini-2.5-flash-image'),
prompt,
});

37 changes: 37 additions & 0 deletions templates/next-video-template/.cursor/rules/echo_rules.mdc
@@ -0,0 +1,37 @@
---
description: Guidelines for building with Echo in a Next.js video generation app
globs: ["**/*.ts", "**/*.tsx"]
alwaysApply: true
---

# Echo + Next.js Video Generation Guidelines

## What Echo does
Echo is a user-pays AI layer. Users spend credits on video generation — you have no per-video cost. `<EchoTokens />` handles top-ups.

## Key differences from image generation
- Video generation is asynchronous: submit a job, poll for completion.
- Jobs can take 30 seconds to several minutes — never block a request waiting for completion.
- Return a job ID immediately; let the client poll `GET /api/video-status/[jobId]`.

## Route pattern
```ts
// src/app/api/generate-video/route.ts — submit job
export async function POST(req: Request) {
const { prompt, model } = await req.json();
// submit to video model, return job id
return Response.json({ jobId });
}

// src/app/api/video-status/[jobId]/route.ts — poll status
export async function GET(req: Request, { params }: { params: { jobId: string } }) {
// check job status, return { status, videoUrl? }
}
```

## Rules
- Never use long-polling or blocking waits inside a Route Handler — Next.js has a default 30s timeout.
- Use `maxDuration` export only if your hosting plan supports extended timeouts.
- Poll from the client on a reasonable interval (3-5 seconds) with exponential backoff.
- Handle `402` (no credits) at the submit step, before the job is created.
- `<EchoTokens />` should be accessible from the generation UI.
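The polling rule above can be sketched as follows. `pollInterval` is a hypothetical helper implementing the 3-second base with exponential backoff; the 30-second cap is an assumption to keep status checks from going completely quiet.

```ts
// Hypothetical client-side polling interval: start at 3s, double on each
// attempt, and cap at 30s.
function pollInterval(attempt: number, baseMs = 3000, maxMs = 30000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Usage sketch on the client:
// let attempt = 0;
// const tick = async () => {
//   const res = await fetch(`/api/video-status/${jobId}`);
//   const { status, videoUrl } = await res.json();
//   if (status === 'completed') return showVideo(videoUrl);
//   setTimeout(tick, pollInterval(attempt++));
// };
```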
34 changes: 34 additions & 0 deletions templates/next/.cursor/rules/echo_rules.mdc
@@ -0,0 +1,34 @@
---
description: Guidelines for building with Echo in a Next.js app
globs: ["**/*.ts", "**/*.tsx"]
alwaysApply: true
---

# Echo + Next.js Guidelines

## What Echo does
Echo is a user-pays AI infrastructure layer. Your users fund their own API calls — you never pay for their usage. The `<EchoTokens />` component handles the entire payment and token top-up flow.

## Setup
- Echo SDK is initialised in `src/echo.ts`. Import providers from there (`openai`, `anthropic`, etc.) — never import directly from `ai` SDK or provider packages for model calls.
- Set `ECHO_SECRET_KEY` in `.env.local`. Never expose it to the client.

## Making AI calls
```ts
// In a Next.js Route Handler (src/app/api/*/route.ts)
import { openai } from '@/echo';
import { streamText } from 'ai';

export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({ model: openai('gpt-4o'), messages });
return result.toDataStreamResponse();
}
```

## Rules
- AI calls belong in Route Handlers (`src/app/api/`), never in Server Components or client components.
- Always stream responses with `toDataStreamResponse()` — don't await the full response.
- The `<EchoTokens />` component must be rendered somewhere in the layout so users can top up.
- Don't hardcode model names in multiple places — define a `DEFAULT_MODEL` constant.
- Handle the `402 Payment Required` response: it means the user is out of credits. Prompt them to top up.
27 changes: 27 additions & 0 deletions templates/nextjs-api-key-template/.cursor/rules/echo_rules.mdc
@@ -0,0 +1,27 @@
---
description: Guidelines for building with Echo using API key auth in Next.js
globs: ["**/*.ts", "**/*.tsx"]
alwaysApply: true
---

# Echo + Next.js API Key Template Guidelines

## What Echo does
Echo is a user-pays AI layer. In this template, users authenticate with an API key rather than the hosted payment widget. Users pre-fund their account and calls are debited automatically.

## Auth pattern
```ts
// Pass the user's Echo API key in the Authorization header
const client = new EchoClient({ apiKey: userApiKey });
```

## Setup
- `ECHO_SECRET_KEY` is your platform key (server-only).
- Users provide their own Echo API key — store it in the session or pass it per-request.
- Never log or persist user API keys beyond what the session needs.

## Rules
- Validate the user API key format before forwarding to Echo — keys follow the `echo_*` prefix pattern.
- Return `401` for missing/invalid keys, `402` for insufficient credits, `429` for rate limits.
- Don't hardcode a fallback API key — if the user has no key, surface a clear setup prompt.
- Keep all Echo client instantiation in Route Handlers or server actions, never in client components.
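The key-format rule can be sketched as a small guard. `looksLikeEchoKey` is hypothetical: the rule above only guarantees the `echo_` prefix, so the charset check after the prefix is an assumption to adjust against real keys.

```ts
// Hypothetical format pre-check before forwarding a key to Echo.
// Only the `echo_` prefix is guaranteed; the [A-Za-z0-9] body is assumed.
function looksLikeEchoKey(key: unknown): key is string {
  return typeof key === 'string' && /^echo_[A-Za-z0-9]+$/.test(key);
}

// In a Route Handler:
// if (!looksLikeEchoKey(userApiKey)) {
//   return new Response('Missing or malformed Echo API key', { status: 401 });
// }
```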