feat: add openai service emulator#8

Open
mvanhorn wants to merge 1 commit into vercel-labs:main from mvanhorn:feat/openai-emulator

Conversation

@mvanhorn
Contributor

Summary

Adds an OpenAI API emulator with chat completions (non-streaming and SSE streaming), deterministic embeddings, model listing, and an interactive playground UI. Responses are seeded via config patterns; the same input always produces the same output.

Why this matters

The Vercel AI SDK (22,900 stars, 8.8M weekly npm downloads) is Vercel's flagship AI product. Every tutorial starts with OPENAI_API_KEY. Block Engineering wrote: "We don't run live LLM tests in CI because it's too expensive, too slow, and too flaky."

The mocking space is fragmented (8+ tools, none dominant). OpenAI's SDK supports OPENAI_BASE_URL for drop-in emulator redirect, and openai-python #398 (19 thumbs-up) requests mock support directly.

Changes

New package: @internal/openai (~400 lines across 13 files)

Endpoints:

  • POST /v1/chat/completions - non-streaming JSON responses and SSE streaming with the proper wire format (data: {json}\n\n chunks, data: [DONE]\n\n terminator)
  • POST /v1/embeddings - deterministic vectors via sha256 hash (same input = same 1536-dim output)
  • GET /v1/models / GET /v1/models/:id - list and retrieve seeded models
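The deterministic-embeddings endpoint could be implemented roughly as in this sketch, assuming counter-mode sha256 expansion and a byte-to-float mapping (function and variable names here are hypothetical, not the package's actual code):

```typescript
import { createHash } from "node:crypto";

// Sketch: derive a deterministic 1536-dim vector from input text.
// Same input always hashes to the same bytes, so the vector is stable.
function deterministicEmbedding(input: string, dims = 1536): number[] {
  const vector: number[] = [];
  let counter = 0;
  while (vector.length < dims) {
    // Re-hash the input with a counter to get a fresh 32-byte block per round.
    const block = createHash("sha256")
      .update(`${input}:${counter++}`)
      .digest();
    for (const byte of block) {
      if (vector.length === dims) break;
      // Map each byte from [0, 255] into [-1, 1].
      vector.push((byte / 255) * 2 - 1);
    }
  }
  return vector;
}
```

48 rounds of 32 bytes yield exactly 1536 components; any fixed expansion scheme works as long as it is pure in the input.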

Streaming wire format matches the real OpenAI API exactly:

data: {"choices":[{"delta":{"role":"assistant"}}]}\n\n
data: {"choices":[{"delta":{"content":"Hello!"}}]}\n\n
data: {"choices":[{"delta":{"content":" World"}}]}\n\n
data: {"choices":[{"delta":{},"finish_reason":"stop"}]}\n\n
data: [DONE]\n\n
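The chunk framing above could be produced by a small generator along these lines (a minimal sketch with illustrative names, not the PR's actual implementation):

```typescript
// Each SSE event is a single "data:" line followed by a blank line.
function sseEvent(payload: object): string {
  return `data: ${JSON.stringify(payload)}\n\n`;
}

// Emit the role chunk, one content delta per word, the finish chunk,
// and the literal [DONE] sentinel (which is not JSON).
function* streamCompletion(words: string[]): Generator<string> {
  yield sseEvent({ choices: [{ delta: { role: "assistant" } }] });
  for (const content of words) {
    yield sseEvent({ choices: [{ delta: { content } }] });
  }
  yield sseEvent({ choices: [{ delta: {}, finish_reason: "stop" }] });
  yield "data: [DONE]\n\n";
}
```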

Playground UI at /playground:

[Screenshot: playground UI]

Shows seeded completion patterns with a form to test prompts interactively. Uses only core CSS classes (zero inline styles).

Seed config maps regex patterns to canned responses:

openai:
  completions:
    - pattern: "hello|hi|hey"
      content: "Hello! I'm the emulated assistant."
    - pattern: ".*"
      content: "This is a mock response from the emulated OpenAI API."
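A minimal sketch of how such seed lookup might work, assuming first-match-wins in config order and case-insensitive matching (both assumptions; the identifiers are hypothetical):

```typescript
interface CompletionSeed {
  pattern: string;
  content: string;
}

const seeds: CompletionSeed[] = [
  { pattern: "hello|hi|hey", content: "Hello! I'm the emulated assistant." },
  { pattern: ".*", content: "This is a mock response from the emulated OpenAI API." },
];

// Try each pattern in config order; the trailing ".*" acts as a catch-all.
function matchSeed(prompt: string, config: CompletionSeed[]): string {
  for (const seed of config) {
    // Case-insensitive, unanchored matching is an assumption here.
    if (new RegExp(seed.pattern, "i").test(prompt)) return seed.content;
  }
  throw new Error("no seed matched; a .* catch-all prevents this");
}
```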

Integration: all 7 integration points wired in start.ts, and SERVICE_DESCRIPTIONS in list.ts updated.

Testing

9 tests covering non-streaming completion, SSE streaming wire format verification, deterministic embeddings (same input = same output), tool calls with finish_reason: "tool_calls", model 404 error format, and playground rendering.
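For illustration, the SSE wire-format check could parse a raw response body along these lines (parseSse is a hypothetical helper, not a function from the PR):

```typescript
// Split a raw SSE body into event payloads: events are separated by blank
// lines, and each payload sits after a "data: " prefix.
function parseSse(body: string): string[] {
  return body
    .split("\n\n")
    .filter((event) => event.length > 0)
    .map((event) => event.replace(/^data: /, ""));
}
```

A test can then assert that every payload except the last is valid JSON and that the final payload is the literal [DONE] sentinel.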

Dogfooding

Ran the emulator, tested non-streaming + streaming completions via curl, verified SSE chunks arrive word-by-word with proper data: prefix and [DONE] terminator. Verified 5 default models listed. Playground screenshot above is from the running emulator.

This contribution was developed with AI assistance (Claude Code).

@vercel

vercel bot commented Mar 23, 2026

@mvanhorn is attempting to deploy a commit to the Vercel Labs Team on Vercel.

A member of the Team first needs to authorize it.

Rebased onto main and migrated from @internal to @emulators scope.
Registered openai in SERVICE_REGISTRY. All 9 openai tests + full
suite passing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>