Force-pushed 8f38c76 to 023e014: Rebased onto main and migrated from @internal to @emulators scope. Registered openai in SERVICE_REGISTRY. All 9 openai tests + full suite passing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary
Adds an OpenAI API emulator with chat completions (non-streaming + SSE streaming), deterministic embeddings, model listing, and an interactive playground UI. Responses are seeded via config patterns: the same input always produces the same output.
Why this matters
The Vercel AI SDK (22,900 GitHub stars, 8.8M weekly npm downloads) is Vercel's flagship AI product, and every tutorial starts with `OPENAI_API_KEY`. Block Engineering wrote: "We don't run live LLM tests in CI because it's too expensive, too slow, and too flaky." The mocking space is fragmented (8+ tools, none dominant). OpenAI's SDK supports `OPENAI_BASE_URL` for a drop-in emulator redirect, and openai-python #398 (19 thumbs-up) requests mock support directly.

Changes
New package: `@internal/openai` (~400 lines across 13 files)

Endpoints:

- `POST /v1/chat/completions`: non-streaming JSON response and SSE streaming with proper wire format (`data: {json}\n\n` chunks, `data: [DONE]\n\n` terminator)
- `POST /v1/embeddings`: deterministic vectors via a sha256 hash (same input = same 1536-dim output)
- `GET /v1/models` / `GET /v1/models/:id`: list and retrieve seeded models

Streaming wire format matches the real OpenAI API exactly.
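As a sketch of that SSE framing (the helper name here is hypothetical, not the PR's actual code): each streaming chunk is serialized as a `data: {json}` line followed by a blank line, and the stream ends with a literal `data: [DONE]` frame.

```typescript
// Sketch of OpenAI's streaming wire format (assumed helper, not the PR's code):
// each chunk becomes `data: {json}\n\n`, and the stream is terminated by
// `data: [DONE]\n\n`.
function encodeSseChunk(chunk: object): string {
  return `data: ${JSON.stringify(chunk)}\n\n`;
}

const SSE_DONE = "data: [DONE]\n\n";

// Example: a minimal chat.completion.chunk delta frame.
const frame = encodeSseChunk({
  object: "chat.completion.chunk",
  choices: [{ index: 0, delta: { content: "Hello" } }],
});
```

A client reading the stream splits on blank lines, strips the `data: ` prefix, and stops when it sees the `[DONE]` sentinel rather than attempting to parse it as JSON.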
Playground UI at `/playground`: shows seeded completion patterns with a form to test prompts interactively. Uses only core CSS classes (zero inline styles).
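The deterministic-embedding behavior listed under Changes could look roughly like this; the function name and byte-stretching scheme below are assumptions, not the PR's actual implementation.

```typescript
import { createHash } from "node:crypto";

// Sketch (assumption, not the PR's code): derive a deterministic 1536-dim
// vector from sha256 hashes of the input, so the same input always yields
// the same embedding.
function deterministicEmbedding(input: string, dims = 1536): number[] {
  const out: number[] = [];
  let counter = 0;
  while (out.length < dims) {
    // Re-hash with a counter to stretch 32 digest bytes across all dims.
    const digest = createHash("sha256")
      .update(`${input}:${counter++}`)
      .digest();
    for (const byte of digest) {
      if (out.length === dims) break;
      out.push(byte / 255); // normalize each byte into [0, 1]
    }
  }
  return out;
}
```

The point of the design is testability: embedding-dependent code (similarity search, caching) can be exercised in CI with stable vectors instead of live API calls.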
Seed config maps regex patterns to canned responses.
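A minimal sketch of such a config (the field names and `resolve` helper are assumptions, not the PR's actual schema):

```typescript
// Hypothetical seed-config shape: regex patterns matched against the
// prompt, each mapped to a canned response. First match wins.
interface SeedPattern {
  pattern: RegExp;
  response: string;
}

const seedConfig: SeedPattern[] = [
  { pattern: /weather/i, response: "It is sunny and 22°C." },
  { pattern: /hello/i, response: "Hello! How can I help you today?" },
];

// Resolve a prompt to its canned response, if any pattern matches.
function resolve(prompt: string): string | undefined {
  return seedConfig.find((s) => s.pattern.test(prompt))?.response;
}
```

Because matching is pure pattern lookup, the same prompt always resolves to the same response, which is what makes the emulator deterministic in CI.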
Integration: all 7 points wired in `start.ts` + `list.ts`; SERVICE_DESCRIPTIONS updated.

Testing
9 tests covering non-streaming completion, SSE streaming wire-format verification, deterministic embeddings (same input = same output), tool calls with `finish_reason: "tool_calls"`, the model 404 error format, and playground rendering.

Dogfooding
Ran the emulator, tested non-streaming and streaming completions via curl, and verified SSE chunks arrive word-by-word with the proper `data:` prefix and `[DONE]` terminator. Verified the 5 default models are listed. The playground screenshot above is from the running emulator.

This contribution was developed with AI assistance (Claude Code).