Local-first MCP image generation server with multi-model support and an embedded interactive studio.
Generate and edit AI images through OpenAI GPT Image and Google Gemini models via the Model Context Protocol (MCP). Includes a built-in studio UI for visual iteration directly inside the chat surface.
Works with any MCP-compatible client: Claude Desktop, Cursor, Windsurf, AnythingLLM, and other AI platforms.
- Multi-model support -- OpenAI GPT Image 1.5, GPT Image 1 Mini, Google Gemini 3.1 Flash, Gemini 3 Pro, Gemini 2.5 Flash
- Image generation -- create images from detailed text prompts with configurable aspect ratios and quality profiles
- Image editing -- modify existing images with natural language instructions and optional reference images
- Embedded studio -- interactive MCP App for browsing assets, switching models, adjusting settings, and iterating visually
- Local asset storage -- all generated and uploaded images are persisted locally with full history
- Enterprise-ready -- configurable model access, concurrency limits, and provider API key management
These tools are exposed to the AI model:
| Tool | Description |
|---|---|
| `imagegen_generate` | Generate a new image from a detailed text prompt |
| `imagegen_edit` | Edit an existing image using instructions and optional reference images |
| `imagegen_list_models` | List enabled image models and their capabilities |

These tools are used internally by the embedded studio UI and are not visible to the AI model:
`imagegen_list_assets`, `imagegen_read_asset_bytes`, `imagegen_create_upload`,
`imagegen_append_upload_chunk`, `imagegen_finalize_upload`
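For orientation, an MCP client invokes these tools with a JSON-RPC `tools/call` request. The sketch below is illustrative only: the `prompt` field follows the tool description above, while the `model` and `aspect_ratio` argument names are assumptions, not confirmed parameter names.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "imagegen_generate",
    "arguments": {
      "prompt": "A watercolor lighthouse at dusk, soft warm light",
      "model": "gpt-image-1.5",
      "aspect_ratio": "16:9"
    }
  }
}
```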
Prerequisites: Node.js >= 24, pnpm

```sh
git clone https://github.com/CCimen/imagegen.git
cd imagegen
pnpm install
cp .env.example .env
```

Set at least one provider API key in `.env`:

```sh
OPENAI_API_KEY=sk-...
# and/or
GOOGLE_API_KEY=AI...
```

Start the server:

```sh
pnpm dev
```

The Streamable HTTP endpoint is available at:

```
http://127.0.0.1:3001/mcp
```
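With the server running, a quick sanity check is a raw `initialize` handshake against the endpoint. This is a sketch, not part of the project's tooling: the headers and JSON-RPC shape follow the MCP Streamable HTTP transport, and the protocol version string is an assumption that may differ from what the server negotiates.

```shell
# JSON-RPC initialize request per the MCP Streamable HTTP transport
# (protocolVersion value is illustrative).
PAYLOAD='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl-check","version":"0.0.0"}}}'

# The Accept header must allow both JSON and SSE responses.
curl -s http://127.0.0.1:3001/mcp \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -d "$PAYLOAD" || echo "server not reachable at 127.0.0.1:3001"
```

A successful handshake returns a JSON-RPC result describing the server's capabilities; a connection error means the server is not running or is bound to a different host/port.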
| Variable | Default | Description |
|---|---|---|
| `OPENAI_API_KEY` | -- | OpenAI API key (required for GPT Image models) |
| `GOOGLE_API_KEY` | -- | Google AI API key (required for Gemini image models) |
| `MCP_IMAGEGEN_DATA_DIR` | `~/.mcp-imagegen` | Local directory for generated assets |
| `IMAGEGEN_ENABLED_MODELS` | `gpt-image-1.5,gemini-3.1-flash-image-preview` | Comma-separated list of enabled model IDs |
| `IMAGEGEN_DEFAULT_MODEL` | `gpt-image-1.5` | Model used when none is specified |
| `IMAGEGEN_CONCURRENCY_LIMIT` | `2` | Max concurrent image generation requests |
| `IMAGEGEN_HTTP_HOST` | `127.0.0.1` | Server bind address |
| `IMAGEGEN_HTTP_PORT` | `3001` | Server port |
Add this to your MCP client configuration (e.g. claude_desktop_config.json):
```json
{
  "mcpServers": {
    "imagegen": {
      "url": "http://127.0.0.1:3001/mcp"
    }
  }
}
```

Docker: If connecting from inside a container, use `http://host.docker.internal:3001/mcp` instead of `127.0.0.1`.
| Model | Provider | Highlights |
|---|---|---|
| `gpt-image-1.5` | OpenAI | State-of-the-art image generation and editing |
| `gpt-image-1-mini` | OpenAI | Cost-efficient variant with editing support |
| `gemini-3.1-flash-image-preview` | Google | Fast generation with thinking controls |
| `gemini-3-pro-image-preview` | Google | High-fidelity text rendering |
| `gemini-2.5-flash-image` | Google | Low-latency generation |
Models are enabled via `IMAGEGEN_ENABLED_MODELS` in `.env`. The server fails
fast on startup if no enabled model has a valid API key configured.
```sh
pnpm dev       # build studio + start server in watch mode
pnpm test      # run all tests
pnpm test:e2e  # run end-to-end server tests
pnpm build     # production build
pnpm start     # start production server
pnpm check     # type-check all packages
```

If you run a modified version of this server for users over a network, you must make the corresponding source available to those users, as required by the AGPL.