Blueprints

Blueprints is the alpha version of an Agents Launchpad. It currently supports elizaos, with the vision of supporting multiple agent frameworks in the near future.

This monorepo manages the deployment, state, and configuration of these AI Agents.

Project Structure

This project is organized as a monorepo with the following packages:

  • packages/frontend: A Next.js application providing the UI for managing projects and agents.
    • Tech Stack: React, Tailwind CSS, Supabase
  • packages/backend: A Fastify server providing the REST API.
    • Tech Stack: Fastify, Supabase
  • packages/worker: A worker process for handling background tasks and state synchronization.
  • packages/shared: Shared code, type definitions, and Zod validation schemas used across the monorepo.
  • external/: External dependencies and agent frameworks.

Key Features

πŸ–₯️ Integrated Agent Terminal

Blueprints features a professional-grade terminal integrated directly into the agent's chat interface. This allows for:

  • Direct Container Control: Execute shell commands directly inside the agent's isolated environment.
  • Dual Routing System: Seamlessly toggle between Chat Mode (Natural Language) and Terminal Mode (Shell Commands).
  • Advanced Diagnostics: Access logs, file systems, and environment variables without leaving the dashboard.
  • Slash Commands: Power users can use the /terminal <command> prefix to trigger shell execution even while in Chat Mode.
  • Persistent Sessions: Maintain console state across multiple chat sessions and page refreshes.
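
The routing rule above can be sketched as a small pure function. This is illustrative only; the names, shapes, and prefix handling are assumptions, not the actual implementation:

```typescript
// Illustrative sketch of the dual-routing rule described above.
// Names and shapes are hypothetical, not the actual implementation.
type Route = { mode: "chat" | "terminal"; payload: string };

function routeMessage(input: string, terminalMode: boolean): Route {
  const SLASH_PREFIX = "/terminal ";
  // A slash command forces shell execution even while in Chat Mode.
  if (input.startsWith(SLASH_PREFIX)) {
    return { mode: "terminal", payload: input.slice(SLASH_PREFIX.length) };
  }
  // Otherwise the active mode toggle decides where the message goes.
  return { mode: terminalMode ? "terminal" : "chat", payload: input };
}
```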

πŸ”‘ Managed Provider Keys (MPK)

Blueprints allows for hybrid model access through a dual-key system:

  • BYOK (Bring Your Own Key): Full control over your AI providers and billing.
  • Managed Leases: Admins can lease shared API access (e.g., OpenRouter) to users for temporary testing and evaluation without per-user setup.
  • Automatic Orchestration: The system handles lease expiration, key rotation, and agent configuration automatically.
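
At its core, lease orchestration reduces to a timestamp check like the following. The field names here are hypothetical, not the actual schema:

```typescript
// Hypothetical shape of a managed key lease (illustrative only).
interface KeyLease {
  provider: string;   // e.g. "openrouter"
  userId: string;
  expiresAt: Date;    // when the lease should stop being honored
}

// An expired lease should trigger automatic reconfiguration of the agent.
function isLeaseActive(lease: KeyLease, now: Date = new Date()): boolean {
  return lease.expiresAt.getTime() > now.getTime();
}
```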

Security Levels & Isolation Model

Blueprints ships with a tiered container security model designed to balance safety, power, and operational control.

STANDARD (Default / Free Tier)

Workspace-only access.

  • βœ… Only /home/node is writable
  • βœ… Non-root user
  • βœ… Root filesystem is read-only
  • βœ… No Linux capabilities
  • βœ… Cannot escalate privileges

Effectively equivalent to a Google Colab–style sandbox.

Use case: hobby projects, experimentation, safe agents.


PRO

Builder-grade observability with limited privileges.

  • βœ… SYS_ADMIN capability
  • βœ… Still non-root
  • βœ… System filesystem remains read-only
  • βœ… Can inspect kernel / system state
  • ❌ Cannot modify system files

Use case: power users, agent builders, diagnostics.


ADVANCED (Enterprise)

Full container control.

  • βœ… Runs as root
  • βœ… Read/write access to /
  • βœ… SYS_ADMIN
  • βœ… NET_ADMIN

Still isolated from the host (no docker socket, no host filesystem).

Use case: enterprise automation, complex integrations.


ADMIN (Internal / Hidden)

Host-level access used only by platform operators for debugging and recovery.

Not exposed to end users.
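
The three user-facing tiers map naturally onto container security options. The mapping below is an illustrative sketch using dockerode-style field names, not the platform's actual launch code:

```typescript
// Illustrative mapping of security tiers to container options
// (dockerode-style field names; the real launcher may differ).
type Tier = "STANDARD" | "PRO" | "ADVANCED";

interface ContainerSecurity {
  user: string;            // container user
  readonlyRootfs: boolean; // is / mounted read-only?
  capAdd: string[];        // Linux capabilities granted
  securityOpt: string[];   // e.g. privilege-escalation guard
}

const TIER_SECURITY: Record<Tier, ContainerSecurity> = {
  STANDARD: {
    user: "node",                         // non-root
    readonlyRootfs: true,                 // only /home/node is writable
    capAdd: [],                           // no Linux capabilities
    securityOpt: ["no-new-privileges"],   // cannot escalate privileges
  },
  PRO: {
    user: "node",                         // still non-root
    readonlyRootfs: true,                 // system filesystem read-only
    capAdd: ["SYS_ADMIN"],                // kernel/system inspection
    securityOpt: [],
  },
  ADVANCED: {
    user: "root",                         // full container control
    readonlyRootfs: false,                // read/write access to /
    capAdd: ["SYS_ADMIN", "NET_ADMIN"],
    securityOpt: [],
  },
};
```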


The Communication Flow

The path of a message looks like this:

User β†’ Frontend β†’ Database β†’ Worker β†’ OpenClaw Agent (Docker) β†’ LLM (OpenAI/Venice/etc.)

  1. Worker to OpenClaw Agent: This is where chatCompletions matters. We use the OpenAI-compatible API format to talk to the agent container because it is a standard way to send "messages" and get "responses" that include tool-use and planning logic.
  2. OpenClaw Agent to LLM: The agent takes your message, reasons about it, and then talks to the actual LLM (such as GPT-3.5 or a Venice model). It handles the complex parts: memory, tools, and terminal commands.

What is chatCompletions?

When we set chatCompletions: { enabled: true } in openclaw.json, we are telling the agent's internal gateway to turn on its OpenAI-compatible server.

Regardless of which LLM the agent is using (even if that LLM itself isn't OpenAI-compatible), the agent acts as a translator: it presents itself to our Worker as an OpenAI-compatible endpoint. This allows the Worker to use a single, unified code path to talk to any OpenClaw agent.

"OpenAI-Compatible" vs. "Non-Compatible" Models

There are two types of compatibility to consider:

  • Provider compatibility (the LLM): If a model is OpenAI-compatible (like Venice models or GPT), OpenClaw talks to it using the standard openai provider. If a model is not OpenAI-compatible (like Anthropic/Claude), OpenClaw has special providers built in to handle the translation.
  • Agent compatibility (how we talk to the container): Because the chatCompletions endpoint is enabled, we can always talk to the agent using the worker's chat logic, even if the agent uses a non-OpenAI model as its brain.
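
Because the gateway speaks the OpenAI chat-completions wire format, the Worker can build one request shape for every agent. A minimal sketch follows; the endpoint path and the model name "agent" are assumptions, not the project's actual values:

```typescript
// Illustrative only: the request body follows the OpenAI
// chat-completions format; "agent" as a model name is hypothetical.
function buildChatRequest(content: string) {
  return {
    model: "agent",
    messages: [{ role: "user" as const, content }],
  };
}

// Sending it to the agent's gateway (base URL is an assumption):
async function askAgent(baseUrl: string, content: string) {
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest(content)),
  });
  return res.json();
}
```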

Architectural Model

  • STANDARD β†’ jailed
  • PRO β†’ observability
  • ADVANCED β†’ power
  • ADMIN β†’ host (hidden)

This maps cleanly to:

  • hobby
  • builder
  • enterprise
  • ops

Prerequisites

  • Bun (Runtime & Package Manager)
  • Supabase (Database & Auth)

Getting Started

  1. Install dependencies:

    bun install
  2. Environment Setup:

    Ensure you have the necessary environment variables set up for Supabase and other services. Check each package's directory for env.sample files if available.

    • The database schema is located in schema.sql.

Running the Application

You can run each service individually using the following commands:

Frontend

Start the Next.js development server:

bun run dev:frontend
# Runs: bun run --cwd packages/frontend dev

Backend

Start the Fastify backend server:

bun run dev:backend
# Runs: bun run --cwd packages/backend dev

Worker

Start the background worker process:

bun run dev:worker
# Runs: bun run --cwd packages/worker dev

Building

To build all packages in the workspace:

bun run build
# Runs: bun run build --workspaces

Linting

To run linting across all packages:

bun run lint
# Runs: bun run lint --workspaces

System Maintenance & Rebuilding

After making changes to the source code or configurations, follow these steps to ensure all components are properly rebuilt and updated.

1. Rebuild the Worker

If you modify worker-specific logic (e.g., in openclaw.ts or eliza.ts), you must rebuild the worker container:

docker compose build worker
docker compose up -d worker

2. Rebuild Agent Images

If you modify the base agent Dockerfiles (e.g., scripts/eliza.dockerfile), trigger a fresh build of the agent image:

./scripts/2-setup-eliza.sh

3. Restart Agents

When the worker is updated, existing agents must be stopped and started via the UI or API to pick up the new logic. This ensures:

  • The correct directory structures (like .openclaw/) are created.
  • Correct volume permissions (chown) are applied.
  • New environment variables and path mappings are active.

4. Migration & Permissions (First-time setup)

Ensure your packages/worker/.env contains the required path variables:

AGENTS_DATA_HOST_PATH=/var/lib/blueprints/agents-data
AGENTS_DATA_CONTAINER_PATH=/mnt/agents-data

And verify the host directory exists with correct permissions:

sudo mkdir -p /var/lib/blueprints/agents-data
sudo chown -R 1000:1000 /var/lib/blueprints/agents-data

Once these steps are completed, your system is fully rebuilt and synchronized.
