RLX is a hosted RL training platform built on Verifiers and Prime RL.
It lets you import GitHub repositories, choose an `rlx.toml` config, provision Prime Intellect GPU pods, launch Prime RL jobs, and inspect the resulting pipeline through logs, Weights & Biases, job output, and run metadata in one place.
- Imports GitHub repositories as RL projects.
- Lets you select a branch and a named `rlx.toml` config entry for a run.
- Provisions GPU compute on Prime Intellect.
- Boots and prepares the runtime through ordered Celery jobs executed over SSH.
- Surfaces per-run logs for orchestrator, trainer, and inference processes.
- Tracks job execution, command output, retries, and run termination from the UI.
- Stores user SSH keys and integration credentials needed for pod access and monitoring.
At a high level, RLX sits between the browser UI, the application backend, and the systems needed to launch and monitor RL workloads.
```mermaid
flowchart LR
  Web["Next.js web app"] --> Actions["Server actions"]
  Actions --> API["FastAPI API"]
  API --> Postgres["PostgreSQL"]
  API --> Redis["Redis"]
  Redis --> Worker["Celery worker"]
  Redis --> Beat["Celery beat"]
  API --> GitHub["GitHub API"]
  API --> Prime["Prime Intellect API"]
  API --> Secrets["AWS Secrets Manager"]
  Worker --> SSH["SSH into GPU pod"]
  SSH --> PrimeRL["Prime RL process"]
```
Typical run flow:
- Sign in and connect GitHub.
- Import a repository as a project.
- Pick a branch, config name, and GPU shape.
- Create a run and provision a Prime Intellect pod.
- Wait for the pod to become active.
- Execute the queued setup jobs over SSH.
- Launch Prime RL with the resolved config path.
- Monitor logs, jobs, and run state from the run details page.
- Frontend: Next.js 16, React 19, TypeScript, Tailwind CSS 4, shadcn/ui, Clerk
- Backend: FastAPI, SQLAlchemy, Alembic, asyncpg, Pydantic
- Async execution: Redis, Celery worker, Celery beat
- Infra integrations: Prime Intellect, GitHub, AWS Secrets Manager
```
rlx/
├── apps/
│   ├── api/              # FastAPI backend
│   └── web/              # Next.js frontend
├── docs/                 # Architecture and implementation docs
├── scripts/              # Helper scripts
├── dev.sh                # tmux-based local dev launcher
└── docker-compose.yml
```
- Node.js 20+ and pnpm
- Python 3.13+ and uv
- Docker
- Redis
- A PostgreSQL database reachable through `DATABASE_URL`
- tmux if you want to use `./dev.sh`
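Before starting, it can help to confirm these tools are on your `PATH`. A convenience sketch, not a script shipped with the repo:

```shell
# Report which prerequisite CLIs are installed (extend the list as needed).
for cmd in node pnpm uv docker redis-cli tmux; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "ok: $cmd"
  else
    echo "missing: $cmd"
  fi
done
```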
Create these local env files before starting the stack:

`apps/web/.env.local`:

```
NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=pk_...
CLERK_SECRET_KEY=sk_...
API_BASE_URL=http://localhost:8000
```

`apps/api/.env`:

```
DATABASE_URL=postgresql+asyncpg://...
CLERK_SECRET_KEY=sk_...
GITHUB_CLIENT_ID=...
GITHUB_CLIENT_SECRET=...
```

If you already have your env files and external database ready, the fastest local path is:

```shell
./dev.sh
```

This opens a tmux session with:
- Next.js web app
- FastAPI API
- Redis
- Celery worker
- Celery beat
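Before launching, you can verify that both env files exist. A convenience check, not part of the repo:

```shell
# Warn about missing env files before starting the stack (illustrative helper).
check_env_files() {
  for f in "$@"; do
    if [ -f "$f" ]; then
      echo "found: $f"
    else
      echo "missing: $f"
    fi
  done
}

check_env_files apps/web/.env.local apps/api/.env
```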
You can also run the app services with Docker:
```shell
docker compose up --build
```

This starts:

- `web`
- `api`
- `redis`
- `worker`
- `scheduler`

You still need a valid `DATABASE_URL` configured for the API container.
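One way to supply it is through the shell environment, assuming `docker-compose.yml` forwards `DATABASE_URL` to the `api` service (the URL below is a placeholder, not a real endpoint):

```shell
# Placeholder credentials; assumes docker-compose.yml reads ${DATABASE_URL}.
export DATABASE_URL='postgresql+asyncpg://user:pass@db.example.com:5432/rlx'
# Then start the stack:
# docker compose up --build
```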
Frontend:

```shell
cd apps/web
pnpm install
pnpm dev
```

Backend:
```shell
cd apps/api
uv sync
uv run uvicorn rlx_api.main:app --reload --port 8000
```

Redis:
```shell
docker run -d --name redis -p 6379:6379 redis:7-alpine
```

Worker:
```shell
cd apps/api
uv run celery -A rlx_api.celery_app:celery_app worker --loglevel=info -Q pod_ops,repo_ops
```

Scheduler:
```shell
cd apps/api
uv run celery -A rlx_api.celery_app:celery_app beat --loglevel=info
```

Frontend:
```shell
cd apps/web
pnpm build
pnpm lint
```

Backend:
```shell
cd apps/api
uv run alembic upgrade head
uv run alembic current
```
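When the schema changes, the standard Alembic flow for generating and applying a new migration is (the message string is a placeholder; this requires a reachable database configured via `DATABASE_URL`):

```shell
cd apps/api
# Generate a migration from model changes, then apply it.
uv run alembic revision --autogenerate -m "describe the schema change"
uv run alembic upgrade head
```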





