The backend API for the CAPY club management system. Built with Go (Chi), PostgreSQL, and SQLC.
- Authentication: JWT-based auth and OAuth2 (Google/Microsoft) support.
- Role-Based Access: Granular permissions for Student, Alumni, Faculty, and External roles.
- Organization Management: Create and join organizations (clubs).
- Event System: Event scheduling, registration, and attendance tracking.
- Bot Integration: API tokens for bot automation.
- Language: Go 1.25+
- Router: Chi
- Database: PostgreSQL
- ORM-ish: sqlc (Type-safe SQL generation)
- Migration: golang-migrate
- Testing:
  - Testify (assertions)
  - Testcontainers (integration tests)
  - Mockery (mock generation)
- Documentation: Swagger/OpenAPI (via `swag`)
Interactive API documentation is available via Swagger UI.
- Web UI: http://localhost:8080/swagger/index.html
- Raw Spec: `docs/swagger/swagger.json`
- Regenerate Docs: `make docs`
- Go 1.25+
- Docker & Docker Compose (for local DB)
- Make
```shell
git clone https://github.com/CAPY-RPI/api.git
cd capy-api
cp .env.example .env
# Edit .env with your local credentials if needed
```

Start the PostgreSQL database:

```shell
make docker
```

Apply migrations and generate the sqlc code:

```shell
make migrate-up
make generate
```

`make migrate-up` runs all pending migrations in Docker on the Compose network by default. `make migrate-down` rolls back exactly one migration. `make migrate-version` shows the current version.
Create a new migration:
```shell
make migrate-create name=add_event_capacity
```

What goes in migration files:

- `*.up.sql`: the incremental schema change you want to apply (DDL like `CREATE TABLE`, `ALTER TABLE`, `CREATE INDEX`)
- `*.down.sql`: the reverse of that same change (rollback), not the full previous schema
Example (`add_event_capacity`):

```sql
-- up.sql
ALTER TABLE events
    ADD COLUMN capacity INTEGER;

ALTER TABLE events
    ADD CONSTRAINT events_capacity_nonnegative
    CHECK (capacity IS NULL OR capacity >= 0);
```

```sql
-- down.sql
ALTER TABLE events
    DROP CONSTRAINT IF EXISTS events_capacity_nonnegative;

ALTER TABLE events
    DROP COLUMN IF EXISTS capacity;
```

Rule of thumb:

- `up` = apply one change
- `down` = undo that same change only
```shell
make run
# API will be available at http://localhost:8080
# Health check: http://localhost:8080/health
```

The API applies pending migrations automatically on startup before serving requests.
Fast tests running in isolation with mocks:

```shell
make test
```

Full-stack tests using ephemeral Docker containers (Postgres); requires Docker to be running:

```shell
make test-integration
```

Run both suites:

```shell
make test-all
```

Helper scripts are located in the `scripts/` directory.
Seeds or updates a user in the database and prints a JWT for that user. `--email` is required; the other fields have defaults.

```shell
go run scripts/create_user/main.go --email dev@example.com --role dev
```

Creates a bot token in the database and prints the full `token_id.secret` value once, for use with the `X-Bot-Token` header. `--name` and `--created-by` are required; `--hours 0` means the token does not expire.

```shell
go run scripts/create_bot_token/main.go --name my-bot --created-by 00000000-0000-0000-0000-000000000001 --hours 24
```

If you are running the API and Postgres with Docker Compose, the API container does not include the Go toolchain. To run local Go scripts that need database access, start a one-off Go container on the same Compose network and mount the repository into it.
Current local network: `api_default`

Example:

```shell
docker run --rm \
  --network api_default \
  -v "$PWD":/app \
  -w /app \
  --env-file .env \
  golang:1.25 \
  go run scripts/create_user/main.go --email dev@example.com --role dev
```

Bot token example:
```shell
docker run --rm \
  --network api_default \
  -v "$PWD":/app \
  -w /app \
  --env-file .env \
  golang:1.25 \
  go run scripts/create_bot_token/main.go --name my-bot --created-by 00000000-0000-0000-0000-000000000001 --hours 24
```

If your Compose project name is different, the network name will usually be `<project>_default`. You can check it with:

```shell
docker network ls
```
```
├── cmd/server/          # Main entry point
├── internal/
│   ├── config/          # Configuration loading
│   ├── database/        # sqlc generated code & queries
│   ├── dto/             # Data Transfer Objects (Request/Response)
│   ├── handler/         # HTTP Handlers
│   ├── middleware/      # Auth, CORS, Logger middleware
│   ├── router/          # Route definitions
│   └── testutils/       # Testing helpers
├── migrations/          # SQL migration files
├── tests/integration/   # End-to-end integration tests
└── schema.sql           # Current database schema
```
This project uses GitHub Actions for continuous integration.
- Workflow: `.github/workflows/ci.yml`
- Checks: Linting (`golangci-lint`), Unit Tests, Integration Tests
This project automatically builds and publishes a Docker image to GitHub Container Registry (GHCR) on every push to main.
If the repository or package is private, you must authenticate before pulling:
- Generate a token: go to GitHub Developer Settings and create a classic PAT with the `read:packages` scope.
  - Note: Fine-grained tokens do not yet fully support GitHub Container Registry.
- Log in:

  ```shell
  export CR_PAT=YOUR_TOKEN
  echo $CR_PAT | docker login ghcr.io -u YOUR_USERNAME --password-stdin
  ```

  Windows (PowerShell):

  ```powershell
  $env:CR_PAT = "YOUR_TOKEN"
  echo $env:CR_PAT | docker login ghcr.io -u YOUR_USERNAME --password-stdin
  ```
Note for Organizations: PATs are attached to your user account, not the organization. If your organization uses SAML SSO, you must authorize the token for the organization by clicking "Configure SSO" next to the token in your GitHub settings.
```shell
docker pull ghcr.io/capy-rpi/api:main
```

You can run the API using Docker without installing Go on your machine:

```shell
docker run -d \
  --name capy-api \
  -p 8080:8080 \
  --env-file .env \
  ghcr.io/capy-rpi/api:main
```

Windows (PowerShell):

```powershell
docker run -d `
  --name capy-api `
  -p 8080:8080 `
  --env-file .env `
  ghcr.io/capy-rpi/api:main
```

Note: `host.docker.internal` allows the container to access your host machine's localhost (e.g., if running Postgres locally). On Linux, you may need `--add-host=host.docker.internal:host-gateway`.
To run the full stack (API + Postgres + Cloudflare Tunnel), update your .env file with the required credentials and use the following docker-compose.yml.
**Important:** Ensure your `.env` file contains all necessary OAuth credentials (`GOOGLE_CLIENT_ID`, `GOOGLE_CLIENT_SECRET`, `GOOGLE_REDIRECT_URL`, `MICROSOFT_CLIENT_ID`, `MICROSOFT_CLIENT_SECRET`, `MICROSOFT_REDIRECT_URL`, etc.), the `TUNNEL_TOKEN`, and `MIGRATIONS_PATH` (defaults to `migrations`). The `api` service will pull these automatically via the `env_file` directive.
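For reference, a minimal `.env` sketch with placeholder values. Only the variable names stated in the note above are listed; check `.env.example` for the authoritative set:

```
POSTGRES_USER=capy
POSTGRES_PASSWORD=change-me
POSTGRES_DB=capy
MIGRATIONS_PATH=migrations
GOOGLE_CLIENT_ID=...
GOOGLE_CLIENT_SECRET=...
GOOGLE_REDIRECT_URL=...
MICROSOFT_CLIENT_ID=...
MICROSOFT_CLIENT_SECRET=...
MICROSOFT_REDIRECT_URL=...
TUNNEL_TOKEN=...
```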
```yaml
services:
  db:
    image: postgres:16-alpine
    env_file:
      - .env
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}" ]
      interval: 5s
      timeout: 5s
      retries: 5

  api:
    image: ghcr.io/capy-rpi/api:main
    ports:
      - "8080:8080"
    env_file:
      - .env
    depends_on:
      db:
        condition: service_healthy

  tunnel:
    image: cloudflare/cloudflared:latest
    restart: unless-stopped
    command: tunnel run
    env_file:
      - .env
    depends_on:
      - api

volumes:
  pgdata:
```