A platform for educational institutions to upload, manage, and distribute online lectures to students. Originally developed for a university cloud computing course.
- Backend: FastAPI app exposing users, courses, videos, reactions and metrics endpoints
- Object storage: MinIO for video files and thumbnails
- Database: PostgreSQL (SQLAlchemy models and Alembic migrations)
- Frontend: Next.js app
- Kubernetes: Deployments, Services, Ingress, HPA (CPU-based)
- Observability: Prometheus/Grafana
Note that the project is developed and managed on GitLab; this repository is just a copy.
The GitLab pipeline builds the backend and frontend, builds and pushes Docker images to the registry, updates the kustomize overlays with the new image tags, deploys to Kubernetes, and triggers database migrations via the Alembic job. Secrets are provided through pipeline variables.
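A minimal sketch of the stages described above (stage names, job names, and variables are assumptions; the actual `.gitlab-ci.yml` lives in the GitLab repository):

```yaml
# Hypothetical sketch of the described pipeline; names and images are assumptions.
stages: [build, deploy, migrate]

build-images:
  stage: build
  image: docker:latest
  script:
    - docker build -t "$CI_REGISTRY_IMAGE/backend:$CI_COMMIT_SHA" backend
    - docker build -t "$CI_REGISTRY_IMAGE/frontend:$CI_COMMIT_SHA" frontend
    - docker push "$CI_REGISTRY_IMAGE/backend:$CI_COMMIT_SHA"
    - docker push "$CI_REGISTRY_IMAGE/frontend:$CI_COMMIT_SHA"

deploy:
  stage: deploy
  script:
    # Update image tags in the prod overlay, then apply it
    - kubectl apply -k k8s/overlays/prod

migrate:
  stage: migrate
  script:
    - kubectl apply -k k8s/overlays/prod-db-migration
```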
Local development runs PostgreSQL and MinIO via docker-compose, with the backend and frontend running directly on the host.
- Docker
- Node.js (frontend dev)
- Python 3.13 and uv (backend dev)
```sh
cd db
# Provide the required env vars via shell or a .env file:
export POSTGRES_USER=postgres
export POSTGRES_PASSWORD=postgres
export POSTGRES_DB=th_archive
export MINIO_ROOT_USER=minioadmin
export MINIO_ROOT_PASSWORD=minioadmin
docker compose up -d
```

Create a .env file in backend with matching values:
```sh
cd backend
cat > .env << 'EOF'
DATABASE_URL=postgresql+psycopg://postgres:postgres@localhost:5432/th_archive
MINIO_ENDPOINT=http://localhost:9000
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=minioadmin
MINIO_BUCKET_VIDEOS=videos
MINIO_BUCKET_THUMBNAILS=thumbnails
EOF
```

Install deps and run migrations, then start the dev server:
```sh
uv sync
uv run alembic upgrade head
make dev
```

Create an .env.local with the API base:
```sh
cd frontend
cat > .env.local << 'EOF'
NEXT_PUBLIC_API_BASE=http://localhost:8000/api
EOF
```

Then run:
```sh
npm install
npm run dev
```

You should now have:
- Frontend at http://localhost:3000/
- Backend at http://localhost:8000/api
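If the backend fails to start, a common culprit is a malformed DATABASE_URL. A quick stdlib check (a hypothetical helper, not part of the repo) confirms the individual pieces:

```python
# Hypothetical helper: split a SQLAlchemy-style DATABASE_URL into its parts
# using only the standard library, to verify host/port/database are what you expect.
from urllib.parse import urlsplit

def split_db_url(url: str) -> dict:
    parts = urlsplit(url)
    return {
        "driver": parts.scheme,            # e.g. postgresql+psycopg
        "user": parts.username,
        "host": parts.hostname,
        "port": parts.port,
        "database": parts.path.lstrip("/"),
    }

print(split_db_url("postgresql+psycopg://postgres:postgres@localhost:5432/th_archive"))
```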
- Docker
- k3d, kubectl, Helm
The script deploy-dev.sh fully provisions a local cluster and observability stack, builds and imports images, applies the kustomize overlays, and prints access URLs.
Before running, create Kubernetes secrets for the local overlay:
```sh
cp k8s/base/secrets/secrets.yaml.example k8s/base/secrets/secrets.yaml  # and adjust
```

Run it:

```sh
./deploy-dev.sh
```

What it does:
- Ensures a k3d cluster (default name k3s-default) with LB ports 8080:80 and 8443:443
- Installs kube-prometheus-stack (Prometheus Operator + Grafana) into the monitoring namespace
- Applies the upstream Kubernetes Dashboard and creates an admin ServiceAccount
- Builds backend/frontend Docker images and imports them into k3d
- Applies the kustomize overlay k8s/overlays/local
- Waits for rollouts and prints credentials and URLs
Access URLs
- Frontend: http://localhost:8080/
- Backend API: http://localhost:8080/api
- Prometheus: http://prometheus.local:8080/
- Grafana: http://grafana.local:8080/
- K8s Dashboard: http://dashboard.local:8080/
For the Grafana, Prometheus, and Dashboard hostnames, add this to /etc/hosts:

```
127.0.0.1 grafana.local prometheus.local dashboard.local
```
Backend HPA configuration is in k8s/base/backend/hpa.yaml:
- Target: Deployment th-archive-backend
- Replicas: min 1, max 5
- Metric: CPU utilization target at 80%
- Behavior:
  - Scale up: up to 200% per 60s
  - Scale down: stabilization window 10s, up to 50% per 60s
Requirements:
- Resource requests/limits are set in the backend deployment to enable CPU metrics
- Metrics server must be available (k3s includes it)
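Reconstructed from the bullets above, the HPA could look roughly like this (a sketch; the actual file may differ in details):

```yaml
# Sketch of k8s/base/backend/hpa.yaml, reconstructed from the description above.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: th-archive-backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: th-archive-backend
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleUp:
      policies:
        - type: Percent
          value: 200
          periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 10
      policies:
        - type: Percent
          value: 50
          periodSeconds: 60
```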
- Application Ingress: k8s/base/ingress/ingress.yaml
  - / → th-archive-frontend-service:3000
  - /api → th-archive-backend-service:8000
- Observability Ingress: grafana-ingress.yaml, prometheus-ingress.yaml, dashboard-ingress.yaml
  - grafana.local → Grafana Dashboard
  - prometheus.local → Prometheus Dashboard
  - dashboard.local → Kubernetes Dashboard
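The application routes above map to an Ingress roughly like the following (a sketch of k8s/base/ingress/ingress.yaml; the ingress class and any annotations are assumptions):

```yaml
# Sketch based on the routes described above; annotations and class are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: th-archive-ingress
spec:
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: th-archive-backend-service
                port:
                  number: 8000
          - path: /
            pathType: Prefix
            backend:
              service:
                name: th-archive-frontend-service
                port:
                  number: 3000
```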
- Local: k8s/overlays/local
  - Images: retagged to :local-dev
  - Includes: secrets, config, and migrations
- Production: k8s/overlays/prod
  - Includes: core manifests only (secrets/config/migrations managed by pipeline)
  - Images: replace with real tags/SHAs before applying
- Production DB Migration: k8s/overlays/prod-db-migration
  - Purpose: runs the Alembic job using existing cluster config/secrets
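The local overlay's image retagging could look like this (a sketch of a kustomization.yaml; resource paths and image names are assumptions based on the overlay description):

```yaml
# Hypothetical sketch of k8s/overlays/local/kustomization.yaml.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: th-archive-backend
    newTag: local-dev
  - name: th-archive-frontend
    newTag: local-dev
```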
The backend expects the following environment variables (provided in-cluster by the Kubernetes secret backend-secrets):
```yaml
# Example: k8s/base/secrets/secrets.yaml.example
stringData:
  POSTGRES_DB: th_archive
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  DATABASE_URL: postgresql+psycopg://postgres:postgres@postgres:5432/th_archive
  MINIO_ROOT_USER: minioadmin
  MINIO_ROOT_PASSWORD: minioadmin
```

ConfigMaps provide non-sensitive configuration:
```yaml
# k8s/base/config/backend-configmap.yaml
data:
  MINIO_ENDPOINT: http://minio:9000
  MINIO_BUCKET_VIDEOS: videos
  MINIO_BUCKET_THUMBNAILS: thumbnails
```

```yaml
# k8s/base/config/frontend-configmap.yaml
data:
  BACKEND_INTERNAL_URL: http://th-archive-backend-service
  NEXT_PUBLIC_API_BASE: /api
```

Update API_HOSTNAME in backend/load_testing/locustfile.py and run:
```sh
cd backend
uv run locust -f load_testing/locustfile.py
```
