TH Archive

A platform for educational institutions to upload, manage, and distribute online lectures to students. Originally developed for a university cloud computing course.

Architecture

Overview

  • Backend: FastAPI app exposing users, courses, videos, reactions and metrics endpoints
  • Object storage: MinIO for video files and thumbnails
  • Database: PostgreSQL (SQLAlchemy models and Alembic migrations)
  • Frontend: Next.js app
  • Kubernetes: Deployments, Services, Ingress, HPA (CPU-based)
  • Observability: Prometheus/Grafana

Kubernetes

(Diagram: Kubernetes workload)

(Diagram: GitLab deploy pipeline)

Note that the project is developed and managed on GitLab; this repository is just a copy.

GitLab Deployment Pipeline

The GitLab pipeline builds the backend and frontend, builds and pushes Docker images to the registry, updates the kustomize overlays with the new image tags, deploys to Kubernetes, and triggers database migrations via the Alembic job. Secrets are supplied through pipeline variables.
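The stages described above might be laid out roughly as follows. This is a sketch, not the repository's actual .gitlab-ci.yml: stage names, image names, and the kustomize image keys are assumptions; only $CI_REGISTRY_IMAGE and $CI_COMMIT_SHORT_SHA are real GitLab-provided variables.

```yaml
# Hypothetical sketch of the pipeline; job names and paths are assumptions.
stages:
  - build
  - publish
  - deploy
  - migrate

build-backend:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE/backend:$CI_COMMIT_SHORT_SHA" backend

publish-backend:
  stage: publish
  script:
    - docker push "$CI_REGISTRY_IMAGE/backend:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  script:
    # Pin the freshly built tag in the prod overlay, then apply it.
    - cd k8s/overlays/prod
    - kustomize edit set image backend="$CI_REGISTRY_IMAGE/backend:$CI_COMMIT_SHORT_SHA"
    - kubectl apply -k .

migrate:
  stage: migrate
  script:
    # Run the Alembic migration job against the cluster.
    - kubectl apply -k k8s/overlays/prod-db-migration
```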

Local Development (without Kubernetes)

This runs PostgreSQL and MinIO via docker-compose, then runs the backend and frontend locally.

Prerequisites

  • Docker
  • Node.js (for frontend development)
  • Python 3.13 and uv (for backend development)

1) Start Postgres and MinIO

cd db
# Provide the required env vars via shell or a .env file:
export POSTGRES_USER=postgres
export POSTGRES_PASSWORD=postgres
export POSTGRES_DB=th_archive
export MINIO_ROOT_USER=minioadmin
export MINIO_ROOT_PASSWORD=minioadmin

docker compose up -d
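The compose file in db/ presumably wires these variables into the two services; a minimal sketch of what it might contain (service names, image tags, and port mappings are assumptions about the repository's actual file):

```yaml
# Hypothetical sketch of db/docker-compose.yml; names and ports are assumptions.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    ports:
      - "5432:5432"

  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    ports:
      - "9000:9000"   # S3 API
      - "9001:9001"   # web console
```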

2) Backend API

Create a .env file in backend with matching values:

cd backend
cat > .env << 'EOF'
DATABASE_URL=postgresql+psycopg://postgres:postgres@localhost:5432/th_archive
MINIO_ENDPOINT=http://localhost:9000
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=minioadmin
MINIO_BUCKET_VIDEOS=videos
MINIO_BUCKET_THUMBNAILS=thumbnails
EOF

Install deps and run migrations, then start the dev server:

uv sync
uv run alembic upgrade head
make dev

3) Frontend

Create an .env.local with the API base:

cd frontend
cat > .env.local << 'EOF'
NEXT_PUBLIC_API_BASE=http://localhost:8000/api
EOF

Then run:

npm install
npm run dev

You should now have:

  • The backend API at http://localhost:8000 (FastAPI serves interactive docs at /docs)
  • The frontend at http://localhost:3000 (Next.js default port)
  • The MinIO console at http://localhost:9001 (MinIO default)

Deploy to Local Kubernetes (k3d)

Prerequisites

  • Docker
  • k3d, kubectl, Helm

Deploy Script

The deploy-dev.sh script fully provisions a local cluster and observability stack, builds and imports the Docker images, applies the kustomize overlays, and prints access URLs.

Before running, create Kubernetes secrets for the local overlay:

cp k8s/base/secrets/secrets.yaml.example k8s/base/secrets/secrets.yaml # and adjust

Run it:

./deploy-dev.sh

What it does

  • Ensures a k3d cluster (default name k3s-default) with LB ports 8080:80 and 8443:443
  • Installs kube-prometheus-stack (Prometheus Operator + Grafana) into monitoring
  • Applies upstream Kubernetes Dashboard and creates an admin ServiceAccount
  • Builds backend/frontend docker images and imports into k3d
  • Applies kustomize overlay k8s/overlays/local
  • Waits for rollouts and prints credentials and URLs
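The first two steps can also be reproduced by hand; a sketch assuming the default cluster name and the upstream Helm chart, with the port mappings taken from the bullets above:

```shell
# Create the cluster with the load-balancer port mappings described above.
k3d cluster create k3s-default \
  --port "8080:80@loadbalancer" \
  --port "8443:443@loadbalancer"

# Install Prometheus Operator + Grafana into the monitoring namespace.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
```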

Access URLs

For Grafana/Prometheus, add this to /etc/hosts:

127.0.0.1 grafana.local prometheus.local dashboard.local

Kubernetes Architecture

Horizontal Pod Autoscaling

Backend HPA configuration is in k8s/base/backend/hpa.yaml:

  • Target: Deployment th-archive-backend
  • Replicas: min 1, max 5
  • Metric: CPU utilization target at 80%
  • Behavior:
    • Scale Up: up to 200% per 60s
    • Scale Down: stabilization window 10s, up to 50% per 60s

Requirements:

  • Resource requests/limits are set in the backend deployment to enable CPU metrics
  • Metrics server must be available (k3s includes it)
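Put together, the settings above correspond to an HPA manifest roughly like the following. This is a reconstruction from the bullets, not the repository's actual k8s/base/backend/hpa.yaml; any field not listed above is an assumption.

```yaml
# Sketch reconstructed from the documented HPA settings.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: th-archive-backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: th-archive-backend
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleUp:
      policies:
        - type: Percent
          value: 200
          periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 10
      policies:
        - type: Percent
          value: 50
          periodSeconds: 60
```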

Kubernetes Ingress Overview

Kustomize Overlays

  • Local: k8s/overlays/local
    • Images: retagged to :local-dev
    • Includes: secrets, config, and migrations
  • Production: k8s/overlays/prod
    • Includes: core manifests only (secrets/config/migrations managed by pipeline)
    • Images: replace with real tags/SHAs before applying
  • Production DB Migration: k8s/overlays/prod-db-migration
    • Purpose: runs the Alembic job using existing cluster config/secrets
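For the local overlay, the :local-dev retagging typically lives in its kustomization.yaml; a sketch of what that might look like (resource paths and image names are assumptions, not the repository's actual file):

```yaml
# Hypothetical k8s/overlays/local/kustomization.yaml; names are assumptions.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
  - ../../base/secrets
images:
  - name: th-archive-backend
    newTag: local-dev
  - name: th-archive-frontend
    newTag: local-dev
```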

Secrets and Configuration

The backend expects the following environment variables, provided in-cluster by the Kubernetes secret backend-secrets:

# Example: k8s/base/secrets/secrets.yaml.example
stringData:
  POSTGRES_DB: th_archive
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  DATABASE_URL: postgresql+psycopg://postgres:postgres@postgres:5432/th_archive
  MINIO_ROOT_USER: minioadmin
  MINIO_ROOT_PASSWORD: minioadmin

ConfigMaps provide non-sensitive configuration:

# k8s/base/config/backend-configmap.yaml
data:
  MINIO_ENDPOINT: http://minio:9000
  MINIO_BUCKET_VIDEOS: videos
  MINIO_BUCKET_THUMBNAILS: thumbnails

# k8s/base/config/frontend-configmap.yaml
data:
  BACKEND_INTERNAL_URL: http://th-archive-backend-service
  NEXT_PUBLIC_API_BASE: /api

Load Testing (Locust)

Update API_HOSTNAME in backend/load_testing/locustfile.py, then run:

cd backend
uv run locust -f load_testing/locustfile.py
