A Django/DRF backend that generates personalized study plans from learner profiles, goals, schedules, and a small resource library using retrieval, structured LLM generation, deterministic validation, async jobs, and explainable outputs.
This is a compact technical demo showing how to structure AI-backed planning systems: domain data modeling, retrieval over a small knowledge base, async generation jobs, deterministic validation, confidence scoring, and explainable outputs. It is intentionally not a fitness product or Superset clone. The goal is to demonstrate stack fluency and production-oriented AI architecture in a domain-neutral setting.
This demo is designed to show:
- Django/DRF stack fluency — clean models, serializers, viewsets, Celery integration
- AI systems judgment — structured inputs, retrieval, constraints, validation, not just prompting
- Production thinking — async jobs, deterministic validation, confidence scoring, traceability
- Tradeoff awareness — simple retrieval over vector search, mock-first development, explicit non-goals
```
Client / API Consumer
        |
        v
Django REST Framework API
        |
        |-- Learner/Profile CRUD
        |-- Resource CRUD / Seed Data
        |-- Plan Request Endpoint
        |
        v
Celery Worker + Redis Queue
        |
        |-- Retrieve resources
        |-- Build generation context
        |-- Call LLM adapter or mock LLM
        |-- Validate output
        |-- Score confidence
        |-- Persist generated plan
        |
        v
PostgreSQL / SQLite
        |
        |-- Learners
        |-- Resources
        |-- StudyPlanRequests
        |-- GeneratedPlans
        |-- ValidationFindings
        |-- RetrievalTrace
```
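The worker steps in the diagram can be sketched as a plain function. This is a hypothetical outline, not the repo's actual Celery task or model code: function names and dict shapes are illustrative, and validation/scoring are stubbed here.

```python
# Hypothetical sketch of the worker's job, mirroring the pipeline steps in the
# diagram above. Names and schemas are illustrative assumptions; validation
# and confidence scoring are stubbed.

def generate_plan_job(request: dict, resources: list, llm_complete) -> dict:
    # 1. Retrieve: filter the resource library by topic metadata
    retrieved = [r for r in resources if r["topic"] == request["topic"]]
    # 2. Build the generation context for the LLM adapter
    context = {"goal": request["goal"], "resources": retrieved}
    # 3. Call the LLM adapter (the mock provider by default)
    plan = llm_complete(context)
    # 4. Validate output and 5. score confidence (stubbed: no findings)
    findings: list[str] = []
    confidence = max(0.0, 1.0 - 0.1 * len(findings))
    # 6. Persisting GeneratedPlan / RetrievalTrace rows is omitted here
    return {
        "plan": plan,
        "retrieved_ids": [r["id"] for r in retrieved],
        "findings": findings,
        "confidence": confidence,
    }
```

In the real service this body would run inside a Celery task enqueued by the plan-request endpoint.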
Stack: Python 3.12, Django 5.x, DRF, PostgreSQL, Celery, Redis, Docker Compose
```bash
# Install dependencies
pip install -r requirements.txt

# Run migrations
python manage.py migrate

# Load seed data
python manage.py seed_demo_data

# Start dev server
python manage.py runserver

# In another terminal, start the Celery worker
celery -A config worker -l info
```

Or run the full stack with Docker Compose:

```bash
docker compose up --build
docker compose exec web python manage.py migrate
docker compose exec web python manage.py seed_demo_data
```

Create a learner:

```bash
curl -X POST localhost:8000/api/learners/ \
  -H "Content-Type: application/json" \
  -d '{"name":"Ana","current_level":"beginner","preferred_learning_style":"mixed","weekly_hours":5,"weak_areas":["syntax"]}'
```

Request a plan:

```bash
curl -X POST localhost:8000/api/plan-requests/ \
  -H "Content-Type: application/json" \
  -d '{"learner_id":"<UUID>","goal":"Learn Python basics","topic":"python","duration_days":7,"constraints":{"max_daily_minutes":60,"must_include_practice":true}}'
```

Poll the request, fetch the plan, and inspect the trace:

```bash
curl localhost:8000/api/plan-requests/<REQUEST_UUID>/
curl localhost:8000/api/plans/<PLAN_UUID>/
curl localhost:8000/api/plan-requests/<REQUEST_UUID>/trace/
```

| Endpoint | Method | Description |
|---|---|---|
| /api/health/ | GET | Health check |
| /api/learners/ | POST/GET | Create/list learners |
| /api/learners/{id}/ | GET | Get learner |
| /api/resources/ | POST/GET | Create/list resources |
| /api/resources/{id}/ | GET | Get resource |
| /api/resources/seed/ | POST | Load seed data (26 resources) |
| /api/plan-requests/ | POST/GET | Create/list plan requests |
| /api/plan-requests/{id}/ | GET | Poll request status |
| /api/plan-requests/{id}/trace/ | GET | Retrieval + generation trace |
| /api/plans/{id}/ | GET | Get generated plan |
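Because plan generation is asynchronous, a client creates a plan request and then polls its status endpoint until the job finishes. A minimal polling helper might look like this; the terminal status values assumed here ("completed", "failed") are illustrative, not confirmed from the repo.

```python
import time

# Minimal polling helper for the async plan-request flow. `fetch_status` is
# any callable that returns the JSON body of GET /api/plan-requests/{id}/;
# the status strings below are assumptions about the API's state names.

def wait_for_plan(fetch_status, timeout_s: float = 30.0, interval_s: float = 1.0) -> dict:
    """Poll until the plan request reaches a terminal state or times out."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(interval_s)
    raise TimeoutError("plan request did not finish in time")
```

Injecting `fetch_status` as a callable keeps the helper testable without a running server.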
The system runs 8 deterministic validation checks on every generated plan:
| Check | Code | What It Catches |
|---|---|---|
| Valid resource references | MISSING_RETRIEVED_SOURCE | Task uses a resource not in the retrieved set |
| Daily time limit | DAILY_TIME_EXCEEDED | Day exceeds max_daily_minutes |
| Difficulty appropriateness | DIFFICULTY_TOO_HIGH | Resource too advanced for the learner |
| Practice requirement | NO_PRACTICE_TASKS | Missing practice tasks when required |
| Duration match | DURATION_MISMATCH | Day count != requested duration |
| Topic relevance | RESOURCE_OUTSIDE_TOPIC | Resource topic doesn't match the request |
| Weekly time alignment | WEEKLY_TIME_MISMATCH | Total time deviates from weekly hours |
| Non-empty days | EMPTY_DAY | Day has no tasks |
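Because the checks are deterministic, each one can be a pure function over the generated plan that emits finding codes. Here is an illustrative implementation of three of the checks above; the plan field names ("days", "tasks", "minutes", "resource_id") are assumptions about the schema, not the repo's exact shape.

```python
# Illustrative, pure-function versions of three checks from the table above.
# Field names are assumed, not taken from the repo's actual plan schema.

def check_daily_time(plan: dict, max_daily_minutes: int) -> list[dict]:
    """DAILY_TIME_EXCEEDED: flag any day whose task minutes exceed the limit."""
    return [
        {"code": "DAILY_TIME_EXCEEDED", "day": day["day"]}
        for day in plan["days"]
        if sum(t["minutes"] for t in day["tasks"]) > max_daily_minutes
    ]

def check_empty_days(plan: dict) -> list[dict]:
    """EMPTY_DAY: flag days with no tasks."""
    return [
        {"code": "EMPTY_DAY", "day": day["day"]}
        for day in plan["days"]
        if not day["tasks"]
    ]

def check_resource_refs(plan: dict, retrieved_ids: set) -> list[dict]:
    """MISSING_RETRIEVED_SOURCE: flag tasks referencing unretrieved resources."""
    return [
        {"code": "MISSING_RETRIEVED_SOURCE", "day": day["day"]}
        for day in plan["days"]
        for t in day["tasks"]
        if t["resource_id"] not in retrieved_ids
    ]
```

Pure functions like these are trivially unit-testable and never touch the database, which is what makes the validation layer deterministic.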
Confidence scoring starts at 1.0 and applies penalties for each finding.
| Range | Interpretation |
|---|---|
| 0.90–1.00 | High confidence — ship as-is |
| 0.70–0.89 | Acceptable — optional review |
| 0.50–0.69 | Needs human review |
| < 0.50 | Failed — regenerate |
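One plausible shape for the penalty-based scorer: start at 1.0 and subtract a fixed weight per finding code, clamped at zero. The per-code weights below are illustrative assumptions, not the repo's actual values.

```python
# Sketch of the penalty-based confidence scorer. The weights are illustrative
# assumptions; only the finding codes come from the validation table above.

PENALTIES = {
    "MISSING_RETRIEVED_SOURCE": 0.25,
    "DAILY_TIME_EXCEEDED": 0.15,
    "DIFFICULTY_TOO_HIGH": 0.10,
    "NO_PRACTICE_TASKS": 0.10,
    "DURATION_MISMATCH": 0.20,
    "RESOURCE_OUTSIDE_TOPIC": 0.15,
    "WEEKLY_TIME_MISMATCH": 0.05,
    "EMPTY_DAY": 0.10,
}

def score_confidence(findings: list[str]) -> float:
    """Start at 1.0, subtract a penalty per finding, clamp to [0.0, 1.0]."""
    score = 1.0 - sum(PENALTIES.get(code, 0.05) for code in findings)
    return max(0.0, round(score, 2))
```

With these weights, a plan with a single EMPTY_DAY finding lands in the "acceptable" band, while a DURATION_MISMATCH plus an EMPTY_DAY drops it to the review threshold.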
```bash
python manage.py run_evals
```

Runs 10 predefined cases against the pipeline and reports:
- Schema validity rate
- Constraint pass rate
- Average confidence score
- Failed cases
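The aggregation behind a report like this is straightforward. A sketch of how the metrics above might be computed from per-case results; the result keys are assumptions, not the repo's actual eval-case structure.

```python
# Hypothetical aggregation for the eval report. Each result dict is assumed
# to look like {"name", "schema_valid": bool, "passed": bool, "confidence": float}.

def summarize(results: list[dict]) -> dict:
    """Roll up per-case eval results into the four reported metrics."""
    n = len(results)
    return {
        "schema_validity_rate": sum(r["schema_valid"] for r in results) / n,
        "constraint_pass_rate": sum(r["passed"] for r in results) / n,
        "avg_confidence": sum(r["confidence"] for r in results) / n,
        "failed_cases": [r["name"] for r in results if not r["passed"]],
    }
```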
```bash
pytest                 # All tests
pytest -v              # Verbose
pytest --cov=study_ai  # With coverage
ruff check .           # Lint
```

| Decision | Rationale |
|---|---|
| Metadata retrieval over vector search | More explainable for a demo; easier to inspect and debug |
| Mock LLM provider by default | Demo works without API keys; tests are deterministic |
| Simple weighted scoring | Transparency over sophistication; a reviewer can follow the math |
| Celery over asyncio | Standard Django async pattern; handles retries and failures |
| Study plans as domain | Same system shape as any AI planning, without domain-specific IP |
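The "mock LLM provider by default" decision implies a provider-adapter seam: one interface, a deterministic mock for tests, and a slot where a real OpenAI-backed provider would plug in later. A sketch of that shape, with illustrative class names not taken from the repo:

```python
# Sketch of the provider-adapter pattern implied by "mock LLM provider by
# default". Class and function names are illustrative assumptions.

from abc import ABC, abstractmethod
import hashlib
import json

class LLMProvider(ABC):
    @abstractmethod
    def generate_plan(self, context: dict) -> dict: ...

class MockProvider(LLMProvider):
    """Deterministic output derived from the context, so tests are stable."""

    def generate_plan(self, context: dict) -> dict:
        # Hash the context so identical inputs always yield identical plans
        seed = hashlib.sha256(
            json.dumps(context, sort_keys=True).encode()
        ).hexdigest()[:8]
        days = context.get("duration_days", 7)
        return {
            "seed": seed,
            "days": [{"day": d + 1, "tasks": []} for d in range(days)],
        }

def get_provider(name: str = "mock") -> LLMProvider:
    # A real provider (e.g. OpenAI-backed) would be registered here
    if name == "mock":
        return MockProvider()
    raise ValueError(f"unknown provider: {name}")
```

Because the mock is a pure function of its input, the eval suite and unit tests stay deterministic without any API keys.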
- Hybrid retrieval — add embeddings alongside keyword scoring
- Real LLM provider — test the adapter with OpenAI
- Human feedback loop — collect plan ratings, improve ranking
- React frontend — simple plan viewer
- Deployment — Docker-based deployment with CI/CD
See the docs/ folder for full design documentation:
| Doc | Description |
|---|---|
| Vision | Purpose, positioning, success criteria |
| Architecture | Tech stack, modules, diagrams, env vars |
| Data Model | All models, fields, relationships, indexes |
| API Design | Endpoints, status codes, error format |
| AI Pipeline | Retrieval, generation, validation, scoring |
| Observability & Evals | Logging, trace, eval suite |
| Build Plan | Step-by-step implementation guide |