
Learning Path Generator SaaS

A SaaS application built on a hybrid microservices architecture that generates personalized learning paths based on trending skills and market demand. Built with Rust, Python, C++, and Next.js.

🏗️ Architecture Overview

The application consists of 5 main services:

  • Rust Backend (Actix-web): Main API gateway and request orchestrator
  • Python AI Service (FastAPI): Generates learning paths using trending data
  • Python Trends Ingestion: Collects and processes trend data from various sources
  • C++ Optimization Module: Optimizes popular learning paths for performance
  • Next.js Frontend: Modern web interface for users

🚀 Quick Start

Prerequisites

  • Docker and Docker Compose
  • Node.js 18+ (for local frontend development)
  • Rust 1.75+ (for local backend development)
  • Python 3.11+ (for local service development)

Using Docker (Recommended)

  1. Clone and navigate to the project:

    git clone https://github.com/guicybercode/hype-learning
    cd saas_learning
  2. Copy environment configuration:

    cp docker/env.example docker/.env
  3. Start all services:

    cd docker
    docker-compose up -d
  4. Access the application: the frontend typically runs at http://localhost:3000 (the Next.js default), with the Rust API on port 8080 and the AI service on port 8001.

Manual Setup

1. Database Setup

# Start PostgreSQL
docker run -d \
  --name learning-path-db \
  -e POSTGRES_DB=learning_paths \
  -e POSTGRES_USER=user \
  -e POSTGRES_PASSWORD=password \
  -p 5432:5432 \
  postgres:15

# Initialize schema
psql -h localhost -U user -d learning_paths -f database/schema.sql

2. Start Services

Rust Backend:

cd backend/rust-backend
export DATABASE_URL="postgresql://user:password@localhost:5432/learning_paths"
export AI_SERVICE_URL="http://localhost:8001"
cargo run

Python AI Service:

cd backend/ai-service
pip install -r requirements.txt
export DATABASE_URL="postgresql://user:password@localhost:5432/learning_paths"
python main.py

Trends Ingestion Service:

cd backend/trends-ingestion
pip install -r requirements.txt
export DATABASE_URL="postgresql://user:password@localhost:5432/learning_paths"
python main.py

C++ Optimizer:

cd backend/cpp-optimizer
mkdir build && cd build
cmake ..
make
./optimizer

Frontend:

cd frontend
npm install
npm run dev

📊 API Documentation

Rust Backend API (Port 8080)

Health Check

GET /api/v1/health

Get Available Skills

GET /api/v1/skills

Generate Learning Path

POST /api/v1/generate-path
Content-Type: application/json

{
  "skill_id": "uuid",
  "difficulty_level": "beginner|intermediate|advanced",
  "learning_objective": "optional string",
  "user_id": "optional uuid"
}
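As a sketch, a client payload for this endpoint could be built and validated like so. The field names mirror the JSON above; the helper function itself is illustrative and not part of the codebase:

```python
import json
import uuid

ALLOWED_DIFFICULTIES = {"beginner", "intermediate", "advanced"}

def build_generate_path_payload(skill_id, difficulty_level,
                                learning_objective=None, user_id=None):
    """Build and sanity-check a /api/v1/generate-path request body."""
    uuid.UUID(skill_id)  # raises ValueError if not a valid UUID
    if difficulty_level not in ALLOWED_DIFFICULTIES:
        raise ValueError(f"difficulty_level must be one of {ALLOWED_DIFFICULTIES}")
    payload = {"skill_id": skill_id, "difficulty_level": difficulty_level}
    if learning_objective is not None:
        payload["learning_objective"] = learning_objective
    if user_id is not None:
        uuid.UUID(user_id)
        payload["user_id"] = user_id
    return json.dumps(payload)
```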

Python AI Service API (Port 8001)

Generate Learning Path

POST /generate-path
Content-Type: application/json

{
  "skill_id": "uuid",
  "difficulty_level": "beginner|intermediate|advanced",
  "learning_objective": "optional string",
  "user_id": "optional uuid"
}

Get Skill Trends

GET /trends/{skill_id}

🗄️ Database Schema

The application uses PostgreSQL with the following main tables:

  • users: User information
  • skills: Available skills for learning paths
  • trends_data: Collected trend data from various sources
  • learning_paths: Generated learning paths
  • learning_steps: Individual steps within learning paths
  • user_requests: User requests for learning paths
  • optimized_paths: Pre-computed optimized paths

🔄 Data Flow

  1. User Request: User selects skill and difficulty level in frontend
  2. Backend Processing: Rust backend validates request and forwards to AI service
  3. Trend Analysis: AI service queries trend data and generates personalized path
  4. Path Generation: AI service creates structured learning path with 6-10 steps
  5. Response: Generated path is returned to user through backend
  6. Optimization: C++ optimizer processes popular paths for future requests
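The six steps above can be sketched end to end in a few lines. The functions here are stand-ins for the real Rust and Python services, with stubbed data in place of database queries:

```python
def validate_request(request):
    # Step 2: the Rust backend would validate skill and difficulty here.
    assert request["difficulty_level"] in {"beginner", "intermediate", "advanced"}
    return request

def fetch_trend_score(skill):
    # Step 3: the AI service queries stored trend data (stubbed values here).
    return {"python": 87.5}.get(skill, 50.0)

def generate_path(request, trend_score):
    # Step 4: produce a structured path with 6-10 steps (6 in this stub).
    n_steps = 6
    return {
        "skill": request["skill"],
        "trend_score": trend_score,
        "steps": [{"order": i + 1, "title": f"Step {i + 1}"} for i in range(n_steps)],
    }

def handle_user_request(request):
    # Steps 1-5: frontend request -> backend -> AI service -> response.
    validated = validate_request(request)
    score = fetch_trend_score(validated["skill"])
    return generate_path(validated, score)
```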

🎯 Key Features

Frontend

  • Modern, responsive UI with Tailwind CSS
  • Real-time form validation
  • Interactive learning path visualization
  • Resource links and progress tracking

Backend Services

  • JWT-based authentication (ready for implementation)
  • Comprehensive error handling
  • Health check endpoints
  • Structured logging

AI Service

  • Trend-based learning path generation
  • Difficulty-appropriate content
  • Resource type diversity
  • Prerequisite management

Trends Ingestion

  • Google Trends integration
  • Daily automated collection
  • Trend score calculation
  • Keyword extraction
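One plausible shape for the trend score calculation, blending average interest with the most recent reading. The real formula lives in the ingestion service; the recency weight below is purely illustrative:

```python
def trend_score(daily_interest, recency_weight=0.6):
    """Blend average interest with the latest reading.

    daily_interest: list of 0-100 interest values, oldest first
    (the 0-100 scale matches what Google Trends reports).
    """
    if not daily_interest:
        return 0.0
    average = sum(daily_interest) / len(daily_interest)
    latest = daily_interest[-1]
    return round(recency_weight * latest + (1 - recency_weight) * average, 2)
```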

C++ Optimization

  • Performance-critical path optimization
  • Batch processing of popular skills
  • Learning flow optimization
  • Resource diversity balancing
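Resource diversity balancing can be pictured as a round-robin interleave over resource types, so consecutive steps vary in format. The real module does this in C++; this Python sketch only illustrates the idea:

```python
from collections import defaultdict
from itertools import zip_longest

def balance_by_resource_type(steps):
    """Reorder steps so consecutive steps alternate resource types."""
    buckets = defaultdict(list)
    for step in steps:
        buckets[step["resource_type"]].append(step)
    balanced = []
    # Take one step from each type per round until all buckets are drained.
    for group in zip_longest(*buckets.values()):
        balanced.extend(s for s in group if s is not None)
    return balanced
```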

🛠️ Development

Project Structure

saas_learning/
├── backend/
│   ├── rust-backend/          # Main API service
│   ├── ai-service/            # Learning path generation
│   ├── trends-ingestion/      # Trend data collection
│   └── cpp-optimizer/         # Performance optimization
├── frontend/                  # Next.js web application
├── database/                  # Schema and migrations
├── docker/                    # Docker configurations
└── docs/                      # Documentation

Adding New Skills

  1. Insert the skill into the database:

INSERT INTO skills (name, description, category)
VALUES ('New Skill', 'Description', 'Category');

  2. The trends ingestion service will automatically collect trend data for the new skill.

Extending Trend Sources

To add new trend sources, modify backend/trends-ingestion/services.py:

class NewTrendSource:
    async def get_trends_data(self, keyword: str):
        # Implement trend collection logic
        pass
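For instance, a hypothetical source that serves canned data. The class name and the data are made up for illustration; only the method signature follows the stub above:

```python
import asyncio

class StaticTrendSource:
    """Example source returning fixed interest values; illustration only."""

    _data = {"rust": [62, 68, 71], "python": [88, 90, 91]}

    async def get_trends_data(self, keyword: str):
        # A real source would await an HTTP call here instead.
        return {"keyword": keyword, "interest": self._data.get(keyword, [])}
```

The ingestion loop would then await `get_trends_data(keyword)` on each registered source in turn.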

Customizing Learning Path Generation

Modify backend/ai-service/services.py to customize:

  • Step templates for different difficulty levels
  • Resource type preferences
  • Learning objective integration

📈 Performance Considerations

  • Response Time: < 2 seconds for cached paths
  • Database: Optimized queries with proper indexing
  • Caching: Redis integration recommended for production
  • Load Balancing: Multiple service instances for high availability

🔒 Security

  • Environment variable configuration
  • Database connection encryption
  • Input validation and sanitization
  • Rate limiting (recommended for production)
  • HTTPS enforcement (production)

🚀 Deployment

Production Considerations

  1. Environment Variables: Update all secrets in production
  2. Database: Use managed PostgreSQL service
  3. Monitoring: Implement logging and metrics collection
  4. Scaling: Use container orchestration (Kubernetes)
  5. Backup: Regular database backups

Docker Production Build

# Build production images
docker-compose -f docker-compose.yml -f docker-compose.prod.yml build

# Deploy with production configuration
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

📝 License

This project is licensed under the MIT License - see the LICENSE file for details.

🆘 Support

For support and questions:

  • Check the documentation in /docs
  • Review the API endpoints
  • Check service logs for debugging
  • Open an issue for bugs or feature requests

🔮 Future Enhancements

  • User authentication and profiles
  • Progress tracking and analytics
  • Social features (sharing paths)
  • Mobile application
  • Advanced AI recommendations
  • Integration with learning platforms
  • Real-time trend notifications