A modern, full-stack web application that scrapes lottery results from Google Sheets (and optionally Apify), stores them in InstantDB, and provides six core ML prediction models plus Miro — an optional LLM swarm-style meta-predictor — for multiple lottery games.

📚 Detailed Documentation: For comprehensive system documentation including workflow flowchart and architecture details, see SOFTWARE_DOCUMENTATION.html
Modern Tech Aesthetic:
- Electric Blue (#3498DB) – Innovation, clarity
- Bright Orange (#E67E22) – Excitement, urgency
- Charcoal Black (#2C3E50) – Sleek, modern background
- Silver (#BDC3C7) – Futuristic accents
Typography:
- BayanWin Title: Montserrat Bold
- Clean, modern UI with smooth animations and hover effects
Automated Data Scraping:
- Auto-scrapes new data when a game is selected
- Default path uses the Google Sheets API (via `gspread`) to read only a sliding row window when `GOOGLE_SERVICE_ACCOUNT_FILE` is set; cursors are stored in InstantDB (`sheet_ingest_cursors`)
- Falls back to a full public CSV export per scrape when service-account credentials are missing
- `POST /api/scrape` accepts `full_sync: true` (body or `?full_sync=true`) to re-download the whole sheet, reconcile duplicates, and reset the ingest cursor — schedule weekly for safety
- Automatically detects and skips duplicate entries based on draw_date and draw_number
- Supports 5 lottery games with separate data sources
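The duplicate check described above can be sketched in a few lines. This is illustrative only: `dedupe_new_rows`, the field names, and the exact key format are assumptions based on this README, not the scraper's actual code.

```python
def dedupe_new_rows(existing_rows, scraped_rows):
    """Skip rows whose (draw_date, draw_number) pair is already stored.

    Hypothetical sketch of the scraper's duplicate check; the real
    implementation lives in backend/scrapers/ and the InstantDB
    bridge scripts.
    """
    seen = {f"{r['draw_date']}|{r['draw_number']}" for r in existing_rows}
    fresh = []
    for row in scraped_rows:
        key = f"{row['draw_date']}|{row['draw_number']}"
        if key not in seen:
            seen.add(key)
            fresh.append(row)
    return fresh
```

A `full_sync` run would apply the same check over the entire CSV export instead of just the incremental window.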
InstantDB Database Integration:
- Backend-as-a-Service (BaaS) for seamless data management
- Backend uses InstantDB Admin SDK via Node.js bridge scripts for reliable writes
- REST API used for reads and queries
- All predictions are automatically saved to InstantDB
- Automatic accuracy calculation when new results are scraped
Six core ML prediction models:
- XGBoost: Gradient boosting model using historical patterns (~6-10 seconds)
- Decision Tree: Random Forest classifier based on frequency analysis (~4-6 seconds)
- Markov Chain: State transition model for sequence prediction (~1-3 seconds)
- Anomaly Detection: Monte Carlo / Gaussian (sum/product) distribution analysis for highest-probability patterns (~0.5-3 seconds)
- NashHotFilter: Nash Equilibrium mixed-strategy + Hot-Number probability filter (smart wheel, 3-even/3-odd balance; instant)
- Deep Reinforcement Learning (DRL): DRL agent with 3 feedback loops, continuously improves through accuracy feedback (~20-40 seconds, 5 episodes)
Miro — LLM “swarm” synthesis (7th prediction)
Naming: the codebase and UI use `Miro` (`model_type: "Miro"`). “Swarm” here means a coordinated multi-voice LLM workflow (specialist round + chair), not separate third-party agent services.
After the six models finish, the backend can run Miro: a two-step OpenAI-compatible LLM workflow — one structured JSON “round table” simulating the six specialist names, then a chairman call that outputs six numbers.
- Round 1: The model is prompted to simulate six named specialists (same names as your ML models: XGBoost … DRL) and return JSON (`reaction_to_others`, optional `preferred_numbers`, `concerns`) over a shared analytics bundle.
- Round 2 (chairman): A second call reads that transcript plus the same context and returns `final_numbers` (six distinct integers in game bounds).
- Context (`build_miro_context` in `backend/services/miro_strategy.py`) includes: base model picks, pairwise overlap, historical error-by-model, hot/cold snapshot, overdue numbers, Gaussian sum + product bands (including log-mean / log-std on products via `backend/utils/gaussian_summary.py`), top co-occurrence and cross-draw transition edges, and draw metadata. Apify-ingested rows are included automatically because they live in the same InstantDB `*_results` tables.
- Validation: Bounds and uniqueness checks, one repair LLM pass if needed, then a deterministic vote fallback across the six base models’ numbers so the UI rarely breaks.
- Persistence: Same shape as other models — `model_type: "Miro"` saved via InstantDB.
- Config: Requires `LLM_API_KEY`; toggle with `MIRO_STRATEGY_ENABLED` (see `backend/.env.example`). Advisory only — does not retrain the six ML models.
- UI: “Core models” grid plus a separate “Miro — LLM synthesis” panel in `PredictionDisplay`.
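The deterministic vote fallback mentioned under Validation could look roughly like this. It is a sketch: `vote_fallback` and its tie-breaking rule are illustrative, not the code in `backend/services/miro_strategy.py`.

```python
from collections import Counter


def vote_fallback(base_picks, pick_count=6, low=1, high=58):
    """Deterministic fallback when the LLM output fails validation.

    Sketch only: a majority vote across the base models' numbers,
    with ties broken by the smaller number so the result is stable.
    """
    counts = Counter(
        n for pick in base_picks for n in pick if low <= n <= high
    )
    # Most frequent first; ties broken by the smaller number.
    ranked = sorted(counts, key=lambda n: (-counts[n], n))
    return sorted(ranked[:pick_count])
```

Because the vote only draws from the six base picks, the fallback always yields in-bounds, distinct numbers, which is what keeps the UI from breaking.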
AI Council (separate from Miro)
Optional advisory prose (`POST /api/predict/.../council-report` or bundled when `include_council=true`): overlap, historical leaders, caveats — text only, not a replacement for Miro’s numeric pick.
Smart Model Training:
- Models automatically retrain when switching between game types
- Parallel processing for faster prediction generation
- Real-time training status indicators
Modern Web Interface:
- React 18 frontend with Vite, Tailwind CSS, and modern tech design
- Real-time "Learning..." status indicators for each model
- Partial results display - shows successful predictions immediately
- Error states clearly displayed for failed models
- Responsive design with smooth animations
Accuracy Tracking & Analysis:
- Auto-calculate accuracy when predictions match actual results
- Error Distance Analysis with multiple metrics
- Track prediction accuracy trends over time
- Compare model performance across different time periods
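One plausible shape for these accuracy metrics is sketched below. The formula is an assumption for illustration: the project's real computation lives in `backend/utils/error_distance_calculator.py` and may differ.

```python
def error_metrics(predicted, actual):
    """Illustrative accuracy metrics for one prediction vs. one draw.

    Assumption: error distance is the sum of absolute differences
    between the sorted picks and the sorted winning numbers. The
    project's actual formula may differ.
    """
    p, a = sorted(predicted), sorted(actual)
    return {
        "numbers_matched": len(set(p) & set(a)),
        "error_distance": sum(abs(x - y) for x, y in zip(p, a)),
    }
```

Tracking these two values per model over time is enough to plot the accuracy trends described above.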
Statistical Analysis:
- Frequency Analysis: Hot numbers, cold numbers, overdue numbers
- Gaussian Distribution Analysis: Visualize sum and product distributions with scatter plots
- Highlights draws with winners
- Statistical analysis of number patterns
- Real-time statistics dashboard
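The sum and product summaries can be sketched as follows. This is illustrative of the kind of output `backend/utils/gaussian_summary.py` produces (the README mentions log-mean / log-std on products); the field names here are assumptions.

```python
import math
from statistics import mean, stdev


def gaussian_summary(draws):
    """Sum band plus log-mean/log-std of products over historical draws.

    Products of six numbers get huge, so the log of the product
    (i.e. the sum of logs) is summarized instead. Field names are
    illustrative, not the project's actual schema.
    """
    sums = [sum(d) for d in draws]
    log_products = [sum(math.log(n) for n in d) for d in draws]
    return {
        "sum_mean": mean(sums),
        "sum_std": stdev(sums),
        "product_log_mean": mean(log_products),
        "product_log_std": stdev(log_products),
    }
```

Draws whose sum falls far outside `sum_mean ± sum_std` are the kind the Anomaly Detection model would flag as low-probability.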
Supported Lottery Games:
- Ultra Lotto 6/58
- Grand Lotto 6/55
- Super Lotto 6/49
- Mega Lotto 6/45
- Lotto 6/42
LOF_V2/
├── backend/ # FastAPI backend API
│ ├── app.py # Main FastAPI application
│ ├── config.py # Configuration (InstantDB credentials, Google Sheets IDs)
│ ├── services/ # InstantDB, Apify ingest, prediction council, Miro LLM strategy
│ ├── ml_models/ # 6 ML prediction models
│ ├── scrapers/ # Google Sheets scraper (CSV + gspread incremental)
│ ├── scripts/ # Node.js bridge scripts for InstantDB writes
│ │ ├── save_results.js # Save lottery results via Admin SDK
│ │ ├── save_predictions.js # Save predictions via Admin SDK
│ │ └── query_results.js # Query results with proper sorting
│ ├── utils/ # Utility functions
│ └── requirements.txt # Python dependencies
├── frontend/ # React frontend with Vite
│ ├── src/
│ │ ├── components/ # React components
│ │ ├── services/ # API service layer
│ │ ├── assets/ # Images (Logo.png)
│ │ └── styles/ # CSS styles
│ ├── package.json # Node dependencies
│ └── tailwind.config.js # Tailwind configuration
├── lof-v2-db/ # InstantDB schema and configuration
├── .gitignore # Git ignore rules
├── README.md # This file
└── SOFTWARE_DOCUMENTATION.html # Detailed system documentation with flowchart
- Python 3.8+ (Python 3.13+ recommended)
- Node.js 16+ (required for InstantDB Admin SDK bridge scripts)
- InstantDB Account (https://www.instantdb.com)
- Google Sheets with publicly accessible lottery data (or service account credentials)
- Navigate to backend directory:
cd backend
- Create virtual environment:
python -m venv venv
- Activate virtual environment:
- Windows PowerShell:
.\venv\Scripts\Activate.ps1
- Windows Command Prompt:
venv\Scripts\activate.bat
- Linux/Mac:
source venv/bin/activate
- Install Python dependencies:
python -m pip install --upgrade pip
pip install -r requirements.txt
- Install Node.js dependencies (for InstantDB bridge scripts):
npm install @instantdb/admin
Note: The Node.js bridge scripts are required for saving data to InstantDB. The Admin SDK provides reliable write operations.
- Set up environment variables:
Create a .env file in the backend directory:
# InstantDB Configuration (REQUIRED)
INSTANTDB_APP_ID=your-app-id-here
INSTANTDB_ADMIN_TOKEN=your-admin-token-here
# Google Sheets
# Service account JSON: required for private sheets and for incremental API sync (recommended).
GOOGLE_SERVICE_ACCOUNT_FILE=path/to/service-account.json
SHEETS_INCREMENTAL_ENABLED=true
SHEETS_INCREMENTAL_WINDOW=250
SHEETS_WORKSHEET_NAME=Sheet1
# Optional (for uvicorn reload)
DEBUG=True
# OpenAI-compatible LLM — required for Miro (7th prediction) and optional AI Council
LLM_API_KEY=
LLM_BASE_URL=https://api.openai.com/v1
LLM_MODEL_NAME=gpt-4o-mini
LLM_COUNCIL_ENABLED=true
MIRO_STRATEGY_ENABLED=true
# Apify (optional) — merge actor dataset into same InstantDB *_results as Sheets
APIFY_API_TOKEN=
APIFY_ACTOR_ID=
APIFY_AUTO_INGEST=true
Get your InstantDB credentials:
- App ID: https://www.instantdb.com/dash → Your App → App ID
- Admin Token: https://www.instantdb.com/dash → Admin → Secret field (click to reveal)
Google Sheets:
- Sheet IDs are configured in `backend/config.py`
- With `GOOGLE_SERVICE_ACCOUNT_FILE` set (and the spreadsheet shared with that service account as Viewer, Google Sheets API enabled on the GCP project), scrapes use incremental range reads and persist the next row in InstantDB
- Without credentials, the backend uses the public CSV export URL every time (full download)
- Edits or new rows inserted above the current cursor are not seen until you run a `full_sync` scrape (full CSV + cursor reset)
- Assumes a single header row on the tab named by `SHEETS_WORKSHEET_NAME` (default `Sheet1`)
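The sliding-window read boils down to building an A1 range from the stored cursor and advancing the cursor by the rows actually fetched. The sketch below is an assumption about how such a cursor works; the real cursor lives in the InstantDB `sheet_ingest_cursors` entity and the column span is game-specific.

```python
def next_window(cursor_row, window, last_col="H"):
    """Build the next A1 range for an incremental Sheets read.

    Assumes the cursor stores the last-read row index (header row
    included) and data spans columns A..last_col. Illustrative only.
    """
    start = cursor_row + 1
    end = cursor_row + window
    return f"A{start}:{last_col}{end}"


def advance_cursor(cursor_row, rows_fetched):
    # The cursor only moves forward by the number of non-empty rows
    # actually read, so a short window simply yields a short batch.
    return cursor_row + rows_fetched
```

This also explains the `full_sync` caveat above: rows edited or inserted *behind* the cursor are never covered by `next_window`, so only a full CSV pass with a cursor reset can pick them up.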
- Deploy InstantDB Schema:
Navigate to the lof-v2-db directory and deploy the schema:
cd ../lof-v2-db
npm install
npm run dev
This deploys the database schema and permissions required for the app to function.
- Run FastAPI server:
uvicorn app:app --host 0.0.0.0 --port 5000 --reload
The API will be available at http://localhost:5000
- API docs: http://localhost:5000/docs
- Alternative docs: http://localhost:5000/redoc
- Navigate to frontend directory:
cd frontend
- Install dependencies:
npm install
- Start development server:
npm run dev
The frontend will be available at http://localhost:3000 (port set in vite.config.js)
Note: The frontend communicates exclusively with the backend API. No InstantDB SDK or frontend .env file is required.
The application is deployed on Google Cloud Run for production use. For complete deployment documentation, see:
- GOOGLE_CLOUD_DEPLOYMENT.md - Detailed markdown guide
- GOOGLE_CLOUD_DEPLOYMENT.html - Browser-friendly HTML guide
Deployed Services:
- Frontend: React app on Cloud Run (https://lof-frontend-XXXXX.run.app)
- Backend: FastAPI API on Cloud Run (https://lof-backend-XXXXX.run.app)
- Database: InstantDB (cloud-hosted, no deployment needed)
Deployment Process:
- Backend: Build and deploy to Cloud Run with InstantDB credentials
- Frontend: Build with backend URL and deploy to Cloud Run
- Schema: Deploy InstantDB schema once (local `npm run dev`)
Checking Your Project ID:
# See current Google Cloud project ID
gcloud config get-value project
# List all projects
gcloud projects list
Updating Deployments:
- Backend changes: Rebuild and redeploy backend service
- Frontend changes: Rebuild with backend URL and redeploy
- Schema changes: Run `npm run dev` in `lof-v2-db` to sync
For detailed step-by-step instructions, troubleshooting, and update procedures, see the Google Cloud Deployment Guide.
The .env file in the backend directory should contain:
| Variable | Required | Description |
|---|---|---|
| `INSTANTDB_APP_ID` | ✅ Yes | Your InstantDB App ID from dashboard |
| `INSTANTDB_ADMIN_TOKEN` | ✅ Yes | Your InstantDB Admin Token (Secret) |
| `GOOGLE_SERVICE_ACCOUNT_FILE` | ❌ No | Service account JSON — private sheets + incremental Sheets API sync |
| `SHEETS_INCREMENTAL_ENABLED` | ❌ No | true/false — disable API incremental path (default true) |
| `SHEETS_INCREMENTAL_WINDOW` | ❌ No | Rows per incremental get (default 250) |
| `SHEETS_WORKSHEET_NAME` | ❌ No | Worksheet tab name (default Sheet1) |
| `DEBUG` | ❌ No | Set to True for uvicorn auto-reload (development) |
| `LLM_API_KEY` | ✅ For Miro / Council | OpenAI-compatible API key (sk- or sk-proj-) |
| `LLM_BASE_URL` | ❌ No | Default https://api.openai.com/v1 |
| `LLM_MODEL_NAME` | ❌ No | Chat model id (e.g. gpt-4o-mini) |
| `MIRO_STRATEGY_ENABLED` | ❌ No | true/false — run Miro after the six ML models (default true) |
| `APIFY_API_TOKEN` | ❌ No | Run actor + ingest dataset into InstantDB |
| `APIFY_ACTOR_ID` | ❌ No | Actor id for PCSO / bulletin scrape |
| `APIFY_AUTO_INGEST` | ❌ No | If true, optional auto-run after POST /api/scrape |
Important:
- Never commit `.env` files to Git
- InstantDB credentials are required for backend to function
- Without `GOOGLE_SERVICE_ACCOUNT_FILE`, Google Sheets are accessed via public CSV export only (full sheet each scrape)
- Node.js and `@instantdb/admin` are required for saving data
- No PostgreSQL connection string needed - InstantDB handles everything!
Actor runs can append normalized draw rows into the same InstantDB *_results entities used by Google Sheets (backend/services/apify_ingest.py). Dedupe uses draw_date|draw_number. Triggers:
- `POST /api/ingest/apify` — body: `run_id`, optional `game_type`
- `POST /api/webhooks/apify` — optional HMAC/secret if configured
- After `POST /api/scrape` — when `APIFY_API_TOKEN`, `APIFY_ACTOR_ID`, and `APIFY_AUTO_INGEST` are set
Downstream Miro, graphs (co-occurrence, transitions), and statistics automatically include Apify-backed rows because they all read get_results.
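If the webhook secret is configured, the HMAC check on `POST /api/webhooks/apify` could look like the sketch below. The header name, encoding, and signing scheme are assumptions; check your Apify webhook configuration for the real details.

```python
import hashlib
import hmac


def verify_webhook(secret: str, body: bytes, signature_hex: str) -> bool:
    """Constant-time HMAC-SHA256 check for an incoming webhook body.

    Sketch only: assumes the sender signs the raw request body with
    a shared secret and sends the hex digest in a header.
    """
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(expected, signature_hex)
```

Rejecting requests that fail this check keeps arbitrary callers from injecting rows into the `*_results` entities.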
- `GET /api/games` - List all available games
- `GET /api/results/{game_type}` - Get historical results (paginated, sorted by draw_date)
  - Query params: `page`, `limit`
- `POST /api/scrape` - Trigger data scraping from Google Sheets
  - Body: `{ "game_type": "ultra_lotto_6_58", "full_sync": false }` (`game_type` optional — scrapes all games if omitted)
  - Query: `?full_sync=true` — same as `full_sync: true` in the body (weekly reconcile recommended)
  - Response stats include per-game `sync_mode`, `rows_fetched`, and `cursor_after` when available
  - Auto-scrapes when a game is selected in the frontend
  - Automatically skips duplicate entries based on draw_date and draw_number
- `POST /api/predict/{game_type}` - Generate predictions from all six core ML models, then Miro (if `LLM_API_KEY` is set and `MIRO_STRATEGY_ENABLED=true`)
  - Query param: `include_council` — optional LLM advisory report (separate from Miro’s numeric pick)
  - Returns a predictions map including `Miro` (numbers or error); saves each pick to InstantDB
  - Triggers background accuracy calculation
- `POST /api/predict/{game_type}/council-report` - LLM advisory summary (agreement, outliers, caveats, etc.)
- `GET /api/predictions/{game_type}` - Get stored predictions
  - Query params: `limit`
- `GET /api/predictions/{game_type}/accuracy` - Get prediction accuracy metrics
  - Query params: `limit`
  - Returns error distance, numbers matched, and distance metrics
- `POST /api/predictions/{prediction_id}/calculate-accuracy` - Calculate accuracy for a prediction
  - Body: `{ "result_id": "...", "game_type": "..." }`
- `POST /api/accuracy/auto-calculate` - Manually trigger auto-calculation of accuracy
  - Body: `{ "game_type": "..." }` (optional - processes all games if omitted)
- `GET /api/stats/{game_type}` - Get frequency statistics
  - Returns: hot numbers, cold numbers, overdue numbers, general stats
- `GET /api/stats/{game_type}/gaussian` - Get Gaussian distribution analysis
  - Returns: sum/product distributions, statistics, winners data for scatter plot visualization
- `GET /api/graphs/{game_type}/cooccurrence` — pair counts within draws
- `GET /api/graphs/{game_type}/markov-edges` — directed transitions between consecutive draws
- `POST /api/graphs/{game_type}/sankey` — hot vs “other” counts per model pick (body: current `predictions` map)
- `GET /api/accuracy/diagnostics/{game_type}` - Get diagnostic info (results/predictions/accuracy counts, date ranges, matching status) for debugging
- `GET /api/health` - API health check
Full API Documentation: Visit http://localhost:5000/docs when backend is running
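The two most-used contracts can be sketched as tiny request builders; send the result with any HTTP client (e.g. `requests.post(url, json=body)`). The helper names are illustrative, but the paths and fields match the endpoint list above.

```python
BASE = "http://localhost:5000"


def scrape_request(game_type=None, full_sync=False):
    """URL and JSON body for POST /api/scrape.

    game_type is optional (all games are scraped when omitted);
    full_sync may equally be passed as ?full_sync=true.
    """
    body = {"full_sync": full_sync}
    if game_type is not None:
        body["game_type"] = game_type
    return f"{BASE}/api/scrape", body


def predict_request(game_type, include_council=False):
    """URL for POST /api/predict/{game_type}, optionally with council."""
    url = f"{BASE}/api/predict/{game_type}"
    if include_council:
        url += "?include_council=true"
    return url
```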
- Deploy InstantDB schema (run `npm run dev` in the `lof-v2-db` directory)
- Start the backend server (port 5000)
- Start the frontend development server (port 3000)
- Open browser to http://localhost:3000
- Select a Game from the game selector
- Automatically scrapes new data from Google Sheets
- Validates and saves new results to InstantDB (skips duplicates)
- Auto-calculates accuracy for matching predictions and results
- Generate Predictions by clicking "⚡ Generate Predictions"
- System fetches historical data from InstantDB
- All six core ML models train and predict (thread pool with per-model timeouts)
- Miro runs afterward (LLM swarm synthesis, ~2 API calls, server timeout up to ~180s) when enabled and `LLM_API_KEY` is set
- Predictions appear in the UI; all picks including Miro are saved to InstantDB
- Background process matches predictions to results and calculates accuracy
- View Results & Analysis
- Predictions Display: Core models in a grid; Miro — LLM synthesis in a separate panel below
- Historical Results: Browse past lottery results with pagination
- Statistics Panel: View hot/cold/overdue numbers and frequency analysis
- Error Distance Analysis: Track prediction accuracy with detailed metrics
- Gaussian Distribution: Visualize sum/product distributions with scatter plots
- Highlights draws with winners
- Statistical analysis of number patterns
- DRL Learning Loop (Automatic)
- DRL agent receives feedback from accuracy calculations
- Continuously improves predictions based on error metrics
- Learning happens automatically when accuracy records are available
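A toy version of such a feedback loop is shown below. It is purely illustrative: the project's DRL agent (episodes, reward shaping, three feedback loops) is far more involved, and `feedback_update` is a hypothetical name.

```python
def feedback_update(weights, predicted, actual, lr=0.1):
    """Toy feedback loop: nudge per-number weights toward observed draws.

    Numbers that actually appeared are reinforced; predicted numbers
    that missed are penalized. Illustrative only, not the real agent.
    """
    for n in actual:
        weights[n] = weights.get(n, 0.0) + lr          # reinforce hits
    for n in predicted:
        if n not in actual:
            weights[n] = weights.get(n, 0.0) - lr / 2  # penalize misses
    return weights
```

Run over every new accuracy record, updates like this gradually bias future picks toward historically rewarded numbers.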
Each stage below maps to the actual code paths so you can trace requests end-to-end.
- Frontend: In `frontend/src/App.jsx`, `handleGameSelect(gameType)` runs when the user picks a game. It sets `selectedGame`, then calls `scrapeData({ game_type: gameType })` from `frontend/src/services/api.js`.
- API call: `api.js` uses Axios with `baseURL: API_BASE_URL` (from `frontend/src/utils/constants.js`, default `http://localhost:5000`). So the request is `POST /api/scrape` with body `{ "game_type": "ultra_lotto_6_58" }` (or the chosen game).
- Backend: In `backend/app.py`, `@app.post("/api/scrape")` receives the request. It builds a `GoogleSheetsScraper()`, then calls `scrape_game(..., full_sync=...)` or `scrape_all_games(full_sync=...)`. Incremental pulls use `gspread` when credentials exist; `full_sync` forces the CSV path and resets the stored cursor. Rows are written via `instantdb_client` / `save_results.js`.
- After scrape: If new results were added, the backend calls `auto_calculate_accuracy_for_new_results()` and may trigger DRL learning from new accuracy records. The response (e.g. `success`, `stats`, `message`) is returned to the frontend.
- Frontend: In `App.jsx`, `handleGeneratePredictions()` calls `generatePredictions(selectedGame)` from `api.js`, which sends `POST /api/predict/{game_type}` to the backend.
- Backend: In `app.py`, `@app.post("/api/predict/{game_type}")` runs the six core entries in `model_types` (XGBoost … DRL) with `ThreadPoolExecutor` timeouts. Historical data is read from InstantDB via each model / `data_processor` as before.
- Miro: After that loop, if `MIRO_STRATEGY_ENABLED` and `LLM_API_KEY` are set, `run_miro_strategy_predict` from `backend/services/miro_strategy.py` runs (separate pool, long timeout), then `create_prediction` with `model_type: "Miro"`. If the LLM key is missing, the response still includes `Miro` with an error message for the UI.
- Saving predictions: Each successful prediction is stored with `instantdb.create_prediction(...)`. A background thread runs `auto_calculate_accuracy_for_new_results(game_type)` afterward.
- Response: `{ success, game_type, target_draw_date, predictions, timestamp }` plus optional `council_report` when requested. `predictions` includes `Miro`.
- Historical results: Components like `HistoricalResults` call `getResults(gameType, page, limit)` → `GET /api/results/{game_type}`. In `app.py`, `@app.get("/api/results/{game_type}")` (around line 96) uses `instantdb.get_results(...)` and returns paginated results.
- Statistics: `StatisticsPanel` uses `getStatistics(gameType)` → `GET /api/stats/{game_type}`. The backend uses `utils/frequency_analysis.py` (e.g. `get_hot_numbers`, `get_cold_numbers`, `get_overdue_numbers`) and returns JSON for the panel.
- Error distance: `ErrorDistanceAnalysis` uses `getPredictionAccuracy(gameType)` → `GET /api/predictions/{game_type}/accuracy`. The backend uses `instantdb.get_prediction_accuracy()` and `utils/error_distance_calculator.py` to build the metrics returned to the frontend.
- Gaussian view: Any component that shows Gaussian distribution calls `getGaussianDistribution(gameType)` → `GET /api/stats/{game_type}/gaussian`; the backend computes sum/product stats and returns data for the scatter visualization.
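The frequency analysis behind the statistics endpoint can be sketched like this. The definitions (hot = most frequent, cold = least frequent among seen numbers, overdue = longest gap since last appearance) are assumptions about `utils/frequency_analysis.py`, stated here for illustration.

```python
from collections import Counter


def frequency_stats(draws, top=5):
    """Hot, cold, and overdue numbers from historical draws.

    Assumes `draws` is ordered oldest to newest. Sketch of the kind
    of analysis in backend/utils/frequency_analysis.py.
    """
    counts = Counter(n for draw in draws for n in draw)
    last_seen = {}
    for i, draw in enumerate(draws):
        for n in draw:
            last_seen[n] = i  # index of the most recent appearance
    hot = [n for n, _ in counts.most_common(top)]
    cold = sorted(counts, key=lambda n: (counts[n], n))[:top]
    overdue = sorted(last_seen, key=lambda n: last_seen[n])[:top]
    return {"hot": hot, "cold": cold, "overdue": overdue}
```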
- Why CORS matters: The React app runs in the browser at http://localhost:3000. The API runs at http://localhost:5000. Browsers enforce the same-origin policy, so a request from the frontend origin to the backend origin is “cross-origin” and the backend must send allowed CORS headers for the browser to accept the response.
- Backend CORS (FastAPI): In `backend/app.py`, right after creating the FastAPI app (around lines 50–56), the app adds `CORSMiddleware`:
  - `allow_origins=["*"]` — any origin (e.g. http://localhost:3000) can call the API. In production you should set this to your real frontend origin(s).
  - `allow_credentials=True` — allows cookies/credentials if you add them later.
  - `allow_methods=["*"]` and `allow_headers=["*"]` — all usual HTTP methods and headers are allowed.

  So every response from the FastAPI server includes CORS headers that tell the browser “this cross-origin response is allowed.”
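The allow-list decision itself is simple to reason about. The sketch below mirrors the basic behavior of Starlette's `CORSMiddleware` for plain origin strings: `"*"` is a wildcard, otherwise the origin (scheme + host + port) must match exactly. (The real middleware also supports regex origins and credentialed-request nuances not shown here.)

```python
def cors_allows(origin, allow_origins):
    """Would a CORSMiddleware-style allow list accept this origin?

    Simplified: "*" matches anything; otherwise exact string match
    on the full origin. Illustrative, not the middleware's code.
    """
    return "*" in allow_origins or origin in allow_origins
```

This is why tightening production config means listing the exact deployed frontend origin rather than a bare hostname.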
- Development proxy (optional): In development, `frontend/vite.config.js` configures a proxy: requests to path `/api` are forwarded to `http://localhost:5000`. The frontend can then use `API_BASE_URL` as the same origin (e.g. relative `/api` or full URL depending on config). When using the proxy, the browser only sees same-origin requests to the Vite server; Vite forwards `/api/*` to the backend, so CORS is less of an issue in that setup. If the frontend is configured to call `http://localhost:5000` directly (e.g. `API_BASE_URL = 'http://localhost:5000'`), then CORS middleware on the backend is what allows those cross-origin requests to succeed.
BayanWin follows a three-tier architecture with clear separation of concerns:
- Frontend Layer: React-based user interface with real-time updates
- Backend Layer: FastAPI REST API with ML model orchestration
- Data Layer: InstantDB BaaS for data storage and management
📊 For detailed architecture diagrams and workflow flowchart, see SOFTWARE_DOCUMENTATION.html
Backend:
- FastAPI - Modern Python web framework with async support
- InstantDB - Backend-as-a-Service (REST API + Admin SDK via Node.js)
- Uvicorn - ASGI server for high-performance async operations
- Pandas - Google Sheets CSV reading and data processing
- XGBoost, TensorFlow, scikit-learn - ML libraries for predictions
- OpenAI-compatible API (`openai` Python SDK) — Miro + optional AI Council
- NumPy - Numerical computing and array operations
- Node.js - Bridge scripts for InstantDB Admin SDK writes
Frontend:
- React 18 - Modern UI library with hooks
- Vite - Fast build tool and dev server
- Tailwind CSS - Utility-first CSS framework
- Axios - HTTP client for API communication
- D3.js - Co-occurrence, cross-draw transition, and hot-band Sankey views
- Recharts - Chart library for data visualization
- React Router - Client-side routing
- Electric Blue (`#3498DB`): Primary actions, headers, accents
- Bright Orange (`#E67E22`): CTAs, number balls, highlights
- Charcoal Black (`#2C3E50`): Background, dark elements
- Silver (`#BDC3C7`): Borders, subtle accents
- BayanWin Title: Montserrat Bold (Google Fonts)
- Body: Inter, system fonts
- Data Source: Lottery data is scraped from Google Sheets (public CSV or Sheets API + service account)
- Auto-Scraping: Data is automatically scraped when a game is selected
- Duplicate Detection: System automatically skips duplicate entries based on draw_date and draw_number
- Auto-Accuracy Calculation: Accuracy is automatically calculated when new results are scraped
- XGBoost: ~6-10 seconds per prediction (includes training time)
- Decision Tree: ~4-6 seconds per prediction
- Markov Chain: ~1-3 seconds per prediction
- Anomaly Detection: ~0.5-3 seconds per prediction (vectorized Monte Carlo)
- NashHotFilter: Instant (Nash equilibrium + hot-number filter, no training)
- DRL Agent: ~20-40 seconds per prediction (5 episodes, continuous learning)
- Miro (LLM swarm): Highly variable — typically tens of seconds to a few minutes (two `chat.completions` JSON calls; server timeout 180s). Requires `LLM_API_KEY`. Disable with `MIRO_STRATEGY_ENABLED=false` to save latency/cost.
- Total wall time: Six core models (most run in parallel) plus Miro when enabled — budget several minutes end-to-end on a slow LLM.
- Smart Retraining: Models automatically retrain when switching between game types
- DRL Feedback Loop: DRL agent continuously improves through feedback from accuracy records
- Historical Data Requirement: Historical data is required for accurate predictions
- First-time Training: First-time prediction generation may take longer as models train
- Node.js Required: Must have Node.js installed for InstantDB writes to work (Admin SDK bridge scripts)
- Environment Variables: Make sure your InstantDB credentials are correct in `.env`
- Schema Deployment: Must deploy InstantDB schema before first use (run `npm run dev` in `lof-v2-db`)
- Ports:
  - Frontend: Vite dev server (port 3000; see `frontend/vite.config.js`)
  - Backend: FastAPI/Uvicorn (port 5000)
- Prediction Saving: All predictions are automatically saved to InstantDB
- Accuracy Tracking: All accuracy metrics are stored for trend analysis
- Result Storage: Historical results are stored with full metadata (draw_date, numbers, jackpot, winners)
- Environment Variables: `.env` files are gitignored - never commit sensitive data
- Dependencies: `venv/` and `node_modules/` are gitignored
- Credentials: InstantDB Admin Token should be kept secret and never shared
- Google Sheets: Service account credentials (if used) should be kept secret
- Configuration: Use environment variables for all sensitive configuration
- Data Access: Prefer a locked-down spreadsheet shared with a service account; public CSV remains a fallback without credentials
- API Security: In production, configure CORS middleware to allow only specific origins
- README.md (this file) - Quick start guide and overview
- SOFTWARE_DOCUMENTATION.html - Comprehensive system documentation with:
- Detailed system overview
- Architecture diagrams
- Complete workflow flowchart
- ML models detailed explanation
- Data flow and storage details
- API endpoints reference
- Performance characteristics
MIT License
Built with ❤️ using FastAPI, React, InstantDB, and Machine Learning
- User selects game → Auto-scrapes data from Google Sheets
- Data validation → Saves new results to InstantDB (skips duplicates)
- User generates predictions → System fetches historical data
- ML models train & predict → Six core models in parallel; Miro (LLM) may follow
- Predictions saved → All picks including Miro stored in InstantDB
- Accuracy calculated → Auto-matched with results when available
- DRL learning loop → Agent improves through feedback
- Results displayed → Real-time updates on frontend with statistics
Contributions are welcome! If you'd like to improve this project, fix bugs, or add new features, feel free to fork the repository, make your changes, and submit a pull request. Your efforts will help make this lottery prediction application even better!
If you found this project helpful or learned something new from it, you can support the development with just a cup of coffee ☕. It's always appreciated and keeps the ideas flowing!
For detailed flowchart visualization, see SOFTWARE_DOCUMENTATION.html