45 changes: 45 additions & 0 deletions submissions/TheDavidProtocol_Equis/.gitignore
@@ -0,0 +1,45 @@
? # Python
Copilot AI Mar 28, 2026


The first line contains a stray ? (? # Python), which is not a valid comment and will be treated as an ignore pattern for the literal ? character. Replace it with a normal comment line (# Python) to avoid unintended ignore behavior.

Suggested change
? # Python
# Python

__pycache__/
*.py[cod]
*$py.class
*.so
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# Database
david_protocol.db
*.sqlite3

# Environment
.env
.venv
env/
venv/
ENV/

# OS
.DS_Store
Thumbs.db

# IDE
.vscode/
.idea/

# ML Models (Personal preference, but usually shared)
# backend/models/*.joblib
# backend/models/*.pkl
50 changes: 50 additions & 0 deletions submissions/TheDavidProtocol_Equis/DEPLOYMENT.md
@@ -0,0 +1,50 @@
# 🚢 David Protocol — Deployment & Hosting Guide

This project is built for high performance and serverless deployment. It is optimized for **Render**, **Railway**, or **Google Cloud Run**.

## 🏗️ 1. Prepare Your Environment
Before deploying, ensure your `backend/.env` is fully populated:

1. **Neon PostgreSQL**: Create a project on [Neon.tech](https://neon.tech/) and get the `DATABASE_URL`.
2. **Plaid Integration**: Get your `PLAID_CLIENT_ID` and `PLAID_SECRET` from the Plaid Dashboard.
3. **Gemini AI**: Get your `GEMINI_API_KEY` from [Google AI Studio](https://aistudio.google.com/).
Comment on lines +9 to +10
Copilot AI Mar 28, 2026


This doc includes what appear to be real secret values (Plaid secret/client id fragments and a full Gemini API key). Remove any real credentials immediately (rotate them if they were valid) and replace with placeholders (e.g., your_plaid_secret, your_gemini_api_key) to prevent credential leakage.

Suggested change
2. **Plaid Integration**: Get your `PLAID_CLIENT_ID` and `[redacted]` and `[redacted]` from the Plaid Dashboard.
3. **Gemini AI**: Get your `[redacted]` from [Google AI Studio](https://aistudio.google.com/).
2. **Plaid Integration**: Get your `PLAID_CLIENT_ID`, `your_plaid_secret`, and `your_plaid_webhook_key` from the Plaid Dashboard.
3. **Gemini AI**: Get your `your_gemini_api_key` from [Google AI Studio](https://aistudio.google.com/).

4. **Secret Keys**: Generate these using:
```bash
python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
```

---

## 🚀 2. Deploy to Render (Recommended)
This repo includes a `render.yaml` file for **One-Click Deployment**.

1. **Connect GitHub**: Connect your repository at `https://github.com/TheAyushTandon/David-Protocol.git` to Render.
2. **Auto-provision**: Render will detect `render.yaml` and automatically set up:
- **FastAPI Web Service**
- **PostgreSQL Database** (If you aren't using Neon)
3. **Environment Variables**: Add all keys from your `.env` to the Render Dashboard's "Environment" section.

---

## 🐳 3. Deploy via Docker
If you prefer a containerized deployment, use the included `backend/Dockerfile`:

1. **Build**:
```bash
docker build -t david-protocol-backend ./backend
```
2. **Run**:
```bash
docker run -p 8000:8000 --env-file ./backend/.env david-protocol-backend
```

---

## 📈 4. Post-Deployment
Once the backend is live at `https://your-app.onrender.com`:
1. Use the `/docs` endpoint to verify the API.
2. Ensure `PLAID_ENV` is set to `sandbox` for testing or `development` for real data.
3. **ML Check**: Verify that your `.joblib` models are correctly loaded by checking the server logs on startup.
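The ML check in step 3 can be sketched as a small startup helper. The model file names and directory come from the ML guide in this repo; the function itself, its logger name, and its return shape are illustrative assumptions, not the project's actual code.

```python
# Hypothetical startup check: confirm the .joblib models are present before
# serving traffic. Directory layout and logger name are assumptions.
import logging
from pathlib import Path

logger = logging.getLogger("david_protocol.startup")

MODEL_FILES = [
    "logistic_resilience_v1.joblib",
    "xgboost_resilience_v1.joblib",
]

def check_models(model_dir: str = "backend/models") -> dict:
    """Return a {filename: present} map; warn for anything missing."""
    status = {}
    for name in MODEL_FILES:
        path = Path(model_dir) / name
        status[name] = path.exists()
        if not status[name]:
            logger.warning("Model %s not found; rule-based fallback will be used", path)
    return status
```

Running this at startup (or just reading its warnings in the server logs) gives a quick signal that the scoring service will use the ML path rather than the fallback.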

---
👉 **[Go back to README](./README.MD)**
Binary file added submissions/TheDavidProtocol_Equis/Dashboard.jpeg
Binary file added submissions/TheDavidProtocol_Equis/EQUIS.pdf
Binary file not shown.
105 changes: 105 additions & 0 deletions submissions/TheDavidProtocol_Equis/FRONTEND_SYSTEM_DESIGN.md
@@ -0,0 +1,105 @@
# 🎨 David Protocol — Frontend System Design & API Mapping

This document outlines the end-to-end user journey, UI architecture, and exact API mapping required to build the David Protocol frontend.

---

## 🗺️ User Journey Flowchart

```mermaid
graph TD
A[Landing Page] -->|Click Get Started| B[Register/Login]
B -->|Success| C[Onboarding: Link Bank]
C -->|Plaid Success| D[Dashboard: Processing]
D -->|Auto-Sync| E[Dashboard: Financial Score]
E -->|Click Details| F[Deep Dive: Explainability & Metrics]
```

---

## 📄 Page-by-Page Input/Output Cycle

### 1. Landing Page (The "Wow" Factor)
- **Role**: High-conversion professional landing page.
- **Input**: None (Static).
- **CTA**: "Check My Resilience" button → Redirects to `/register`.

### 2. Auth Page (Login/Register)
- **Input**: `email`, `password`.
- **API Mapping**: `POST /auth/register` (already implemented).
- **Logic**: Store the returned `user_id` in LocalStorage or a Global State.
- **Output**: Redirect to `/link-bank`.

### 3. Onboarding Page (Link Bank)
- **Role**: Bridge to Plaid.
- **Cycle**:
1. **Frontend** calls `POST /plaid/create_link_token?user_id={id}`.
2. **Backend** returns `link_token`.
3. **Frontend** initializes Plaid Link SDK with the token.
4. **Plaid** returns `public_token` on success.
5. **Frontend** calls `POST /plaid/exchange_public_token` with `{public_token, user_id}`.
- **Output**: Redirect to `/dashboard`.
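The five-step cycle above can be exercised from a script. This is a hedged sketch: `BASE_URL` and the payload shapes are assumptions based on the endpoint mapping in this document, steps 3–4 (the Plaid Link SDK hand-off) happen in the browser and are stubbed out, and the HTTP client is injected via `post` (pass `requests.post` or any compatible callable to hit a real backend).

```python
# Illustrative walk-through of the onboarding cycle.
BASE_URL = "http://localhost:8000"  # assumed local dev address

def link_bank(user_id: int, public_token: str, post) -> bool:
    # Steps 1-2: the frontend asks the backend for a Plaid link_token
    resp = post(f"{BASE_URL}/plaid/create_link_token", params={"user_id": user_id})
    link_token = resp.json()["link_token"]  # handed to the Plaid Link SDK (steps 3-4)
    # Step 5: exchange the public_token that Plaid Link returned
    resp = post(
        f"{BASE_URL}/plaid/exchange_public_token",
        json={"public_token": public_token, "user_id": user_id},
    )
    return resp.json().get("status") == "success"
```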

### 4. Dashboard (The Core Product)
- **Initial State**: Show a "Syncing your financial data..." loader.
- **Action**: Call `POST /scoring/process/{user_id}` on mount.
- **Background Loop**:
- While processing, show micro-animations.
- On success, the API returns the full `score`, `decision`, and `explanation`.
- **UI Components**:
- **Radial Gauge**: Visualizing the score (0-1000).
- **Metric Cards**: Income, Rent Ratio, Savings Rate.
- **Explainability Panel**: Render the Markdown `explanation` string from the backend.

---

## 🔌 API Documentation for Frontend

| Feature | Endpoint | Method | Params (JSON) | Success Response (200) |
| :--- | :--- | :--- | :--- | :--- |
| **Signup** | `/auth/register` | `POST` | `email`, `password` | `{id: 1, email: "..."}` |
| **Plaid Link** | `/plaid/create_link_token` | `POST` | `user_id` (Query) | `{link_token: "..."}` |
| **Plaid Token**| `/plaid/exchange_public_token` | `POST` | `public_token`, `user_id` | `{status: "success"}` |
| **Get Score** | `/scoring/process/{id}` | `POST` | None | `{score: 850, decision: "APPROVE", ...}` |
| **History** | `/scoring/status/{id}` | `GET` | None | `{score: 850, calculated_at: "..."}` |

---

## 🛡️ Sequence Diagram: The Scoring Cycle

```mermaid
sequenceDiagram
participant User
participant Frontend
participant Backend
participant Plaid

User->>Frontend: Clicks "Analyze Account"
Frontend->>Backend: POST /plaid/create_link_token
Backend-->>Frontend: link_token
Frontend->>Plaid: Open Plaid Link UI
Plaid-->>User: Auth with Bank
User-->>Plaid: Success
Plaid-->>Frontend: public_token
Frontend->>Backend: POST /plaid/exchange_public_token
Backend-->>Frontend: { status: "success" }

Note over Frontend,Backend: Now we trigger the AI Pipeline

Frontend->>Backend: POST /scoring/process/{user_id}
Backend->>Plaid: Fetch Transactions (90 days)
Plaid-->>Backend: Raw Data
Backend->>Backend: NLP Categorization + ML Scoring
Backend-->>Frontend: { score: 780, explanation: "..." }
Frontend-->>User: Display Radial Gauge & Insights
```

---

## 🎨 UI/UX Theme (Premium Aesthetic)
- **Colors**: Deep Navy (`#0A192F`), Stellar Green (`#64FFDA`), Slate Gray (`#8892B0`) for text.
- **Typography**: Inter (Google Fonts) for readability, Outfit for headers.
- **Effects**: Glassmorphism on Dashboard cards, Subtle gradients behind the Gauge.

---
👉 **[Go back to Backend Docs](./david_protocol_backend_architecture.md)**
76 changes: 76 additions & 0 deletions submissions/TheDavidProtocol_Equis/ML_ARCHITECTURE.md
@@ -0,0 +1,76 @@
# 🧠 David Protocol — ML Architecture & Integration Guide

Welcome to the ML side of the David Protocol! This document explains how the backend consumes machine learning models and how you can integrate your trained models into the pipeline.

## 🔷 System Overview

The David Protocol generates a **Financial Resilience Score** (0-1000). The backend handles data ingestion (Plaid) and categorization (NLP), then hands off a set of aggregated features to your models to get a base score.

## 📊 Data & Feature Pipeline

Before the model is called, the raw transaction data goes through two layers:

1. **NLP Categorization**: (`backend/services/nlp_service.py`)
Translates raw transaction names into standard categories: `INCOME`, `RENT`, `UTILITIES`, `FOOD`, `SUBSCRIPTION`, `SHOPPING`, `DEBT`, `SAVINGS`, `OTHER`.
2. **Aggregation**: (`backend/services/scoring_service.py`)
Groups transactions over the last 90 days into features.
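The two layers can be sketched end to end. The keyword table here is a made-up stand-in for whatever `nlp_service.py` actually does; only the category names, the 90-day aggregation contract, and the feature columns come from this document.

```python
import pandas as pd

# Hypothetical keyword map; the real nlp_service.py may categorize differently.
KEYWORDS = {
    "payroll": "INCOME",
    "salary": "INCOME",
    "rent": "RENT",
    "netflix": "SUBSCRIPTION",
    "savings": "SAVINGS",
}

def categorize(name: str) -> str:
    lowered = name.lower()
    for keyword, category in KEYWORDS.items():
        if keyword in lowered:
            return category
    return "OTHER"

def aggregate(transactions: pd.DataFrame) -> pd.DataFrame:
    """Collapse 90 days of categorized transactions into the feature frame."""
    txns = transactions.copy()
    txns["category"] = txns["name"].map(categorize)
    by_cat = txns.groupby("category")["amount"].sum()
    return pd.DataFrame([{
        "income": float(by_cat.get("INCOME", 0.0)),
        "rent": abs(float(by_cat.get("RENT", 0.0))),
        "savings": abs(float(by_cat.get("SAVINGS", 0.0))),
        "credit_utilization": 0.3,  # placeholder, matching the feature table
    }])
```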

### 📥 Model Input (Features)
Your model must accept a `pandas.DataFrame` with the following columns:

| Feature | Description |
| :--- | :--- |
| `income` | Floating point sum of all `INCOME` transactions. |
| `rent` | Floating point absolute sum of all `RENT` transactions. |
| `savings` | Floating point absolute sum of all `SAVINGS` transactions. |
| `credit_utilization` | Float placeholder (currently defaulting to 0.3). |

### 📤 Model Output
- **Type**: Probability of "High Resilience".
- **Range**: `0.0` to `1.0`.
- **Note**: The backend multiplies this by 1000 to get the base `ml_score_base`.
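The output contract is small enough to state exactly. The ×1000 scaling is from this document; the defensive clamp is an added assumption, not confirmed project behavior.

```python
def to_base_score(probability: float) -> int:
    """Map the model's P(high resilience) in [0, 1] onto the 0-1000 base score."""
    clamped = min(max(probability, 0.0), 1.0)  # defensive clamp (assumption)
    return round(clamped * 1000)
```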

---

## 🛠️ How to Integrate Your Models

The backend is built to load models automatically from the `backend/models/` folder.

### 1. Save Your Models
Use `joblib` to serialize your models. Name them exactly as follows:
- **Logistic Regression**: `logistic_resilience_v1.joblib`
- **XGBoost**: `xgboost_resilience_v1.joblib`

**Example Python code:**
```python
import joblib
joblib.dump(your_model, 'backend/models/xgboost_resilience_v1.joblib')
```

### 2. File Location
Place the files in:
`David-Protocol/backend/models/`

### 3. Dependencies
The backend already includes the following in `requirements.txt`:
- `scikit-learn`
- `xgboost`
- `pandas`
- `joblib`

---

## 🏎️ Deployment & Workflow

1. **Local Training**: Train your models using the `backend/scripts/train_models.py` as a template for data generation.
2. **Refine Scripts**: You can modify `backend/scripts/train_models.py` to use your actual datasets.
3. **Push to GitHub**: Once you push your `.joblib` files to the `backend/models/` folder, the next deployment (on Render/Railway) will automatically pick them up and load them into memory.

## ⚠️ Robustness & Fail-safes
The `ScoringService` has a **Hybrid Logic**:
- If your ML model is found, it uses it for the base score.
- If the model is missing (or fails), it falls back to a **Rule-based system** to ensure the website stays functional.
- It then applies "common sense" rules (like penalties for >40% rent ratio) on top of the ML score to ensure stability.
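A minimal sketch of that hybrid flow, assuming a scikit-learn-style model with `predict_proba` and an invented rule-based fallback. The 40% rent-ratio penalty is from this document; the penalty size, fallback formula, and feature ordering are illustrative assumptions, not the real `ScoringService`.

```python
def rule_based_score(features: dict) -> float:
    """Invented stand-in for the rule-based fallback: reward savings rate."""
    income = features.get("income", 0.0)
    if income <= 0:
        return 300.0
    savings_rate = features.get("savings", 0.0) / income
    return 500.0 + min(savings_rate, 0.5) * 800.0

def hybrid_score(features: dict, model=None) -> int:
    # Base score: ML model if available, rule-based fallback otherwise
    if model is not None:
        try:
            row = [[features["income"], features["rent"],
                    features["savings"], features["credit_utilization"]]]
            base = model.predict_proba(row)[0][1] * 1000
        except Exception:
            base = rule_based_score(features)  # model failed mid-call
    else:
        base = rule_based_score(features)  # model file missing

    # "Common sense" layer: penalize rent above 40% of income
    income = features.get("income", 0.0)
    if income > 0 and features.get("rent", 0.0) / income > 0.40:
        base -= 100
    return max(0, min(1000, round(base)))
```

Whatever the model does, the final score stays inside 0-1000 and the endpoint keeps responding, which is the stability property this section describes.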

---
**Happy Modeling! 🚀**
67 changes: 67 additions & 0 deletions submissions/TheDavidProtocol_Equis/README.MD
@@ -0,0 +1,67 @@
# Equis

## 👥 Team Name
The David Protocol

## 🧑‍💻 Team Members
| Name | Role | GitHub |
|------|------|--------|
| Mrigank Bhatnagar | System Design | @Awesome06 |
| Ayush Tandon | Frontend | @TheAyushTandon |
| Shivam Rai | ML | @Shivam-565 |


## 💡 Problem Statement
Traditional credit scoring is slow, backward-looking, and often penalizes users for lack of history rather than lack of discipline. **The David Protocol** solves this by providing a real-time, high-fidelity alternative:
- **Data Volatility**: Normalizes messy transaction histories into monthly averages for realistic insights.
- **The "Invisible" Wealth Gap**: Recognizes financial resilience in high-income, high-churn profiles that traditional banks might flag as "risky."
- **Generic Advice**: Replaces static financial tips with data-driven, Gemini-powered diagnostics that reference a user's *actual* top spending categories.

## 🛠️ Tech Stack
- **Frontend**: Next.js 15 (App Router), Tailwind CSS, Framer Motion (for premium animations), Lucide React.
Copilot AI Mar 28, 2026


Repo README states the frontend uses Next.js 15, but frontend/package.json pins next to 16.2.1. Update the README to match the actual pinned version (or change the dependency pin to match the documented version) to avoid setup and debugging inconsistencies.

- **Backend**: FastAPI (Python), SQLite (Local Development), SQLAlchemy (ORM).
- **AI/ML**: Gemini 1.5 Flash (Stability Indexing & Financial Categorization), Custom 4-Pillar Scoring Formula.
- **Data Handling**: CSV-based ingestion mapping (SETU-compatible schema).

## 🔗 Links
- **Presentation (PPT/PDF):** [Presentation](./EQUIS.pdf)

## 📸 Screenshots
![Dashboard](./Dashboard.jpeg)
*Figure 1: The Live Pulse Dashboard featuring the 0-1000 Resilience Meter and Gemini-powered Strategy.*

## 🚀 How to Run Locally

### 1. Prerequisites
- Python 3.10+
- Node.js 18+
- [Gemini API Key](https://aistudio.google.com/app/apikey)

### 2. Backend Setup
```bash
cd backend
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Create .env file with your GEMINI_API_KEY
echo "GEMINI_API_KEY=your_key_here" > .env
# Start the server
uvicorn main:app --reload --port 8000
```

### 3. Frontend Setup
```bash
cd frontend
# Install dependencies
npm install
# Start the development server
npm run dev
```

### 4. Test Accounts
Use the following phone numbers to test specific financial profiles:
- **9109460397**: Elite Profile (Score: 950-1000, A+ Grade)
- **8439655313**: Salaried-Rich (Score: 850-920, A Grade)
- **8178810191**: Regular Profile (Score: 400-600, B/C Grade)
22 changes: 22 additions & 0 deletions submissions/TheDavidProtocol_Equis/backend/.env.example
@@ -0,0 +1,22 @@
# David Protocol - Environment Configuration

# Plaid API Keys (Get from dashboard.plaid.com)
PLAID_CLIENT_ID=your_client_id
PLAID_SECRET=your_secret
PLAID_ENV=sandbox # sandbox, development, or production

# Gemini API Key (Get from aistudio.google.com)
GEMINI_API_KEY=your_gemini_api_key

# Database Configuration
# Local SQLite: sqlite:///./david_protocol.db
# Cloud PostgreSQL: postgresql://user:password@host:port/dbname
DATABASE_URL=postgresql://user:password@host:port/dbname

# Encryption Key (Generated with 'cryptography' Fernet)
ENCRYPTION_KEY=your_encryption_key_here

# App Secret Key (For JWT)
SECRET_KEY=your_secret_key_here
ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=30
25 changes: 25 additions & 0 deletions submissions/TheDavidProtocol_Equis/backend/Dockerfile
@@ -0,0 +1,25 @@
# Use official Python image
FROM python:3.10-slim

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Set work directory
WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
libpq-dev \
&& rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy project
COPY . .

# Run the project with Uvicorn
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
Empty file.