dinraj910/pothole-intelligence-system

Folders and files

NameName
Last commit message
Last commit date

Latest commit

Β 

History

27 Commits
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 


📖 Overview

Road Damage AI is an advanced multi-model ensemble pipeline built for scalable road-quality inspection. It upgrades standard single-model pothole detection by running three specialized YOLOv8 models simultaneously.

By combining models trained on different viewpoints (dashcam, close-up, street-level) with a cascaded transfer-learning strategy, the application achieves high recall, then applies cross-model Non-Maximum Suppression (NMS) to track multi-model consensus and filter out false positives.

Users interact with the AI pipeline via a server-side-rendered image-upload dashboard or a live camera feed that runs inference every 2 seconds, with no client-side GPU required.


✨ Key Features

  • 🤖 3-Model YOLOv8 Ensemble: RDD2022 (Dashcam), Pothole-600 (Close-up), Kaggle (Street).
  • 🛡️ Consensus Tracking: Overlapping bounding boxes are merged via cross-model NMS (IoU=0.45). Detections flagged by 2+ models earn a high-confidence consensus star.
  • 📡 Live Camera Detection: Uses the MediaStream API to capture webcam frames entirely in the browser, passing base64 JPEG blobs to /api/live-frame for seamless JS-driven live tracking.
  • 📊 Damage Analytics: Calculates cumulative damage counts and real-time damage-area percentages.
  • 🎨 Premium Dark-Mode UI: Glassmorphism accents, live statistics counters, dynamic color-coded bounding boxes (🔴 Model A, 🟢 Model B, 🔵 Model C).
  • 🚀 Production-Ready: Optimized with gunicorn (1 worker) so all 3 models load only once into memory, tuned for Render's free tier.
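
On the server side, the live-camera loop reduces to decoding the posted base64 JPEG and running the ensemble. A minimal Flask sketch of the /api/live-frame handler (the JSON field names and the run_ensemble helper are illustrative assumptions, not the project's actual app.py):

```python
import base64

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/live-frame", methods=["POST"])
def live_frame():
    # The browser posts {"image": "data:image/jpeg;base64,..."} every ~2 s.
    payload = request.get_json()
    b64 = payload["image"].split(",", 1)[-1]  # strip the data-URL prefix
    jpeg_bytes = base64.b64decode(b64)
    detections = run_ensemble(jpeg_bytes)     # hypothetical ensemble call
    return jsonify({"detections": detections, "count": len(detections)})

def run_ensemble(jpeg_bytes):
    """Placeholder for the 3-model YOLOv8 inference + cross-model NMS step."""
    return []
```

Because inference happens server-side, the browser only needs a webcam and the MediaStream API; no client-side GPU is involved.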

πŸ—οΈ Architecture

graph TD
    A[Browser Client] -->|1. Image Upload /predict| B(Flask App Engine)
    A -->|2. Webcam Frame /live-frame| B
    
    subgraph Multi-Model Ensemble [Singleton Inference Engine]
        B --> C{Cross-Model Inference}
        C -->|conf=0.05| M1[Model A: RDD2022 ]
        C -->|conf=0.10| M2[Model B: Pothole-600 ]
        C -->|conf=0.05| M3[Model C: Kaggle ]
        
        M1 --> N[Cross-Model NMS IoU=0.45]
        M2 --> N
        M3 --> N
        N --> D[Consensus Scoring & Data Cleanup]
    end

    D --> E[Render cv2 Bounding Boxes]
    E --> F[Generate JSON & Analytics]
    F -->|Return HTML / JSON| A
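The cross-model NMS and consensus-scoring stage in the diagram can be sketched in plain Python. This is an illustrative box-merging routine under an assumed (x1, y1, x2, y2) box format, not the project's actual detector.py:

```python
def iou(a, b):
    """Intersection-over-Union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def cross_model_nms(detections, iou_thresh=0.45):
    """Merge detections from all models; each item is (box, conf, model_id).

    Instead of discarding a suppressed box outright, we record which model
    produced it, so overlapping boxes accumulate multi-model consensus.
    """
    kept = []
    for box, conf, model in sorted(detections, key=lambda d: -d[1]):
        for entry in kept:
            if iou(box, entry["box"]) >= iou_thresh:
                entry["models"].add(model)  # overlap: credit this model too
                break
        else:
            kept.append({"box": box, "conf": conf, "models": {model}})
    for entry in kept:
        entry["consensus"] = len(entry["models"]) >= 2  # 2+ models earn the star
    return kept
```

For example, near-identical boxes from Model A and Model B collapse into one detection flagged `consensus=True`, while a box seen by only one model survives NMS but without the consensus star.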

πŸ“ Project Structure

📦 Pothole-AI-System
├─ app/
│  ├─ app.py                  # Core Flask routing & JSON APIs
│  ├─ RoadDamageAI_Phase1/
│  │  └─ weights/             # 3 specialized YOLOv8 models
│  │     ├─ model_a_rdd2022.pt
│  │     ├─ model_b_pothole600.pt
│  │     └─ model_c_kaggle.pt
│  ├─ utils/
│  │  └─ detector.py          # Array/Disk inference & NMS logic
│  ├─ static/
│  │  ├─ js/live_cam.js       # Webcam stream & API sync
│  │  ├─ styles.css           # Global Dark Theme UI
│  │  ├─ uploads/             # Ephemeral image uploads
│  │  └─ results/             # Annotated output storage
│  └─ templates/
│     ├─ landing.html         # Hero page
│     ├─ dashboard.html       # Mode Selector (Live vs Upload)
│     ├─ index.html           # Upload Interface
│     └─ result.html          # Detailed tabular analytics
├─ notebook/
│  └─ Road_Damage_MultiModel_Pipeline_final.ipynb # Source training
├─ requirements.txt           # Dependencies
├─ Procfile                   # Gunicorn config for Render
└─ render.yaml                # Render Infrastructure-as-Code

📓 Training Notebooks

The deep-learning models powering this pipeline were trained and evaluated in the notebooks below, which contain the data visualization, the cascaded transfer-learning strategy, and metrics validation.

  • 🔗 Multi-Model Ensemble Training (⭐ Core): The final Phase 1 notebook. Contains the end-to-end cascaded training logic (COCO → Model A → Model B → Model C), the cross-model NMS PoC, and ensemble thresholding experiments.
  • 🔗 Initial Prototype Training: The initial baseline single-model YOLOv8 experiment on a basic pothole dataset, used to validate feasibility before upgrading to the 3-model paradigm.
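
The cascaded strategy means each stage is warm-started from the previous stage's best checkpoint (COCO → Model A → Model B → Model C). A minimal sketch of that schedule, with the training call injected so the cascade logic is visible on its own; the dataset YAML names are illustrative assumptions, and with Ultralytics `train_fn` would wrap `YOLO(init).train(data=...)`:

```python
def cascaded_train(train_fn, base_weights="yolov8n.pt"):
    """Run the COCO -> Model A -> Model B -> Model C cascade.

    `train_fn(init_weights, dataset_yaml)` fine-tunes a model initialized
    from `init_weights` on `dataset_yaml` and returns the path of the best
    checkpoint, which seeds the next stage.
    """
    init = base_weights  # stage 0: generic COCO-pretrained weights
    checkpoints = []
    for dataset in ("rdd2022.yaml", "pothole600.yaml", "kaggle_pothole.yaml"):
        init = train_fn(init, dataset)  # warm-start from the previous stage
        checkpoints.append(init)
    return checkpoints  # best checkpoints for Models A, B, C in order
```

The point of the cascade is that road-damage features learned on dashcam imagery transfer to close-up and street-level data, rather than each model starting from generic COCO weights.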

⚡ Quick Start

Prerequisites

  • Python 3.10+
  • gunicorn (for Unix environments)

Local Installation

  1. Clone and Setup Virtual Environment:

    git clone https://github.com/<your-username>/Pothole-AI-System.git
    cd Pothole-AI-System
    python -m venv .venv
    source .venv/bin/activate  # Windows: .venv\Scripts\activate
    pip install -r requirements.txt
  2. Ensure Weights Are Present: Store your 3 trained YOLOv8 .pt models in app/RoadDamageAI_Phase1/weights/.

  3. Run the Development Server:

    cd app
    python app.py
    # Visit http://127.0.0.1:5000 in your browser

🚀 Deployment (Render)

This project includes a Procfile and render.yaml for one-click deployment on Render.

Important note on Render's free tier: the application restricts gunicorn to --workers 1. This is intentional: keeping the three YOLO models in memory consumes ~66MB, and spinning up multiple workers on a 512MB-RAM free instance would cause out-of-memory (OOM) crashes.
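
The single-worker constraint works because the models are loaded once per process and shared by every request. A minimal sketch of that singleton pattern, with the loader injected for illustration (weight paths are from the project tree; in the app the loader would be `ultralytics.YOLO`):

```python
WEIGHTS = {
    "A": "RoadDamageAI_Phase1/weights/model_a_rdd2022.pt",
    "B": "RoadDamageAI_Phase1/weights/model_b_pothole600.pt",
    "C": "RoadDamageAI_Phase1/weights/model_c_kaggle.pt",
}

_models = None  # module-level cache: one ensemble per process

def get_models(loader):
    """Return the shared ensemble, loading the weights on first call only.

    `loader` maps a weight path to a model object (e.g. ultralytics.YOLO).
    With gunicorn --workers 1 there is a single process, so the three
    models occupy memory exactly once regardless of request volume.
    """
    global _models
    if _models is None:
        _models = {name: loader(path) for name, path in WEIGHTS.items()}
    return _models
```

A second worker would duplicate this cache in its own address space, which is exactly the memory blow-up the --workers 1 setting avoids.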

To deploy:

  1. Connect this repo to Render.
  2. The render.yaml blueprint will automatically detect the settings.
  3. Access your live app!

πŸ–ΌοΈ Screenshots / Demo


πŸ—ΊοΈ Roadmap (Phase 2 & 3)

  • Phase 2 (Severity Classification): Train an additional CNN to classify the detected bounding boxes by severity (Small / Medium / Severe).
  • Phase 3 (Location Intelligence): Implement GPS EXIF extraction for image uploads and the browser Geolocation API for the live camera to build dynamic pothole maps.
  • DB Integration: Migrate to PostgreSQL for maintaining historic detection logs.

🤝 Contributing

  1. Fork the repo
  2. Create a feature branch (git checkout -b feature/awesome)
  3. Commit changes (git commit -m 'Add awesome feature')
  4. Push to branch (git push origin feature/awesome)
  5. Open a Pull Request

📜 License & Acknowledgments

  • Distributed under the MIT License.
  • Built using Ultralytics YOLOv8.
  • Datasets utilized: RDD2022, Pothole-600, Kaggle Pothole Dataset.

About

🚀 AI-powered end-to-end system for detecting potholes using YOLOv8 and analyzing real-world road conditions from images. This project goes beyond basic object detection by building a practical, production-oriented pipeline that identifies potholes in diverse environments, handles domain shifts, and prepares for real-world deployment.
