> [!IMPORTANT]
> 📋 Version Updates from v1.0:
> - Interactive Frontend Design: Features a responsive layout with Tailwind CSS, a side-overlay chatbot, and smooth form interactions.
> - Health Risk Prediction: Integrates a RandomForestClassifier model trained on the PIMA Indians Diabetes Dataset, logged via MLflow.
> - RAG Chatbot: Implements a local RAG system using Ollama (Llama 3.2 1B and nomic-embed-text) for health-related queries.
> - CORS Middleware: Adds FastAPI CORS support for seamless communication between the frontend (http://localhost:5173) and the backends (http://localhost:8000, http://localhost:8001).
> - CI/CD Integration: GitHub Actions automates model training, testing, and deployment for continuous improvement of ML models.
## Table of Contents

- Overview
- Technical Flow Chart
- Key Features
- Technology Stack
- Installation and Setup
- Usage
- Contributions
- License
- Contact
## Overview

The Health Risk Prediction App is an AI-powered web application designed to predict health risks (e.g., diabetes) based on user-submitted health metrics and provide health-related information via a RAG-powered chatbot.
🚀 Powered by Local AI and ML, this system integrates:
- 🤖 Machine Learning: RandomForestClassifier for health risk prediction, tracked with MLflow.
- 📚 RAG Chatbot: Local Ollama models (Llama 3.2 1B and nomic-embed-text) for answering health queries using ingested documents.
- 🌐 Responsive Frontend: Built with React and Tailwind CSS, featuring a clean UI and side-overlay chatbot.
- ⚡ Scalable Backend: FastAPI servers for health data management and RAG query processing, with PostgreSQL for data storage.
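To make the RAG flow concrete: the chatbot's retrieval step (embed the query, score the ingested chunks by similarity, pass the best matches to the LLM) can be sketched without Ollama at all. Below is a toy, self-contained illustration that uses term-frequency vectors in place of real nomic-embed-text embeddings; the function names are hypothetical and not the app's actual code.

```python
import math
import re

def tokens(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())

def embed(text: str, vocab: list[str]) -> list[float]:
    """Normalized term-frequency vector over a fixed vocabulary.
    A toy stand-in for a real embedding model such as nomic-embed-text."""
    words = tokens(text)
    counts = [float(words.count(w)) for w in vocab]
    norm = math.sqrt(sum(c * c for c in counts))
    return [c / norm for c in counts] if norm else counts

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Rank chunks by cosine similarity to the query and return the top k."""
    vocab = sorted({w for c in chunks for w in tokens(c)} | set(tokens(query)))
    q = embed(query, vocab)
    scored = sorted(chunks, key=lambda c: cosine(q, embed(c, vocab)), reverse=True)
    return scored[:k]

chunks = [
    "Diabetes risk rises with elevated fasting glucose.",
    "Regular exercise improves cardiovascular health.",
    "A balanced diet supports healthy blood pressure.",
]
print(retrieve("What raises diabetes risk?", chunks))
# → ['Diabetes risk rises with elevated fasting glucose.']
```

In the real app, the retrieved chunks are then fed to Llama 3.2 1B as context for the answer.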
Below are two screenshots of the Health Risk Prediction App.
🔹 👨‍💻 Machine Learning Integration: Train and deploy ML models with MLflow and FastAPI.
🔹 🔍 RAG Implementation: Build a local RAG system for health data queries.
🔹 ⚡ Modern Frontend Design: Create responsive UIs with React and Tailwind CSS.
🔹 🔒 Scalable Backend: Use FastAPI and SQLAlchemy for robust data handling.
📂 For learners: Refer to health-app-backend/app/main.py, backend/api.py, and frontend/src/App.jsx for detailed code comments explaining the ML and RAG workflows!
## Technical Flow Chart

The system collects health data via a React frontend, sends it to a FastAPI backend for storage and ML-based risk prediction, and uses a separate FastAPI server with Ollama to generate RAG-based chatbot responses from ingested health documents.
## Key Features

- 🤖 Health Risk Prediction: Predicts diabetes risk using a RandomForestClassifier trained on the PIMA Indians Diabetes Dataset, logged via MLflow.
- 📚 RAG-Powered Chatbot: Answers health-related queries using local Ollama models and ingested documents (e.g., PDFs).
- 💻 Responsive UI: React frontend with Tailwind CSS, featuring a health data form, prediction display, and side-overlay chatbot.
- ⚡ Continuous Model Improvement: CI/CD pipeline with GitHub Actions automates model training and evaluation.
- 🔄 Data Management: Stores health metrics in PostgreSQL via FastAPI and SQLAlchemy.
- 🌐 CORS Support: Ensures seamless frontend-backend communication across ports.
- 📜 Local AI Processing: Runs all AI models (ML and RAG) locally for privacy.
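Under the hood, the prediction feature follows a standard scikit-learn workflow. A toy sketch on synthetic data standing in for the PIMA dataset's eight features is shown below; the real pipeline trains on the actual dataset and logs the run to MLflow, both omitted here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for PIMA's 8 features (pregnancies, glucose, blood
# pressure, skin thickness, insulin, BMI, pedigree function, age).
rng = np.random.default_rng(0)
X = rng.random((400, 8))
y = (X[:, 1] + X[:, 5] > 1.0).astype(int)  # toy label: "glucose" + "BMI"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

accuracy = model.score(X_test, y_test)
proba = model.predict_proba(X_test)[:, 1]  # per-sample risk probability
```

The served endpoint returns `proba`-style risk scores rather than hard labels, which lets the frontend display a graded prediction.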
> [!NOTE]
> Upcoming features:
> - Support for additional health risk models (e.g., heart disease).
> - Enhanced RAG with larger-scale document ingestion.
> - User authentication for personalized data tracking.
> - Open to suggestions and contributions.
## Technology Stack

| Component | Technologies |
|---|---|
| 🔹 Backend Framework | FastAPI, SQLAlchemy |
| 🔹 Machine Learning | scikit-learn, MLflow |
| 🔹 RAG System | Ollama (Llama 3.2 1B, nomic-embed-text), LangChain |
| 🔹 Database | PostgreSQL |
| 🔹 Frontend | React, Tailwind CSS, react-router-dom |
| 🔹 Deployment | Vite (frontend), Uvicorn (backend), Docker |
Backend Dependencies (from health-app-backend/requirements.txt):

```
fastapi==0.115.0
uvicorn==0.30.6
sqlalchemy==2.0.35
mlflow==2.16.2
scikit-learn==1.5.2
pandas==2.2.3
ollama==0.4.8
langchain==0.3.3
psycopg2-binary==2.9.9
```

Full list in health-app-backend/requirements.txt.
Frontend Dependencies (from frontend/package.json):

```
react==18.2.0
react-dom==18.2.0
axios==1.7.2
tailwindcss==3.4.13
react-router-dom==6.26.2
```

Full list in frontend/package.json.
## Installation and Setup

### Docker Setup

Clone the repository:

```bash
git clone https://github.com/TyrelM10/healthApp.git
cd healthApp
```

Create health-app-backend/.env:

```
DATABASE_URL=postgresql://user:password@localhost:5432/health_db
MLFLOW_TRACKING_URI=http://localhost:5000
```

Create backend/.env:

```
OLLAMA_HOST=http://localhost:11434
```

Run Ollama in a separate terminal or as a service:

```bash
ollama serve
ollama pull llama3.2:1b
ollama pull nomic-embed-text
```

Verify:

```bash
ollama list
```

Ensure llama3.2:1b and nomic-embed-text are listed.
Build the images:

```bash
cd health-app-backend
docker build -t health-app-backend .
cd ../backend
docker build -t health-rag-backend .
cd ../frontend
docker build -t health-app-frontend .
```

Run the containers:

```bash
docker run -d --name health-app-backend -p 8001:8000 --env-file health-app-backend/.env health-app-backend
docker run -d --name health-rag-backend -p 8000:8000 --env-file backend/.env health-rag-backend
docker run -d --name health-app-frontend -p 5173:5173 health-app-frontend
```

- Health Backend: http://localhost:8001
- RAG Backend: http://localhost:8000
- Frontend: http://localhost:5173

Stop or restart the containers:

```bash
docker stop health-app-backend health-rag-backend health-app-frontend
docker start health-app-backend health-rag-backend health-app-frontend
```

View logs:

```bash
docker logs health-app-backend
docker logs health-rag-backend
docker logs health-app-frontend
```

Remove the containers:

```bash
docker rm health-app-backend health-rag-backend health-app-frontend
```

Check container health:

```bash
docker inspect --format='{{.State.Health.Status}}' health-app-backend
```

Check logs for errors:

```bash
docker logs health-app-backend
```

Ensure Ollama and PostgreSQL are running and the environment variables are correct.
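Note that `docker inspect` only reports a health status if the image defines a health check. A hedged example of what that might look like in a backend Dockerfile (this assumes a /health endpoint and curl inside the image, neither of which is confirmed here):

```dockerfile
# Poll the API every 30s; mark the container unhealthy after 3 failures.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -f http://localhost:8000/health || exit 1
```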
### Manual Setup

Prerequisites:

- Python 3.9+
- Node.js 18+, npm 9+
- Ollama for local LLM support
- PostgreSQL instance
- MLflow server
Clone the repository:

```bash
git clone https://github.com/TyrelM10/healthApp.git
cd healthApp
```

Health Backend:

```bash
cd health-app-backend
python -m venv venv
source venv/bin/activate   # Mac/Linux
venv\Scripts\activate      # Windows
pip install -r requirements.txt
```

Create health-app-backend/.env:

```
DATABASE_URL=postgresql://user:password@localhost:5432/health_db
MLFLOW_TRACKING_URI=http://localhost:5000
```

Start the MLflow server, train the model, and launch the API:

```bash
mlflow server --host 0.0.0.0 --port 5000
python app/ml/train.py
uvicorn app.main:app --port 8001 --reload
```

Health Backend available at: http://localhost:8001

RAG Backend:

```bash
cd ../backend
python -m venv venv
source venv/bin/activate   # Mac/Linux
venv\Scripts\activate      # Windows
pip install -r requirements.txt
```

Create backend/.env:

```
OLLAMA_HOST=http://localhost:11434
```

Launch the API:

```bash
uvicorn api:app --port 8000 --reload
```

RAG Backend available at: http://localhost:8000

Frontend:

```bash
cd ../frontend
npm install
npm run dev
```

Frontend available at: http://localhost:5173
Install Ollama (follow the instructions at ollama.ai), then start it and pull the models:

```bash
ollama serve
ollama pull llama3.2:1b
ollama pull nomic-embed-text
```

Verify:

```bash
ollama list
```

Ensure llama3.2:1b and nomic-embed-text are listed.
> [!NOTE]
> - The first run may take time as Ollama downloads the models.
> - Ensure Ollama, MLflow, and both backends are running before starting the frontend.
> - Check the browser console (F12) for frontend errors and the backend terminals for warnings.
## Usage

- Open http://localhost:5173 to access the app.
- Navigate to Home for health tips.
- Use Health Data to submit metrics (age, BMI, glucose, etc.).
- Visit Predictions to get risk predictions.
- Interact with the Chatbot to ask health-related questions.
- Ingest health PDFs via http://localhost:8000/ingest or `streamlit run backend/ingest_ui.py`.
## Contributions

Contributions are welcome! Check the issues tab for feature requests and improvements.

To contribute:

- Fork the repository.
- Create a feature branch (`git checkout -b feature/YourFeature`).
- Commit changes (`git commit -m 'Add YourFeature'`).
- Push to the branch (`git push origin feature/YourFeature`).
- Open a Pull Request.
## License

This project is licensed under the MIT License. See the LICENSE file for details.
## Contact

For questions or collaboration inquiries, reach out to Tyrel Menezes on:

🔗 LinkedIn: https://www.linkedin.com/in/tyrel-menezes
🔗 GitHub: https://github.com/TyrelM10



