Helpora: Academic AI Assistant (MES AIMAT)

A modern campus information chatbot that answers student questions about admissions, academics, departments, and facilities. It combines a lightweight retrieval layer over a curated knowledge base, a FastAPI backend calling the Groq LLM API, and a clean React (Vite) chat UI.

Why this project is cool

  • Instant, friendly answers: Student-first responses with concise, professional formatting.
  • RAG-style grounding: Retrieves relevant snippets from backend/college_data.txt and injects them as context (reduces hallucinations for institution-specific info).
  • Conversation memory: Sends prior turns (history) to maintain context across a chat session.
  • Production-ready API surface: Clean request/response schemas and a /health endpoint.
  • Modern UI: Polished chat experience with quick actions and Markdown rendering.

Tech stack

  • Frontend: React + Vite, react-markdown
  • Backend: FastAPI, Pydantic, Uvicorn
  • LLM provider: Groq (llama-3.1-8b-instant)
  • Config: python-dotenv (.env support)

Repo structure

frontend/   # Vite + React UI
backend/    # FastAPI API + retrieval over college_data.txt

Quickstart (local development)

1) Backend (FastAPI)

Prereqs: Python 3.10+ recommended.

cd backend
python -m venv .venv
.\.venv\Scripts\activate        # Windows (macOS/Linux: source .venv/bin/activate)
pip install -r requirements.txt

copy .env.example .env          # Windows (macOS/Linux: cp .env.example .env)

Edit backend/.env and set:

GROQ_API_KEY=your_groq_api_key_here

Run the API:

uvicorn main:app --reload --host 127.0.0.1 --port 8000

Health check:

  • GET http://127.0.0.1:8000/health

2) Frontend (React + Vite)

Prereqs: Node.js 18+ recommended.

cd frontend
npm install
npm run dev

Open the app from the Vite dev server URL (shown in your terminal).

API

POST /chat

Request body:

{
  "message": "How do I apply for MBA?",
  "history": [
    { "role": "user", "content": "Hi" },
    { "role": "assistant", "content": "Hello! How can I help?" }
  ]
}

Response:

{
  "reply": "…assistant response…",
  "context_used": ["…retrieved snippet 1…", "…retrieved snippet 2…"]
}
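For local testing, the request body above can be sent with a short standard-library script (this assumes the backend from the Quickstart is running at http://127.0.0.1:8000; it is not part of the repo):

```python
import json
from urllib import request

BASE_URL = "http://127.0.0.1:8000"  # local dev server from the Quickstart

def build_payload(message, history):
    """Assemble the /chat request body in the shape documented above."""
    return {"message": message, "history": history}

def post_chat(message, history, base_url=BASE_URL):
    """POST to /chat and return the parsed JSON response."""
    body = json.dumps(build_payload(message, history)).encode()
    req = request.Request(
        f"{base_url}/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = build_payload(
    "How do I apply for MBA?",
    [{"role": "user", "content": "Hi"},
     {"role": "assistant", "content": "Hello! How can I help?"}],
)
```

The same helper works against the hosted backend by passing its base URL instead.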

GET /health

Returns the service status and the number of knowledge chunks loaded from college_data.txt.

Configuration notes

  • Backend requires GROQ_API_KEY at startup (it fails fast if the key is missing, by design).
  • CORS is currently open (allow_origins=["*"]) for development; restrict this for production.
  • Frontend API URL: the frontend currently calls a hosted backend:
    • frontend/src/api/chat.js uses https://helpora-ai.onrender.com/chat
    • For local backend development, point it to http://127.0.0.1:8000/chat
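As a sketch of the production CORS change, the middleware can be restricted to a known origin. The origin below is a placeholder, not the project's real frontend URL:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Replace the development wildcard with an explicit origin list.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://your-frontend.example.com"],  # hypothetical origin
    allow_methods=["GET", "POST"],
    allow_headers=["*"],
)
```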

How the "RAG" works (high level)

  1. backend/college_data.txt is split into overlapping chunks at startup.
  2. A simple keyword scoring retrieves the most relevant chunks for the user query.
  3. Retrieved context is injected into the system prompt before calling the LLM.
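The three steps above can be sketched in plain Python. Chunk size, overlap, and the scoring rule here are illustrative guesses, not the repo's exact values:

```python
import re

def chunk_text(text, size=500, overlap=100):
    """Step 1: split text into overlapping character chunks."""
    step = size - overlap
    return [text[start:start + size]
            for start in range(0, max(len(text) - overlap, 1), step)]

def score(chunk, query):
    """Step 2 helper: count chunk words that appear in the query."""
    query_words = set(re.findall(r"\w+", query.lower()))
    return sum(1 for w in re.findall(r"\w+", chunk.lower()) if w in query_words)

def retrieve(chunks, query, k=2):
    """Step 2: return the top-k highest-scoring chunks as LLM context."""
    return sorted(chunks, key=lambda c: score(c, query), reverse=True)[:k]
```

In step 3, the retrieved chunks would be concatenated into the system prompt before the Groq API call.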

Roadmap ideas

  • Replace keyword retrieval with embeddings + vector search (FAISS / pgvector)
  • Add citations (link answers to specific sections in college_data.txt)
  • Add environment-based API URL in the frontend (VITE_API_URL)
  • Rate limiting + request logging

License

MIT (or update this section if you want a different license).


About

AI-powered chatbot designed for MES College Aluva to assist students with queries, information, and campus-related support. Built as a final MCA project.
