A modern campus information chatbot that answers student questions about admissions, academics, departments, and facilities. It uses a lightweight retrieval layer over a curated knowledge base, a FastAPI + Groq LLM backend, and a clean React (Vite) chat UI.
Features:

- Instant, friendly answers: Student-first responses with concise, professional formatting.
- RAG-style grounding: Retrieves relevant snippets from `backend/college_data.txt` and injects them as context (reduces hallucinations for institution-specific info).
- Conversation memory: Sends prior turns (history) to maintain context across a chat session.
- Production-ready API surface: Clean request/response schemas and a `/health` endpoint.
- Modern UI: Polished chat experience with quick actions and Markdown rendering.
Tech stack:

- Frontend: React + Vite, `react-markdown`
- Backend: FastAPI, Pydantic, Uvicorn
- LLM provider: Groq (`llama-3.1-8b-instant`)
- Config: `python-dotenv` (`.env` support)
Project structure:

```
frontend/   # Vite + React UI
backend/    # FastAPI API + retrieval over college_data.txt
```
Backend setup (prereqs: Python 3.10+ recommended):

```
cd backend
python -m venv .venv
.\.venv\Scripts\activate
pip install -r requirements.txt
copy .env.example .env
```

Edit `backend/.env` and set:

```
GROQ_API_KEY=your_groq_api_key_here
```

Run the API:

```
uvicorn main:app --reload --host 127.0.0.1 --port 8000
```

Health check:

```
GET http://127.0.0.1:8000/health
```
Frontend setup (prereqs: Node.js 18+ recommended):

```
cd frontend
npm install
npm run dev
```

Open the app from the Vite dev server URL (shown in your terminal).
POST /chat

Request body:

```json
{
  "message": "How do I apply for MBA?",
  "history": [
    { "role": "user", "content": "Hi" },
    { "role": "assistant", "content": "Hello! How can I help?" }
  ]
}
```

Response:

```json
{
  "reply": "...assistant response...",
  "context_used": ["...retrieved snippet 1...", "...retrieved snippet 2..."]
}
```

GET /health

Returns status plus how many knowledge chunks were loaded from `college_data.txt`.
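For a quick smoke test against a locally running backend, a minimal client call could look like this (using Python's `requests` library; the payload shape matches the schema above):

```python
import requests

# Assumes the backend is running locally (see "Run the API" above).
API_URL = "http://127.0.0.1:8000/chat"

payload = {
    "message": "How do I apply for MBA?",
    "history": [
        {"role": "user", "content": "Hi"},
        {"role": "assistant", "content": "Hello! How can I help?"},
    ],
}

resp = requests.post(API_URL, json=payload, timeout=30)
resp.raise_for_status()
data = resp.json()

print(data["reply"])         # assistant's answer
print(data["context_used"])  # snippets retrieved from college_data.txt
```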
Notes:

- The backend requires `GROQ_API_KEY` at startup; it fails fast if the key is missing (by design).
- CORS is currently open (`allow_origins=["*"]`) for development; restrict it for production. A sketch covering both of these startup concerns appears after this list.
- Frontend API URL: the frontend currently calls a hosted backend. `frontend/src/api/chat.js` uses `https://helpora-ai.onrender.com/chat`; for local backend development, point it to `http://127.0.0.1:8000/chat`.
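A minimal sketch of how the fail-fast key check and a production-safe CORS setup could look in the backend's startup code. The `ALLOWED_ORIGINS` environment variable is an illustrative addition, not part of the current code:

```python
import os

from dotenv import load_dotenv
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

load_dotenv()

# Fail fast: refuse to start without the Groq credentials (matches the note above).
GROQ_API_KEY = os.getenv("GROQ_API_KEY")
if not GROQ_API_KEY:
    raise RuntimeError("GROQ_API_KEY is not set; add it to backend/.env")

app = FastAPI()

# Hypothetical ALLOWED_ORIGINS variable: a comma-separated list for production,
# falling back to the current wide-open development default.
origins = os.getenv("ALLOWED_ORIGINS", "*").split(",")

app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_methods=["*"],
    allow_headers=["*"],
)
```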
How retrieval works:

- `backend/college_data.txt` is split into overlapping chunks at startup.
- A simple keyword score selects the most relevant chunks for the user query.
- The retrieved context is injected into the system prompt before calling the LLM (see the sketch below).
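A minimal sketch of that pipeline. Function names, chunk sizes, and the scoring rule are illustrative, not the exact backend implementation:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    """Split the knowledge base into overlapping character chunks."""
    step = size - overlap
    return [text[start:start + size] for start in range(0, len(text), step)]

def score(chunk: str, query: str) -> int:
    """Count how many query keywords appear in the chunk."""
    chunk_words = set(chunk.lower().split())
    return sum(1 for word in query.lower().split() if word in chunk_words)

def retrieve(chunks: list[str], query: str, k: int = 2) -> list[str]:
    """Return the k highest-scoring chunks for the query."""
    return sorted(chunks, key=lambda c: score(c, query), reverse=True)[:k]

# At startup: load and chunk the knowledge base once.
with open("college_data.txt", encoding="utf-8") as f:
    chunks = chunk_text(f.read())

# Per request: retrieved chunks become the context in the system prompt.
context = "\n\n".join(retrieve(chunks, "How do I apply for MBA?"))
system_prompt = f"Answer using this campus information:\n{context}"
```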
Roadmap:

- Replace keyword retrieval with embeddings + vector search (FAISS / pgvector); a rough sketch follows this list.
- Add citations (link answers to specific sections in `college_data.txt`).
- Add an environment-based API URL in the frontend (`VITE_API_URL`).
- Rate limiting + request logging.
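A rough sketch of what the FAISS upgrade could look like, assuming `sentence-transformers` for embeddings; the model choice and function names are illustrative:

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative model choice; any sentence-embedding model would work.
model = SentenceTransformer("all-MiniLM-L6-v2")

def build_index(chunks: list[str]) -> faiss.IndexFlatIP:
    """Embed all chunks once at startup and index them for similarity search."""
    vectors = model.encode(chunks, normalize_embeddings=True)
    index = faiss.IndexFlatIP(vectors.shape[1])  # inner product == cosine on unit vectors
    index.add(np.asarray(vectors, dtype="float32"))
    return index

def retrieve(index: faiss.IndexFlatIP, chunks: list[str], query: str, k: int = 2) -> list[str]:
    """Return the k chunks whose embeddings are closest to the query."""
    q = model.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [chunks[i] for i in ids[0]]
```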
License: MIT (or update this section if you want a different license).