Upload any document. Ask anything. Get answers with proof.
SideQuest is a full-stack AI application that allows users to upload large documents and have an intelligent conversation with them. Every response is grounded strictly in the uploaded document and comes with precise citations.
SideQuest uses a Retrieval-Augmented Generation (RAG) pipeline:
- Ingestion: Documents are parsed, chunked, and embedded into a Vector Database.
- Retrieval: User queries are embedded to find the most semantically relevant document chunks.
- Generation: The top chunks are passed to an open-source LLM (meta-llama/Llama-3.1-8B-Instruct via Hugging Face) to generate grounded answers with citations.
- Summarization: Queries are dynamically summarized into concise conversation titles.
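The chunking step of the ingestion flow can be sketched as follows. This is a minimal illustration, not SideQuest's actual implementation; the chunk size and overlap values are assumptions chosen for the example:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so context isn't lost at boundaries.

    chunk_size and overlap are illustrative defaults, not the project's
    real settings.
    """
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

doc = "word " * 300  # stand-in for parsed document text (1500 characters)
pieces = chunk_text(doc)
print(len(pieces))  # 4 chunks; each overlaps the next by 50 characters
```

The overlap ensures a sentence split across a chunk boundary still appears whole in at least one chunk, which improves retrieval recall.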
┌─────────────────────────────────────────────────────────────┐
│ NEXT.JS FRONTEND │
│ │
│ [Document Upload] [Chat Workspace] │
│ │ │ │
└──────────┼───────────────────────────────┼──────────────────┘
POST /upload POST /query
│ │
▼ ▼
┌─────────────────────────────────────────────────────────────┐
│ FASTAPI BACKEND │
│ │
│ ┌───────────────┐ ┌─────────────────────┐ │
│ │ Ingestion │ │ Retrieval Flow │ │
│ │ │ │ │ │
│ │ 1. Parse File │ │ 1. Embed Question │ │
│ │ 2. Chunk text │ │ 2. Find top vectors │ │
│ │ 3. Embed │ │ 3. Build Prompt │ │
│ │ 4. Store │ │ 4. Query LLM │ │
│ └──────┬────────┘ └─────────┬───────────┘ │
└──────────┼─────────────────────────────────┼────────────────┘
│ │
▼ ▼
┌─────────────────────┐ ┌───────────────────────────┐
│ ChromaDB │◄──────────│ Hugging Face │
│ (Vector Storage) │ │ (Llama 3.1 8B Model) │
└─────────────────────┘ └───────────────────────────┘
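The "Build Prompt" step of the retrieval flow can be sketched like this. The helper name and the instruction wording are hypothetical; they illustrate the grounding-with-citations idea rather than reproduce SideQuest's actual prompt:

```python
def build_prompt(question: str, chunks: list[tuple[str, str]]) -> str:
    """Assemble a grounded prompt from retrieved chunks.

    `chunks` holds (chunk_id, text) pairs from the vector search; the
    instruction text below is an illustrative assumption.
    """
    context = "\n\n".join(f"[{cid}] {text}" for cid, text in chunks)
    return (
        "Answer strictly from the context below. "
        "Cite chunk ids like [chunk-1] after each claim. "
        "If the answer is not in the context, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_prompt(
    "What is RAG?",
    [("chunk-1", "RAG retrieves relevant text before generation."),
     ("chunk-2", "Responses cite the retrieved chunks.")],
)
print(prompt)
```

Tagging each chunk with an id in the context is what lets the model emit citations that map back to exact document locations.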
- Frontend: Next.js, React, CSS Modules (Outfit typeface via Google Fonts)
- Backend: FastAPI, Python
- AI Models: Hugging Face Hub (Llama 3.1 8B Instruct)
- Vector Database: ChromaDB
Create a .env file in your backend directory:
# Hugging Face Configuration
HUGGINGFACEHUB_API_TOKEN=hf_...
LLM_MODEL=meta-llama/Llama-3.1-8B-Instruct
# Frontend Connection
FRONTEND_URL=http://localhost:3000

- Python 3.11+
- Node.js 18+
- An active Hugging Face API Token (Free tier works!)
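It can help to fail fast if the .env values above are missing before the backend starts. A minimal sketch using only the standard library; the variable names come from the .env example, but this check is not part of the project itself:

```python
import os

REQUIRED = ("HUGGINGFACEHUB_API_TOKEN", "LLM_MODEL", "FRONTEND_URL")

def missing_env(env=os.environ) -> list[str]:
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

# Example with a partial configuration:
partial = {"LLM_MODEL": "meta-llama/Llama-3.1-8B-Instruct"}
print(missing_env(partial))  # the token and frontend URL are still missing
```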
cd backend
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
# Ensure you configure your .env file
uvicorn main:app --reload --port 8000

cd frontend
npm install
npm run dev

Navigate to http://localhost:3000 in your browser.
To begin, navigate to the Library tab via the sidebar and upload an informative document to use as context!
- Google OAuth Integration & Account Authentication
- OCR Support for scanned PDFs and Images