Advanced RAG Example: query markdown documents.
Install `uv` and run `uv sync` in the project root to install the dependencies.
Also install Ollama and pull the models `llama3.1:8b` and `nomic-embed-text-v2-moe`.
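The setup steps above can be sketched as a short shell session (assuming `uv` and `ollama` are already on your `PATH`):

```shell
# Install the project dependencies declared in pyproject.toml
uv sync

# Pull the chat model and the embedding model used by the app
ollama pull llama3.1:8b
ollama pull nomic-embed-text-v2-moe
```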
Create a `.env` file in the project root with your OpenAI and Hugging Face secret tokens:
```
OPENAI_API_KEY=***
HF_TOKEN=***
```
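At runtime these variables need to end up in the process environment; the project likely uses a library such as `python-dotenv` for this, but a minimal stdlib-only loader (an illustrative sketch, not the project's actual code) looks like:

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: export KEY=VALUE lines into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            # setdefault: values already set in the real environment win
            os.environ.setdefault(key.strip(), value.strip())
```

Call `load_env()` once at startup, then read the keys with `os.environ["OPENAI_API_KEY"]`.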
To ingest the markdown files into a Chroma vector database, run:

```shell
python langchain_rag/ingest.py
```

To run the chatbot UI (Gradio) after the ingestion:

```shell
python langchain_rag_app.py
```

To ingest the markdown files without using LangChain:

```shell
python advanced_rag/ingest.py
```

To run the chatbot UI (Gradio) after the advanced ingestion without LangChain:

```shell
python advanced_rag_app.py
```
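Both pipelines boil down to the same idea: split the markdown into chunks, embed each chunk, store the vectors, and answer queries by similarity search. A stdlib-only sketch of the retrieval step (with toy hand-made vectors standing in for real `nomic-embed-text-v2-moe` embeddings; the chunk texts and dimensions are illustrative assumptions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec: list[float], store: list[tuple[str, list[float]]], k: int = 2) -> list[str]:
    """Return the k chunk texts whose vectors are most similar to the query."""
    ranked = sorted(store, key=lambda item: cosine_similarity(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy (chunk text, embedding) pairs; a real store holds model-generated vectors
store = [
    ("chunk about setup", [1.0, 0.0, 0.0]),
    ("chunk about ingestion", [0.0, 1.0, 0.1]),
    ("chunk about retrieval", [0.1, 0.9, 0.2]),
]
print(retrieve([0.0, 1.0, 0.0], store, k=2))
```

In the real app, Chroma performs this nearest-neighbour search over the persisted vectors, and the top chunks are passed to `llama3.1:8b` as context for the answer.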