This is a FastAPI backend that combines LangChain Agents, Supabase, and pgvector to deliver an AI-powered food tracking system. The API supports natural-language queries, vector search, and tool-calling agents that retrieve meals, symptoms, and user-specific data.
It uses an Agentic RAG pipeline to ground responses in user-uploaded documents stored in Supabase.
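To sketch the grounding step: retrieval can be as simple as a pgvector similarity query scoped to the current user. The documents table shape and the direct psycopg connection below are illustrative assumptions, not the app's actual data access layer:

```python
import psycopg

def retrieve_context(conn: psycopg.Connection, user_id: str,
                     query_embedding: list[float], k: int = 4) -> list[str]:
    # Assumes a `documents` table with a pgvector `embedding` column
    # and per-user rows; `<=>` is pgvector's cosine-distance operator.
    vector_literal = "[" + ",".join(map(str, query_embedding)) + "]"
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT content
            FROM documents
            WHERE user_id = %s
            ORDER BY embedding <=> %s::vector
            LIMIT %s
            """,
            (user_id, vector_literal, k),
        )
        return [row[0] for row in cur.fetchall()]
```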
Features:
- FastAPI backend with clean modular architecture
- LangChain agent with custom tools (see the sketch after this list)
- Supabase pgvector for document embeddings and semantic search
- Agentic RAG (tool invocation + retrieval)
- Authentication and per-user data isolation
- Pydantic schemas
- Chat endpoint with streaming responses, improving perceived responsiveness with a faster, more fluid, human-like interaction
- Guardrails for monitoring agent work
- LangGraph workflows for monthly recommendations
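To give a feel for the custom tools, here is a minimal sketch of one. The tool name, the meals table shape, and the inline Supabase client are illustrative assumptions; the real tools are created in the repository via create_tools(db):

```python
from langchain_core.tools import tool
from supabase import create_client

# Illustrative setup; the real app injects a configured Supabase client.
db = create_client("https://YOUR-PROJECT.supabase.co", "YOUR-SERVICE-KEY")

@tool
def list_user_meals(user_id: str) -> str:
    """Return the meals logged by a user so the agent can reason about them."""
    # Assumes a `meals` table keyed by `user_id`, per the endpoints below.
    rows = db.table("meals").select("*").eq("user_id", user_id).execute()
    return "\n".join(str(row) for row in rows.data)
```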
This is the list of existing endpoints:
| Verb | Resource | Description | Scope |
|---|---|---|---|
| POST | /login | Supabase sign in | Public |
| GET | /foods | Get food list | Protected |
| GET | /foods/:id | Get a single food | Protected |
| POST | /foods | Create a food | Protected |
| PUT | /foods/:id | Update food | Protected |
| DELETE | /foods/:id | Delete food | Protected |
| GET | /meals | Get meal list | Protected |
| GET | /meals/:id | Get a single meal | Protected |
| POST | /meals | Create a meal | Protected |
| PUT | /meals/:id | Update meal | Protected |
| DELETE | /meals/:id | Delete a meal | Protected |
| GET | /symptoms | Get symptom list | Protected |
| GET | /symptoms/:id | Get a single symptom | Protected |
| POST | /symptoms | Create a symptom | Protected |
| PUT | /symptoms/:id | Update a symptom | Protected |
| DELETE | /symptoms/:id | Delete a symptom | Protected |
| POST | /documents | Uploads a document | Protected |
| POST | /chat | Sends a message to the agent | Protected |
| POST | /insights | Gathers monthly insights per user | Hidden |
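The streaming behavior of POST /chat can be illustrated with a minimal sketch built on FastAPI's StreamingResponse. The stream_agent_reply generator below is a hypothetical stand-in for the real LangChain agent call:

```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

async def stream_agent_reply(message: str):
    # Hypothetical stand-in: the real endpoint streams tokens from the agent
    # as they are generated instead of waiting for the full answer.
    for token in ("Looking ", "at ", "your ", "meal ", "logs..."):
        yield token

@app.post("/chat")
async def chat(request: ChatRequest) -> StreamingResponse:
    # Tokens reach the client immediately, improving perceived responsiveness.
    return StreamingResponse(
        stream_agent_reply(request.message), media_type="text/plain"
    )
```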
I evaluated the quality of this pipeline using RAGAS, focusing on grounding, retrieval quality, and answer relevance.
Results:
| Metric | Score |
|---|---|
| Faithfulness | 0.93 |
| Answer Relevancy | 0.82 |
| Context Precision | 1.00 |
| Context Recall | 1.00 |
Interpretation:
- The system shows high faithfulness, indicating responses are well-grounded in retrieved context.
- Perfect context precision and recall confirm that retrieved documents are both relevant and sufficient.
- Answer relevancy remains strong while allowing richer, user-friendly responses.
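For context, a RAGAS run along these lines looks roughly like the sketch below. The sample row is purely illustrative, not taken from the real test set, and column names may vary slightly between RAGAS versions:

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (
    answer_relevancy,
    context_precision,
    context_recall,
    faithfulness,
)

# Illustrative sample; the real evaluation uses the project's own Q&A pairs.
data = {
    "question": ["Which foods seem to trigger my symptoms?"],
    "answer": ["Based on your logs, aged cheese often precedes headaches."],
    "contexts": [["Meal: aged cheese. Symptom: headache, two hours later."]],
    "ground_truth": ["Aged cheese correlates with headaches in the logs."],
}

result = evaluate(
    Dataset.from_dict(data),
    metrics=[faithfulness, answer_relevancy, context_precision, context_recall],
)
print(result)  # per-metric scores, as reported in the table above
```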
The POST /insights endpoint is special: it is triggered by a cron job once a month, and its sole purpose is to prepare recommendations for each user based on the meals they logged and the symptoms those meals caused during the last month.
You can see the process summarized in the image below:
- Scheduled Trigger
A cron job runs automatically on the first of every month
- Insights Generation
The cron job makes a POST /insights request to the API endpoint, which triggers the AnalystAgent and OutputAgent workflow.
- Dual Database Operations
  - Upsert to the email_jobs table: stores the email job record with user info, content, and status
  - Queue message: sends a message to the email_jobs queue for processing (see the sketch after this list)
- Asynchronous Processing
Jobs are queued for background processing using pgmq (the PostgreSQL Message Queue extension)
- Email Delivery
A background worker processes the queue and sends the actual emails
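A sketch of the dual write (upsert plus queue message) is shown below. The email_jobs columns and the (user_id, period) uniqueness constraint are assumptions based on the description above; pgmq.send is the standard pgmq enqueue function:

```python
import json
import psycopg

def enqueue_insights_email(conn: psycopg.Connection, user_id: str,
                           period: str, content: str) -> None:
    with conn.cursor() as cur:
        # Upsert the job record so re-running the cron for the same
        # user/period never produces a duplicate email (idempotency).
        cur.execute(
            """
            INSERT INTO email_jobs (user_id, period, content, status)
            VALUES (%s, %s, %s, 'PENDING')
            ON CONFLICT (user_id, period)
            DO UPDATE SET content = EXCLUDED.content, status = 'PENDING'
            """,
            (user_id, period, content),
        )
        # Enqueue a message for the background email worker.
        cur.execute(
            "SELECT pgmq.send('email_jobs', %s::jsonb)",
            (json.dumps({"user_id": user_id, "period": period}),),
        )
    conn.commit()
```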
Key Architecture Benefits:
- Reliability: Database record ensures no emails are lost
- Idempotency: Upsert prevents duplicate emails for the same user/period
- Scalability: Async queue handles email delivery without blocking the API
- Monitoring: Database records allow tracking of email status (PENDING/SENT/FAILED)
The diagram of the workflow is as follows:
As you can see, it is simply a graph with two nodes.
The analysis node is responsible for:
- Data Analysis: Acts as an expert histamine allergy data analyst that processes user data to identify patterns and correlations between foods consumed and symptoms experienced.
- Tool Usage: Has access to specialized tools (via create_tools(db)) that can query the database to gather necessary food and symptom data for a specific user.
- Insight Generation: Analyzes which foods a user should avoid and which ones they should consume more of based on their symptom patterns.
- Data Processing: Takes a user ID as input and performs deep analysis by invoking: "Which foods the user {user_id} should avoid and which ones they should eat more of based on their symptoms."
- Raw Analysis Output: Returns detailed analytical findings that serve as input for the next stage.
The output node is responsible for:
- Content Formatting: Takes the raw analysis from the analyst and transforms it into user-friendly, digestible insights.
- User Experience: Ensures the recommendations are clear, actionable, and empathetic to users dealing with histamine allergies.
- Content Constraints: Applies specific formatting rules:
  - Maximum 8 bullet points
  - Each bullet point limited to 20 words
  - Uses simple, accessible language
  - Maintains a neutral, clear tone
- No Additional Analysis: Focuses solely on presentation and communication rather than performing new analysis.
- Final Output: Produces the final insights that will be delivered to the user.
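A minimal LangGraph wiring of this two-node graph could look like the sketch below; the state shape and the node bodies are simplified stand-ins for the real AnalystAgent and OutputAgent:

```python
from typing import TypedDict
from langgraph.graph import END, START, StateGraph

class InsightState(TypedDict):
    user_id: str
    analysis: str
    insights: str

def analysis_node(state: InsightState) -> dict:
    # Stand-in: the real node runs the AnalystAgent with its database tools.
    return {"analysis": f"raw findings for user {state['user_id']}"}

def output_node(state: InsightState) -> dict:
    # Stand-in: the real node applies the formatting rules listed above.
    return {"insights": f"formatted insights:\n{state['analysis']}"}

graph = StateGraph(InsightState)
graph.add_node("analysis", analysis_node)
graph.add_node("output", output_node)
graph.add_edge(START, "analysis")
graph.add_edge("analysis", "output")
graph.add_edge("output", END)

workflow = graph.compile()
result = workflow.invoke({"user_id": "42", "analysis": "", "insights": ""})
print(result["insights"])
```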
You can deploy it to Railway by containerizing the application, pushing the image to Docker Hub, and then pulling it from there.
```bash
docker buildx build --platform linux/amd64,linux/arm64 -t username/food-tracker:v1.0 --push .
```
Clone the repository:
```bash
git clone food-tracker-api
```
Create a virtual environment:
```bash
python3 -m venv food-tracker-env
```
Activate it:
```bash
source food-tracker-env/bin/activate
```
Install your exact dependencies:
```bash
pip install -r requirements.txt
```
Run it:
```bash
fastapi dev main.py
```
You can run the unit tests by issuing the following command:
```bash
pytest
```
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
This project is licensed under the MIT License - see the LICENSE file for details.