
Food Tracker


This is a FastAPI backend that combines LangChain Agents, Supabase, and pgvector to deliver an AI-powered food-tracking system. The API supports natural-language queries, vector search, and tool-calling agents to retrieve meals, symptoms, and user-specific data.

It uses an Agentic RAG pipeline to ground responses on user-uploaded documents stored in Supabase.
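In broad strokes, such a pipeline wires a Supabase-backed vector store into a LangChain tool-calling agent, so the model decides when to retrieve before answering. The sketch below is illustrative only: the table name, RPC name, model, and tool description are assumptions, not this repository's actual identifiers.

import os

from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.tools.retriever import create_retriever_tool
from langchain_community.vectorstores import SupabaseVectorStore
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

# pgvector-backed store; "documents" / "match_documents" are assumed names.
store = SupabaseVectorStore(
    client=supabase,
    embedding=OpenAIEmbeddings(),
    table_name="documents",
    query_name="match_documents",
)

# Expose retrieval as a tool so the agent decides when to ground its answer.
retrieve = create_retriever_tool(
    store.as_retriever(),
    "search_documents",
    "Search the user's uploaded documents for relevant passages.",
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using retrieved context whenever it is relevant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(ChatOpenAI(model="gpt-4o-mini"), [retrieve], prompt)
executor = AgentExecutor(agent=agent, tools=[retrieve])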

Features:

  • FastAPI backend with a clean, modular architecture
  • LangChain agent with custom tools
  • Supabase pgvector for document embeddings and semantic search
  • Agentic RAG (tool invocation + retrieval)
  • Authentication and per-user data isolation
  • Pydantic schemas
  • Streaming chat endpoint for a faster, more fluid, human-like interaction and better perceived responsiveness (see the sketch after this list)
  • Guardrails for monitoring agent work
  • LangGraph workflows for monthly recommendations
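The streaming chat endpoint could be built with FastAPI's StreamingResponse wrapped around the agent's async stream. A minimal sketch, reusing the hypothetical executor from the snippet above; the request schema is an assumption:

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
async def chat(req: ChatRequest):
    async def token_stream():
        # Yield chunks as the agent produces them instead of waiting
        # for the full completion.
        async for chunk in executor.astream({"input": req.message}):
            if "output" in chunk:
                yield chunk["output"]
    return StreamingResponse(token_stream(), media_type="text/plain")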

This is the list of existing endpoints:

Verb     Resource        Description                        Scope
POST     /login          Supabase sign-in                   Public
GET      /foods          Get food list                      Protected
GET      /foods/:id      Get a single food                  Protected
POST     /foods          Create a food                      Protected
PUT      /foods/:id      Update a food                      Protected
DELETE   /foods/:id      Delete a food                      Protected
GET      /meals          Get meal list                      Protected
GET      /meals/:id      Get a single meal                  Protected
POST     /meals          Create a meal                      Protected
PUT      /meals/:id      Update a meal                      Protected
DELETE   /meals/:id      Delete a meal                      Protected
GET      /symptoms       Get symptom list                   Protected
GET      /symptoms/:id   Get a single symptom               Protected
POST     /symptoms       Create a symptom                   Protected
PUT      /symptoms/:id   Update a symptom                   Protected
DELETE   /symptoms/:id   Delete a symptom                   Protected
POST     /documents      Upload a document                  Protected
POST     /chat           Send a message to the agent        Protected
POST     /insights       Gather monthly insights per user   Hidden
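As an example, a client first signs in and then passes the returned token on protected routes. This sketch is illustrative; the access_token response field is an assumption about what the Supabase sign-in returns:

import httpx

BASE_URL = "http://localhost:8000"

with httpx.Client(base_url=BASE_URL) as client:
    # Public: sign in through Supabase.
    auth = client.post("/login", json={"email": "me@example.com", "password": "secret"})
    token = auth.json()["access_token"]  # assumed field name

    # Protected: every other route expects the bearer token.
    headers = {"Authorization": f"Bearer {token}"}
    meals = client.get("/meals", headers=headers).json()
    client.post("/foods", headers=headers, json={"name": "tomato"})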

Retrieval-Augmented Generation (RAG) Evaluation

I evaluated the quality of this pipeline using RAGAS, focusing on grounding, retrieval quality, and answer relevance.

Results:

Metric              Score
Faithfulness        0.93
Answer Relevancy    0.82
Context Precision   1.00
Context Recall      1.00

Interpretation:

  • The system shows high faithfulness, indicating responses are well-grounded in retrieved context.
  • Perfect context precision and recall confirm that retrieved documents are both relevant and sufficient.
  • Answer relevancy remains strong while allowing richer, user-friendly responses.
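For context, a RAGAS run over a small evaluation set typically looks like the sketch below. The sample row is made up, and it assumes the classic question/answer/contexts/ground_truth dataset schema:

from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (
    answer_relevancy,
    context_precision,
    context_recall,
    faithfulness,
)

# One illustrative sample: the question, the agent's answer, the retrieved
# contexts, and a reference answer to score against.
samples = Dataset.from_dict({
    "question": ["Which foods triggered my symptoms last week?"],
    "answer": ["Tomatoes and aged cheese correlate with your logged headaches."],
    "contexts": [["Meal log: tomatoes, aged cheese ...", "Symptom log: headache ..."]],
    "ground_truth": ["Tomatoes and aged cheese."],
})

scores = evaluate(
    samples,
    metrics=[faithfulness, answer_relevancy, context_precision, context_recall],
)
print(scores)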

Workflow

The POST /insights endpoint is special: it runs once a month, triggered by a cron job, and its sole purpose is to prepare recommendations for each user based on the meals they logged and the symptoms those meals caused over the previous month.

You can see the process summarized in the image below:

[insights diagram]

The Process Flow

  1. Scheduled Trigger

A cron job runs on the first of every month (automated scheduling).

  2. Insights Generation

The cron job makes a POST /insights request to the API, which triggers the AnalystAgent and OutputAgent workflow.

  3. Dual Database Operations

Upsert to the email_jobs table: stores the email job record with user info, content, and status.
Queue message: sends a message to the email_jobs queue for processing.

  4. Asynchronous Processing

Jobs are queued for background processing using pgmq (PostgreSQL Message Queue); see the sketch after this list.

  5. Email Delivery

A background worker processes the queue and sends the actual emails.
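Here is a sketch of the queue half of that flow using pgmq's SQL functions through psycopg. The queue name mirrors the description above, while the payload shape and connection handling are assumptions:

import json
import os

import psycopg

with psycopg.connect(os.environ["DATABASE_URL"]) as conn:
    # Producer: after upserting the email_jobs row, enqueue one job per user.
    conn.execute(
        "SELECT pgmq.send(%s, %s::jsonb)",
        ("email_jobs", json.dumps({"user_id": "123", "period": "2025-01"})),
    )

    # Worker: read one message (hidden from other workers for 30 seconds),
    # send the email, then archive it so it cannot be delivered twice.
    row = conn.execute(
        "SELECT msg_id, message FROM pgmq.read(%s, 30, 1)", ("email_jobs",)
    ).fetchone()
    if row:
        msg_id, message = row
        # ... send the email and mark the email_jobs row as SENT ...
        conn.execute("SELECT pgmq.archive(%s, %s)", ("email_jobs", msg_id))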

Key Architecture Benefits:

  • Reliability: Database record ensures no emails are lost
  • Idempotency: Upsert prevents duplicate emails for the same user/period
  • Scalability: Async queue handles email delivery without blocking the API
  • Monitoring: Database records allow tracking of email status (PENDING/SENT/FAILED)

The diagram of the workflow is as follows:

[workflow diagram]

As you can see, it is simply a graph with two nodes (a minimal LangGraph sketch follows the node descriptions below):

Analysis Node (AnalystAgent)

The analysis node is responsible for:

Data Analysis: Acts as an expert histamine allergy data analyst that processes user data to identify patterns and correlations between foods consumed and symptoms experienced.

Tool Usage: Has access to specialized tools (via create_tools(db)) that can query the database to gather necessary food and symptom data for a specific user.

Insight Generation: Analyzes which foods a user should avoid and which ones they should consume more of based on their symptom patterns.

Data Processing: Takes a user ID as input and performs deep analysis by invoking: "Which foods the user {user_id} should avoid and which ones they should eat more of based on their symptoms."

Raw Analysis Output: Returns detailed analytical findings that serve as input for the next stage.

Output Node (OutputAgent)

The output node is responsible for:

Content Formatting: Takes the raw analysis from the analyst and transforms it into user-friendly, digestible insights.

User Experience: Ensures the recommendations are clear, actionable, and empathetic to users dealing with histamine allergies.

Content Constraints: Applies specific formatting rules:

  • Maximum of 8 bullet points
  • Each bullet point limited to 20 words
  • Simple, accessible language
  • A neutral, clear tone

No Additional Analysis: Focuses solely on presentation and communication rather than performing new analysis.

Final Output: Produces the final insights that will be delivered to the user.
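A minimal sketch of how such a two-node graph is wired in LangGraph. The state fields are illustrative, and analyst_agent / output_agent stand in for however the repository actually constructs those agents:

from typing import TypedDict

from langgraph.graph import END, START, StateGraph

# analyst_agent and output_agent are assumed, pre-built LangChain runnables.

class InsightState(TypedDict):
    user_id: str
    analysis: str   # raw findings from the AnalystAgent
    insights: str   # user-facing bullets from the OutputAgent

def analysis_node(state: InsightState) -> dict:
    # AnalystAgent: query meals/symptoms via its tools and correlate them.
    findings = analyst_agent.invoke(
        f"Which foods the user {state['user_id']} should avoid and which ones "
        "they should eat more of based on their symptoms."
    )
    return {"analysis": findings}

def output_node(state: InsightState) -> dict:
    # OutputAgent: reformat the findings into at most 8 bullets of <= 20 words.
    return {"insights": output_agent.invoke(state["analysis"])}

graph = StateGraph(InsightState)
graph.add_node("analysis", analysis_node)
graph.add_node("output", output_node)
graph.add_edge(START, "analysis")
graph.add_edge("analysis", "output")
graph.add_edge("output", END)
workflow = graph.compile()  # workflow.invoke({"user_id": "..."}) runs both nodes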

Deployment

You can deploy it to Railway by containerizing the application, pushing the image to Docker Hub, and then pulling it from there:

docker buildx build --platform linux/amd64,linux/arm64 -t username/food-tracker:v1.0 --push .

How to run it locally?

Clone the repository

git clone https://github.com/eiberham/food-tracker-api.git

Create a virtual environment

python3 -m venv food-tracker-env

Activate it

source food-tracker-env/bin/activate

Install the pinned dependencies

pip install -r requirements.txt

Run it

fastapi dev main.py

How to run unit tests?

You can run the unit tests by issuing the following command:

pytest

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

License

This project is licensed under the MIT License - see the LICENSE file for details.
