Multi-Language Content Automation | LangGraph Orchestration + FastAPI + OpenAI
An end-to-end autonomous content generation system that takes a topic and produces a fully structured, SEO-optimised blog post, translated into any of 6 languages, through a 4-node LangGraph state machine, served via a FastAPI REST API with a live web frontend deployed on Vercel.
Live Demo → blog-agent on Vercel
| Metric | Value |
|---|---|
| Minimum blog length | 500+ words per generation |
| Languages supported | 6 (English, Spanish, French, German, Telugu, Swahili) |
| LangGraph nodes in pipeline | 4 (title → content → router → translation) |
| Graph execution modes | 2 (topic-only / language-aware) |
| API endpoints | 2 (GET / frontend, POST /blogs) |
| LLM providers supported | 2 (OpenAI GPT-4o-mini, Groq Llama 3.3-70B) |
| Lines of application code | < 250 across the entire src/ package |
```
Client Request → POST /blogs { topic, language? }
                     │
                     ▼
        app.py selects graph mode
      based on presence of "language"
                     │
        ┌────────────┴────────────┐
        │                         │
   Topic Mode               Language Mode
  (3-node graph)            (4-node graph)
        │                         │
        └────────────┬────────────┘
                     ▼
        ┌────────────────────────┐
        │  title_creation_node   │  LLM generates SEO-optimised
        └────────────┬───────────┘  markdown title from topic
                     ▼
        ┌─────────────────────────┐
        │ content_generation_node │  LLM writes 500+ word
        └────────────┬────────────┘  structured markdown blog
                     ▼  (Language Mode only)
        ┌──────────────────────┐
        │ language_router_node │  Conditional edge: routes to the
        └──┬──┬──┬──┬──┬──┬────┘  correct translation node, or
           │  │  │  │  │  │       exits if the language is English
          ES FR DE TE SW END
           │  │  │  │  │
        ┌──┴──┴──┴──┴──┴────────────┐
        │ translation_node(language) │  Translates content preserving
        └────────────┬───────────────┘  tone, structure & markdown
                     ▼
        ┌─────────────────────────────────┐
        │ JSON Response                   │
        │ { data: { topic, blog:          │
        │   { title, content } } }        │
        └─────────────────────────────────┘
```
```
Topic Graph:    START → title_creation → content_generation → END

Language Graph: START → title_creation → content_generation
                      → language_router ─┬─ spanish_node ─┐
                                         ├─ french_node  ─┤
                                         ├─ german_node  ─┼─→ END
                                         ├─ telugu_node  ─┤
                                         ├─ swahili_node ─┘
                                         └─ END (if english)
```
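The conditional edge above can be sketched in plain Python. This is a hypothetical illustration of the routing logic the graph implies; the `TRANSLATION_NODES` mapping and the node-name strings are assumptions, not the project's actual identifiers.

```python
# Hypothetical sketch of the language router's conditional-edge logic.
# LangGraph maps the string a router returns to the next node to run.
TRANSLATION_NODES = {
    "spanish": "spanish_node",
    "french": "french_node",
    "german": "german_node",
    "telugu": "telugu_node",
    "swahili": "swahili_node",
}

def language_router(state: dict) -> str:
    """Return the next node name; English (or unknown) skips translation."""
    lang = state.get("language_content", "english").lower()
    return TRANSLATION_NODES.get(lang, "END")
```

Because the router returns a plain string, adding a language is just another dictionary entry plus the matching node.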
```python
Blogstate = {
    "topic": str,             # user input
    "blog": Blog,             # { title: str, content: str }
    "language_content": str,  # target language (optional)
}
```

State flows immutably through every node: each node receives the full state and returns only the fields it modifies.
Rather than chaining LLM calls imperatively, the pipeline is modelled as a directed acyclic graph. LangGraph manages state transitions, conditional routing, and execution order. This makes the pipeline easy to extend (adding a new language takes three small edits: a new node, a new edge, and a new router branch) and trivial to debug in LangGraph Studio.
A simpler design would be a single graph with an optional translation branch. Instead, two distinct graphs are compiled at request time based on whether a language field is present. This keeps graph complexity low and avoids dead nodes for the majority of requests.
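The mode selection itself is trivial. A minimal sketch of the decision `app.py` makes per request (function and graph names here are illustrative, not the project's actual identifiers):

```python
# Hypothetical sketch of per-request graph selection: the presence of a
# non-empty "language" field decides which graph gets compiled.
def select_graph_mode(payload: dict) -> str:
    if payload.get("language"):
        return "language_graph"  # 4-node graph with router + translation
    return "topic_graph"         # 3-node graph, English only
```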
The LLM class in src/llms/llm.py exposes openaillm() and groqllm() factory methods. Swapping the underlying model requires zero changes to any node or graph; only the factory call in app.py changes.
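The factory pattern can be sketched as follows. The method names `openaillm()`/`groqllm()` come from the README; the returned config dicts are stand-ins for real SDK client objects, so this is an illustration of the pattern, not the project's implementation.

```python
import os

# Illustrative provider factory in the spirit of src/llms/llm.py.
# Real code would return langchain_openai / langchain_groq clients;
# plain dicts stand in for them here.
class LLM:
    def openaillm(self):
        return {"provider": "openai",
                "model": os.getenv("OPENAI_MODEL", "gpt-4o-mini")}

    def groqllm(self):
        return {"provider": "groq",
                "model": os.getenv("GROQ_MODEL", "llama-3.3-70b-versatile")}
```

Nodes only ever see the object the factory hands them, so changing provider is a one-line change at the call site.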
The Blog model is a Pydantic BaseModel. Every node that writes to blog produces a validated, type-safe object. Invalid LLM outputs fail fast at the state boundary rather than causing silent downstream errors.
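A stdlib analogue of that fail-fast behaviour (the project uses a Pydantic `BaseModel`, which performs the equivalent type validation automatically; the dataclass and helper below are illustrative):

```python
from dataclasses import dataclass

# Stdlib stand-in for the Pydantic Blog model: reject malformed LLM
# output the moment it crosses the state boundary.
@dataclass
class Blog:
    title: str
    content: str

    def __post_init__(self):
        if not isinstance(self.title, str) or not isinstance(self.content, str):
            raise TypeError("Blog fields must be strings")

def validate_blog(raw: dict) -> Blog:
    """Build a validated Blog, or raise immediately on bad input."""
    return Blog(**raw)
```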
| Library | Version | Role |
|---|---|---|
| LangGraph | 1.0.7 | State machine graph orchestration |
| LangChain | 1.2.7 | LLM interface & prompt management |
| OpenAI GPT-4o-mini | - | Primary content generation model |
| Groq Llama 3.3-70B | - | Alternative LLM provider |
| LangSmith | 0.6.4 | Tracing & observability (optional) |
| Library | Version | Role |
|---|---|---|
| FastAPI | 0.128 | Async REST API framework |
| Pydantic | 2.12 | Data validation & state typing |
| Uvicorn | 0.40 | ASGI server |
| python-dotenv | 1.2 | Environment configuration |
| Tool | Role |
|---|---|
| Vanilla JS + Tailwind CSS | Responsive single-page frontend |
| Marked.js | Markdown → HTML rendering |
| highlight.js | Code block syntax highlighting |
| Vercel | Serverless deployment (Python 3.12 runtime) |
```
Blog-Agent/
├── app.py                   # FastAPI entrypoint: graph selection, route handlers
├── frontend/
│   └── index.html           # Single-page frontend (Tailwind + Marked.js)
├── src/
│   ├── graphs/
│   │   └── graph_builder.py # Graph_builder class: compiles both graph modes
│   ├── llms/
│   │   └── llm.py           # LLM factory: OpenAI & Groq providers
│   ├── nodes/
│   │   └── blog_node.py     # All 4 node functions + 5 translation nodes
│   └── states/
│       └── blogstate.py     # Blogstate TypedDict + Blog Pydantic model
├── langgraph.json           # LangGraph Studio config: points at graph_builder:graph
├── pyproject.toml           # Dependencies (uv lockfile)
├── vercel.json              # Vercel build config
└── .python-version          # Pins Python 3.12 for Vercel runtime
```
- Python 3.12+
- OpenAI API key (or Groq API key)
```bash
git clone https://github.com/Puneeth0106/Blog-Agent.git
cd Blog-Agent
pip install -e .
```

```bash
# .env
OPENAI_API_KEY=your_key_here
OPENAI_MODEL=gpt-4o-mini

# Optional: Groq alternative
GROQ_API_KEY=your_key_here
GROQ_MODEL=llama-3.3-70b-versatile

# Optional: LangSmith tracing
LANGCHAIN_API_KEY=your_key_here
LANGCHAIN_PROJECT=blog-agent
LANGCHAIN_TRACING_V2=true
```

```bash
python3 app.py
# → http://localhost:8000
```

```bash
langgraph up
# Visual graph debugger at http://localhost:8123
```

Generate a blog post from a topic, with optional translation.
Request body

```json
{ "topic": "The Future of Quantum Computing", "language": "spanish" }
```

`language` is optional; omit it for English output. Supported values: `english`, `spanish`, `french`, `german`, `telugu`, `swahili`.

Response

```json
{
  "data": {
    "topic": "The Future of Quantum Computing",
    "blog": {
      "title": "# El Futuro de la Computación Cuántica",
      "content": "## Introducción\n\nLa computación cuántica..."
    },
    "language_content": "spanish"
  }
}
```

Examples

```bash
# English
curl -X POST http://localhost:8000/blogs \
  -H "Content-Type: application/json" \
  -d '{"topic": "The Future of Quantum Computing"}'

# Spanish
curl -X POST http://localhost:8000/blogs \
  -H "Content-Type: application/json" \
  -d '{"topic": "The Future of Quantum Computing", "language": "spanish"}'
```

3 steps, ~10 lines of code:

1. `src/nodes/blog_node.py`: add a translation node method following the existing pattern
2. `src/graphs/graph_builder.py`: register the node and its edge in `build_language_graph()`
3. `src/nodes/blog_node.py`: add a routing branch in `language_router_node()`
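Those three steps can be sketched end to end. Everything below is hypothetical (an imagined Italian language, stand-in node logic, and `graph.add_node`/`add_edge` shown only as comments); it illustrates the shape of the change, not the project's actual code.

```python
# Hypothetical sketch of adding Italian support.
TRANSLATION_NODES = {"spanish": "spanish_node", "french": "french_node"}

# Step 1 (src/nodes/blog_node.py): a new translation node following
# the existing pattern; the f-string stands in for an LLM call.
def italian_node(state: dict) -> dict:
    translated = f"[it] {state['blog']['content']}"
    return {"blog": {**state["blog"], "content": translated}}

# Step 2 (src/graphs/graph_builder.py): register node + edge, roughly:
#   graph.add_node("italian_node", italian_node)
#   graph.add_edge("italian_node", END)

# Step 3 (language_router_node): one new routing branch.
TRANSLATION_NODES["italian"] = "italian_node"
```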
- Agentic pipeline design: a multi-step LLM workflow with stateful graph orchestration, not a single prompt call
- Conditional graph routing: dynamic execution paths determined at runtime based on input, not hardcoded branches
- Dual-provider LLM abstraction: a production-ready pattern for model-agnostic AI systems
- Type-safe state machine: Pydantic validation at every state boundary prevents silent failures
- Full-stack deployment: Python serverless backend + static frontend, both served from a single Vercel project
- Extensible by design: a new language in 3 edits, a new LLM provider in 1 file, a new node in 2 files
GitHub: Puneeth0106 LinkedIn: LinkedIn Profile Email: puneethkumaramudala7@gmail.com