A Model Context Protocol (MCP) server for comprehensive monitoring and observability of agent systems instrumented with Langfuse.
This MCP server allows you to:
- Monitor all your agents in real-time
- Track performance metrics (latency, cost, token usage)
- Debug failed executions with detailed traces
- Analyze agent performance across time periods
- Compare different agent versions via metadata filters
- Manage costs and set budget alerts
- Visualize agent workflows
- Python 3.11 or higher
- A Langfuse account
- Agents instrumented with Langfuse
```bash
# Install via pip
pip install -r requirements.txt

# Or install from source
git clone https://github.com/yourusername/langfuse-mcp-python.git
cd langfuse-mcp-python
pip install -e .
```

Create a `.env` file with your Langfuse credentials:
```bash
cp .env.example .env
# Edit .env and add your credentials
```

Your `.env` should look like:

```bash
LANGFUSE_PUBLIC_KEY=pk-lf-xxxxx
LANGFUSE_SECRET_KEY=sk-lf-xxxxx
LANGFUSE_HOST=https://cloud.langfuse.com
```

If you want a Streamable HTTP URL that works across all tools, run the server with the Streamable HTTP transport:

```bash
python -m langfuse_mcp_python --transport streamable-http --host 127.0.0.1 --port 8000 --path /mcp
```

Or, for the SSE transport:

```bash
python -m langfuse_mcp_python --transport sse --host 127.0.0.1 --port 8000
```

You can then connect any Streamable HTTP-compatible MCP client to:

```
http://127.0.0.1:8000/mcp
```

If you are using Claude Desktop or Cursor, keep the default stdio transport in their configs.
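Before wiring the server into a client, it can help to confirm the three required variables are actually set. A small sketch of such a check (the helper name is ours, not part of the package):

```python
import os

REQUIRED_VARS = ("LANGFUSE_PUBLIC_KEY", "LANGFUSE_SECRET_KEY", "LANGFUSE_HOST")

def check_langfuse_env(env=os.environ):
    """Return the required Langfuse variables that are missing or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

missing = check_langfuse_env({"LANGFUSE_PUBLIC_KEY": "pk-lf-xxxxx"})
print(missing)  # ['LANGFUSE_SECRET_KEY', 'LANGFUSE_HOST']
```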
Add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "langfuse-monitor": {
      "command": "uvx",
      "args": ["--python", "3.11", "langfuse-mcp-python"],
      "env": {
        "LANGFUSE_PUBLIC_KEY": "pk-lf-xxxxx",
        "LANGFUSE_SECRET_KEY": "sk-lf-xxxxx",
        "LANGFUSE_HOST": "https://cloud.langfuse.com"
      }
    }
  }
}
```

Add to `.cursor/mcp.json`:
```json
{
  "mcpServers": {
    "langfuse-monitor": {
      "command": "python",
      "args": ["-m", "langfuse_mcp_python"],
      "env": {
        "LANGFUSE_PUBLIC_KEY": "pk-lf-xxxxx",
        "LANGFUSE_SECRET_KEY": "sk-lf-xxxxx"
      }
    }
  }
}
```

Make sure your agents send traces to Langfuse:
```python
from langfuse.langchain import CallbackHandler
from langgraph.graph import StateGraph

# Create the Langfuse callback handler
langfuse_handler = CallbackHandler(
    public_key="pk-lf-xxxxx",
    secret_key="sk-lf-xxxxx",
    host="https://cloud.langfuse.com"
)

# Create your agent (AgentState, planner_node, and executor_node
# are defined elsewhere in your application)
workflow = StateGraph(AgentState)
workflow.add_node("planner", planner_node)
workflow.add_node("executor", executor_node)
app = workflow.compile()

# Run with Langfuse monitoring
result = app.invoke(
    {"input": "user query"},
    config={
        "callbacks": [langfuse_handler],
        "metadata": {
            "agent_name": "my_planner_agent",
            "version": "v1.0"
        }
    }
)
```

Project structure:

| Path | Purpose |
|------|---------|
| `src/langfuse_mcp_python/server.py` | CLI entrypoint and stdio transport |
| `src/langfuse_mcp_python/http_server.py` | Streamable HTTP and SSE transports |
| `src/langfuse_mcp_python/utils/tool_registry.py` | Tool setup and registration |
| `src/langfuse_mcp_python/tools/` | Tool implementations and specs |
| `src/langfuse_mcp_python/integrations/langfuse_client.py` | Langfuse API client |
| `src/langfuse_mcp_python/core/base_tool.py` | Shared cache and metrics |
| Tool | Description |
|------|-------------|
| `watch_agents` | Monitor active agents |
| `get_trace` | Fetch a trace by ID |
| `analyze_performance` | Aggregate performance over time |
| `get_metrics` | Aggregate metrics (latency, cost, tokens) |
| `get_scores` | Fetch scores |
| `submit_score` | Create a score |
| `get_score_configs` | List score configurations |
| `get_prompts` | List prompts |
| `create_prompt` | Create a prompt |
| `delete_prompt` | Delete a prompt |
| `get_sessions` | List sessions |
| `get_datasets` | List datasets |
| `create_dataset` | Create a dataset |
| `create_dataset_item` | Add an item to a dataset |
| `get_models` | List models |
| `create_model` | Create a model |
| `delete_model` | Delete a model |
| `get_comments` | List comments |
| `add_comment` | Add a comment |
| `delete_trace` | Delete a trace |
| `get_annotation_queues` | List annotation queues |
| `create_annotation_queue` | Create a queue |
| `get_queue_items` | List queue items |
| `resolve_queue_item` | Resolve a queue item |
| `get_blob_storage_integrations` | List integrations |
| `upsert_blob_storage_integration` | Create or update an integration |
| `get_blob_storage_integration_status` | Fetch integration status |
| `delete_blob_storage_integration` | Delete an integration |
| `get_llm_connections` | List connections |
| `upsert_llm_connection` | Create or update a connection |
| `get_projects` | List projects |
| `create_project` | Create a project |
| `update_project` | Update a project |
| `delete_project` | Delete a project |
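Under the hood, each of these tools is invoked via an MCP `tools/call` JSON-RPC request. A minimal sketch of that wire format (the `time_range` argument is illustrative, not a documented parameter of this server):

```python
import json

def build_tool_call(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 `tools/call` request as used by MCP."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

msg = build_tool_call("watch_agents", {"time_range": "last_1h"})
print(json.dumps(msg))
```

An MCP client library normally builds and sends these messages for you; this only shows what travels over stdio or HTTP.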
Monitor all active agents in real-time.
Example:

```
Show me all active agents from the last hour
```

Response:

```
Active Agent Monitoring (last_1h)
Total Traces Found: 15
Showing: Top 10 traces

1. research_agent (Trace: trace-abc12...)
   - Status: completed
   - Session: session-xyz
   - Started: 2026-03-19T10:25:00Z
   - Latency: 1250ms
   - Tokens: 3420
   - Cost: $0.0234
```

More example prompts:

```
Watch only my research_agent and planner_agent from the last 24 hours
Analyze performance of my planner_agent over the last 24 hours
Show cost breakdown by agent for the last week
Show trace details for trace-abc123
```
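Conceptually, `analyze_performance` and `get_metrics` reduce a batch of traces to aggregates like those shown in the response above. A rough sketch of that reduction, with illustrative field names:

```python
from statistics import mean

def summarize_traces(traces):
    """Aggregate latency, token, and cost figures from a list of trace
    dicts (field names here are illustrative, not the Langfuse schema)."""
    return {
        "count": len(traces),
        "avg_latency_ms": mean(t["latency_ms"] for t in traces),
        "total_tokens": sum(t["tokens"] for t in traces),
        "total_cost_usd": round(sum(t["cost_usd"] for t in traces), 4),
    }

traces = [
    {"latency_ms": 1250, "tokens": 3420, "cost_usd": 0.0234},
    {"latency_ms": 950, "tokens": 2100, "cost_usd": 0.0151},
]
print(summarize_traces(traces))
```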
```
MCP Client (Claude, Cursor, etc.)
  -> Langfuse MCP Server (stdio/HTTP)
    -> Langfuse API
      -> Langfuse Platform
        -> Your Langfuse Agents
```
- Never commit credentials; use environment variables instead
- Rotate API keys regularly
- Use read-only keys where possible
- Enable rate limiting in production
- Mask sensitive data in traces
- Check active agents: `watch_agents`
- Review performance: `analyze_performance`
- Check costs: `get_metrics`
- Investigate failures: `get_trace`
- Establish a baseline: `analyze_performance` for the current version's metadata
- Deploy the new version with different metadata
- Compare versions by running `analyze_performance` with version filters
- Make data-driven deployment decisions
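The comparison step can be as simple as computing per-metric relative change between the two versions' summaries (the metric names here are illustrative):

```python
def compare_versions(baseline, candidate):
    """Relative change per metric; positive means the candidate is higher."""
    return {
        k: round((candidate[k] - baseline[k]) / baseline[k], 3)
        for k in baseline
    }

v1 = {"avg_latency_ms": 1200.0, "cost_per_run_usd": 0.021}
v2 = {"avg_latency_ms": 900.0, "cost_per_run_usd": 0.027}
print(compare_versions(v1, v2))  # latency down 25%, cost up ~28.6%
```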
- Track costs: `get_metrics` grouped by agent
- Identify expensive agents
- Optimize high-cost operations
- Track savings over time
- Check environment variables are set correctly
- Verify Langfuse API keys are valid
- Ensure Python 3.11+ is installed
- Check logs: `tail -f ~/.mcp/logs/langfuse-monitor.log`
- Verify agents are instrumented with Langfuse
- Check `langfuse_handler` is passed to agent invocations
- Ensure metadata includes `agent_name`
- Verify the time window is appropriate
- Reduce the number of traces fetched (use filters)
- Enable caching: `CACHE_ENABLED=true`
- Use "minimal" depth for trace details
- Consider batch processing for large datasets
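The caching tip can be pictured as a small TTL cache sitting in front of the Langfuse API; a sketch of the idea (not the server's actual implementation):

```python
import time

class TTLCache:
    """Minimal time-to-live cache: entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds=60.0):
        self.ttl = ttl_seconds
        self._store = {}

    def set(self, key, value, now=None):
        now = time.monotonic() if now is None else now
        self._store[key] = (value, now + self.ttl)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if now >= expires:
            del self._store[key]  # drop stale entry
            return None
        return value

cache = TTLCache(ttl_seconds=30.0)
cache.set("trace:abc", {"status": "completed"}, now=0.0)
print(cache.get("trace:abc", now=10.0))  # {'status': 'completed'}
print(cache.get("trace:abc", now=31.0))  # None
```

The `now` parameter is only there to make the sketch deterministic; in real use the monotonic clock default applies.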
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Submit a pull request
MIT License - see LICENSE file for details
- Langfuse - Open-source LLM observability
- LangGraph - Agent framework
- Model Context Protocol - MCP specification
- Core monitoring tools
- Performance analysis
- Cost tracking
- Debugging utilities
- Real-time streaming updates
- Custom alert system
- Predictive analytics
- A/B testing support
- Multi-project support
- Export to data warehouses
Version: 1.0.0
Last Updated: March 23, 2026
Status: Production Ready