```mermaid
graph TD
LLMChain --> Chains
LLMChain --> Agents
LLMChain --> Providers
LLMChain --> Memory
LLMChain --> Retrieval
LLMChain --> Monitoring
LLMChain --> UI
LLMChain --> Plugins
LLMChain --> Deploy
Chains --> Workflows
Agents --> Orchestration
Providers --> AIProviders
Retrieval --> RetrieverStore
Monitoring --> Tracking
UI --> VisualUI
Plugins --> PluginSystem
Deploy --> APICLI
```
LLMChain is a professional, modular framework for building advanced LLM-powered applications and agents. It offers robust orchestration, broad AI provider support, compositional chains, memory, retrievers, monitoring, experiment tracking, enterprise features, and a modern UI.
- Modular Chains: Compose flexible workflows (QA, summarization, map-reduce, sequential, multi-prompt, etc.)
- Agent Orchestration: Pluggable strategies, subagents, voting, cascading, and more
- LLM Providers: OpenAI, Anthropic, Google, Cohere, HuggingFace, DeepSeek, Groq, Ollama, Mistral, Azure, VertexAI, Replicate, Fireworks, Together, Perplexity, MosaicML, PaLM, Bedrock, SageMaker, Clarifai, Petals, AlephAlpha, Forefront, Writer, Yandex, Baidu Qianfan, Zhipu, Baichuan, ERNIE, Spark, DashScope, Moonshot, Qwen, Yi, MiniMax, DeepInfra, and more
- Embeddings & Vector Stores: In-memory, semantic, Chroma, Pinecone, Weaviate, and more
- Retrievers: Simple, multi-query, document compression, rerankers, conversational QA
- Memory: Generic, chat, episodic, retrieval-augmented
- Prompt Engineering: Templates, manager, multi-prompt routing
- Monitoring & Evaluation: Logger, evaluator, experiment tracker, distributed tracing
- Enterprise Features: Security, scaling, environment config, .env loader
- UI Tools: Visual chain builder, dashboards
- Plugin System: Extend with custom tools, retrievers, chains
- Production Deployment: FastAPI server, CLI, Docker
- Comprehensive Documentation: API reference, tutorials, migration guides
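To illustrate the "Modular Chains" idea above, here is a minimal, self-contained sketch of sequential chain composition. The class and step names are hypothetical and stand in for LLMChain's actual API, which may differ; each step is just a function from text to text, and a sequential chain runs them in order.

```python
from typing import Callable, List

class SequentialChain:
    """Hypothetical sketch of a sequential chain: runs each step's
    output into the next step's input."""

    def __init__(self, steps: List[Callable[[str], str]]):
        self.steps = steps

    def run(self, text: str) -> str:
        for step in self.steps:
            text = step(text)
        return text

# Two toy "steps" standing in for real sub-chains (e.g. summarize, format).
summarize = lambda t: t.split(".")[0] + "."
shout = lambda t: t.upper()

pipeline = SequentialChain([summarize, shout])
print(pipeline.run("LLMChain composes steps. Extra detail here."))
```

The same shape extends to map-reduce or multi-prompt routing by swapping the composition strategy while keeping the step interface fixed.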
```mermaid
graph TD
UserInput -->|Prompt| Chain
Chain -->|LLM Call| LLM[LLM Provider]
Chain -->|Retrieve| Retriever
Retriever -->|Search| VectorStore
Chain -->|Memory| Memory
Chain -->|Callback| Monitoring
LLM -->|Embedding| Embedding
Chain -->|Agent| AgentOrchestrator
AgentOrchestrator -->|Strategy| SubAgent
Monitoring -->|Log| Logger
Monitoring -->|Evaluate| Evaluator
Chain -->|Deploy| FastAPI
Chain -->|Deploy| CLI
Chain -->|Plugin| PluginManager
PluginManager -->|Extend| Plugin
Chain -->|Prompt| PromptManager
Chain -->|Config| EnvConfig
Chain -->|Load| DotenvLoader
Chain -->|Trace| DistributedTracing
Chain -->|Scale| Scaling
Chain -->|Secure| Security
Chain -->|Docs| Documentation
Chain -->|UI| UI[Visual Builder]
UI -->|Dashboard| Dashboard
Chain -->|Experiment| ExperimentTracker
Chain -->|Debug| Debugger
```
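The Chain → Retriever → VectorStore → LLM path in the diagram above can be sketched end to end. Everything here is a stub with hypothetical names (keyword-overlap retrieval, an echo LLM) rather than LLMChain's real components; the point is the order of operations: retrieve context, prepend memory, call the provider, record the turn.

```python
from typing import List

def fake_llm(prompt: str) -> str:
    # Stub standing in for a real provider call (OpenAI, Anthropic, ...).
    return f"answer based on: {prompt}"

def retrieve(query: str, store: List[str]) -> List[str]:
    # Toy retrieval: return documents sharing any word with the query.
    return [doc for doc in store if any(w in doc for w in query.split())]

def run(query: str, store: List[str], memory: List[str]) -> str:
    context = " ".join(retrieve(query, store))
    prompt = f"{' '.join(memory)} {context} {query}".strip()
    memory.append(query)  # record the turn for later calls
    return fake_llm(prompt)

store = ["chains compose steps", "vector stores hold embeddings"]
print(run("chains question", store, []))
```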
```mermaid
flowchart LR
UserInput --> Chain
Chain --> LLMProviders
LLMProviders[AI LLM Providers: OpenAI, Anthropic, Google, Cohere, HuggingFace, etc.]
Chain --> VectorStore
Chain --> Retriever
Chain --> Memory
Chain --> Monitoring
Chain --> ExperimentTracker
Chain --> UI
Chain --> PluginManager
Chain --> FastAPI
Chain --> CLI
Chain --> Documentation
Chain --> Security
Chain --> Scaling
Chain --> DistributedTracing
Chain --> EnvConfig
Chain --> DotenvLoader
Chain --> Debugger
Chain --> PromptManager
Chain --> AgentOrchestrator
AgentOrchestrator --> SubAgent
UI --> Dashboard
```
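The AgentOrchestrator → SubAgent edge covers pluggable strategies such as voting. As a hedged sketch (the names and interface are hypothetical, not LLMChain's real agent API): several subagents answer the same prompt and the orchestrator returns the majority answer.

```python
from collections import Counter
from typing import Callable, List

def vote(subagents: List[Callable[[str], str]], prompt: str) -> str:
    """Voting strategy sketch: ask every subagent, return the most
    common answer."""
    answers = [agent(prompt) for agent in subagents]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

# Three toy subagents; two agree, so their answer wins the vote.
agents = [lambda p: "yes", lambda p: "yes", lambda p: "no"]
print(vote(agents, "Is the sky blue?"))
```

Cascading or subagent delegation fits the same shape: swap the aggregation function while keeping the subagent interface.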
- Install dependencies:

  ```bash
  pip install fastapi openai
  ```

- Run the FastAPI server:

  ```bash
  python llmchain/deployment/server.py
  ```

- Use the CLI:

  ```bash
  python llmchain/deployment/cli.py "Your prompt here"
  ```
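For orientation, a CLI entry point like the one invoked above typically parses a positional prompt argument and hands it to a chain. This is a standalone sketch with a stubbed chain, not the contents of `llmchain/deployment/cli.py`:

```python
import argparse

def run_chain(prompt: str) -> str:
    # Stub for a real chain invocation.
    return f"[chain output for: {prompt}]"

def main(argv=None) -> str:
    parser = argparse.ArgumentParser(description="Run a prompt through a chain")
    parser.add_argument("prompt", help="The prompt text to process")
    args = parser.parse_args(argv)
    result = run_chain(args.prompt)
    print(result)
    return result

if __name__ == "__main__":
    main()
```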
- Add new LLM providers, retrievers, vector stores, plugins, and chains easily
- Integrate with enterprise features, monitoring, and experiment tracking
- Visual builder and dashboard for rapid prototyping
- Comprehensive API reference and tutorials
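One common pattern behind "add new retrievers and plugins easily" is a registry that maps plugin names to classes. The following is a hypothetical sketch of that pattern; LLMChain's actual plugin API may look different:

```python
class PluginManager:
    """Sketch of a plugin registry: classes register under a name and are
    instantiated on demand."""

    def __init__(self):
        self._plugins = {}

    def register(self, name: str):
        def decorator(cls):
            self._plugins[name] = cls
            return cls
        return decorator

    def get(self, name: str):
        return self._plugins[name]()

manager = PluginManager()

@manager.register("keyword_retriever")
class KeywordRetriever:
    def retrieve(self, query: str, docs):
        # Toy retrieval: case-insensitive substring match.
        return [d for d in docs if query.lower() in d.lower()]

retriever = manager.get("keyword_retriever")
print(retriever.retrieve("chain", ["LLM chains", "vector stores"]))
```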
- Fork the repo, create a branch, and submit a pull request
- Add tests for new features
- Follow the modular structure for new components
- See CONTRIBUTING.md for details
- Issues and discussions are welcome on GitHub
- Contributions, feedback, and feature requests are encouraged
MIT License