Add an AI assistant to your /docs page.
Try the Standalone Demo
```
pip install docbuddy
```

```python
from fastapi import FastAPI
from docbuddy import setup_docs

app = FastAPI()
setup_docs(app)  # replaces the default /docs
```

That's it! Visit /docs.
Screenshots: API Explorer, Chat Interface, Workflow Panel, LLM Settings.
- 💬 Chat interface with full OpenAPI context
- 🤖 LLM Settings panel with local providers (Ollama, LM Studio, vLLM, Custom)
- 🔗 Tool calling for API requests
- 🎨 Dark/light theme support
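The OpenAPI context behind the chat comes from the schema FastAPI serves at /openapi.json. A rough sketch of the kind of endpoint summary an assistant can build from that schema (the sample schema and helper below are illustrative, not DocBuddy internals):

```python
# Sketch: turn an OpenAPI schema dict into endpoint summaries an LLM
# can reason over. The schema here is a hand-written sample.

def summarize_endpoints(schema: dict) -> list[str]:
    """List 'METHOD /path - summary' strings from an OpenAPI schema."""
    lines = []
    for path, ops in schema.get("paths", {}).items():
        for method, op in ops.items():
            lines.append(f"{method.upper()} {path} - {op.get('summary', '')}")
    return lines

sample_schema = {
    "openapi": "3.1.0",
    "paths": {
        "/users": {
            "get": {"summary": "List users"},
            "post": {"summary": "Create a user"},
        },
        "/health": {"get": {"summary": "Health check"}},
    },
}

print(summarize_endpoints(sample_schema))
```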
Ask questions like:
- "What endpoints are available?"
- "Create a curl command for adding a new user"
- "Ping health"
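For the curl question above, the assistant's answer might look like the following (the host, port, /users route, and payload are illustrative and depend on your API's schema):

```shell
# Hypothetical answer to "Create a curl command for adding a new user".
# Adjust host, port, path, and body to match your API's OpenAPI schema.
curl -X POST http://localhost:8000/users \
  -H "Content-Type: application/json" \
  -d '{"name": "Ada", "email": "ada@example.com"}'
```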
Enable tool calling in the settings to allow the assistant to make API requests on your behalf.
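Under the hood, tool calling generally means translating OpenAPI operations into the LLM provider's function/tool schema. A minimal sketch of that mapping, using the common OpenAI-style tools format (DocBuddy's actual implementation may differ):

```python
# Sketch: map one OpenAPI operation to an OpenAI-style tool definition,
# so the model can request an API call (e.g. GET /health) on the user's behalf.

def operation_to_tool(method: str, path: str, op: dict) -> dict:
    """Build a tool definition from an OpenAPI operation object."""
    params = {
        p["name"]: {"type": p.get("schema", {}).get("type", "string")}
        for p in op.get("parameters", [])
    }
    return {
        "type": "function",
        "function": {
            "name": f"{method}_{path.strip('/').replace('/', '_') or 'root'}",
            "description": op.get("summary", f"{method.upper()} {path}"),
            "parameters": {"type": "object", "properties": params},
        },
    }

tool = operation_to_tool("get", "/health", {"summary": "Health check"})
print(tool["function"]["name"])
```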
DocBuddy has a standalone webpage (e.g. hosted on GitHub Pages) that connects to any OpenAPI schema and LLM provider. However, due to browser security restrictions (CORS), if you want to use local LLMs, you must run DocBuddy locally instead of from GitHub Pages.
- Run `python3 -m http.server 8080` from the repo root
- Visit http://localhost:8080/docs/index.html in your browser
- Choose your local LLM provider (Ollama, LM Studio, vLLM, or Custom)
- Enter the API endpoint for your LLM (e.g. `http://localhost:1234/v1` for LM Studio)
- Verify that the plugin can connect to your LLM provider, then select a model from the dropdown
- Enable tool calling if you want the assistant to make API requests on your behalf.
Some local LLM providers require you to enable CORS in their API settings before the plugin can connect.
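The CORS failure mode is worth understanding: the browser sends your page's `Origin` header, and it only lets the plugin read the response if the provider echoes that origin (or `*`) back in `Access-Control-Allow-Origin`. A small illustrative check of that rule (not DocBuddy code):

```python
# Sketch: the check a browser effectively applies to a CORS response.
# If this returns False for your page's origin, the plugin cannot connect.

def cors_allows(origin: str, response_headers: dict) -> bool:
    allowed = response_headers.get("Access-Control-Allow-Origin", "")
    return allowed == "*" or allowed == origin

# e.g. Ollama started with OLLAMA_ORIGINS="*" responds with a wildcard:
print(cors_allows("http://localhost:8080", {"Access-Control-Allow-Origin": "*"}))
# A provider with no CORS configured sends no header at all:
print(cors_allows("http://localhost:8080", {}))
```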

Run the example server:

```
uvicorn examples.demo_server:app --reload --host 0.0.0.0 --port 3333
```

Set up the development environment and run the checks:

```
pip install -e ".[dev]"
pytest tests/
pre-commit run --all-files
```


