Scaffold OBSMCP: local MCP tool + FastAPI backend + React dashboard #1
Ground-up rewrite of OBSMCP as a three-tier observable dev system:
- `tool/obsmcp`: Python MCP stdio tool, dual-mode (standalone/cloud), monitors, scanners, graph builder
- `server/obsmcp_server`: FastAPI + raw `sqlite3`, SSE bus, WebSocket, Bearer auth, 10 routers
- `frontend`: React 18 + Vite + TanStack Query + Tailwind, 10 pages, SSE-driven cache invalidation

Local checks green: ruff, pytest (16), npm typecheck + build.
```python
try:
    loop = asyncio.get_running_loop()
except RuntimeError:
    # No loop in this thread; silently drop (not fatal).
    return
```
🔴 broadcast_event silently drops all SSE events because sync route handlers run in a thread pool with no event loop
broadcast_event() calls asyncio.get_running_loop() at server/obsmcp_server/sse.py:49 and silently returns on RuntimeError. Every mutation handler in the codebase (tasks, sessions, blockers, decisions, work_logs, code_atlas, knowledge_graph, performance_logs, agents) is a sync def, not async def. FastAPI runs sync handlers in a worker thread pool where there is no running asyncio event loop, so get_running_loop() always raises RuntimeError and the event is silently dropped. This means the entire SSE live-update system and WebSocket mirror are non-functional — the React dashboard will never receive real-time mutation events, and the EventBus in frontend/src/events/EventBus.ts will never invalidate TanStack Query caches.
Affected call sites (all silently fail)
Every broadcast_event(...) call in every router: tasks.py:68,98,105,117,121, sessions.py:41,70,94, blockers.py:40,73,80, decisions.py:41,61,68, work_logs.py:40,67,74, code_atlas.py:57,72, knowledge_graph.py:68,82,104,112,122,137,164, performance_logs.py:42, agents.py:48.
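The failure mode is easy to reproduce outside FastAPI. A minimal standalone sketch (not the project's code) showing that `asyncio.get_running_loop()` raises in a thread-pool worker even while a loop is running on the main thread:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


def broadcast_event_naive() -> str:
    # Mirrors the buggy pattern: ask for the running loop from a worker thread.
    try:
        asyncio.get_running_loop()
        return "delivered"
    except RuntimeError:
        return "dropped"  # swallowed silently in the real code


async def main() -> str:
    # FastAPI runs sync `def` handlers via run_in_executor, i.e. a thread pool.
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor() as pool:
        return await loop.run_in_executor(pool, broadcast_event_naive)


print(asyncio.run(main()))  # → dropped
```

The loop is alive on the main thread, but `get_running_loop()` is per-thread, so the worker thread always takes the `RuntimeError` branch.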
Prompt for agents
The broadcast_event function uses asyncio.get_running_loop() which fails in thread pool workers where all sync FastAPI route handlers execute. The fix needs to capture and store the event loop reference at module/startup time and use it later.
Approach 1 (recommended): Store the event loop at startup in the lifespan handler or when the first listener registers. Then in broadcast_event, use the stored loop reference with loop.call_soon_threadsafe instead of trying to get the running loop from the current thread.
Approach 2: Convert all mutation route handlers to async def so they run on the event loop thread directly, allowing get_running_loop() to succeed. This is a larger change and may not be desirable if the SQLite operations are blocking.
Key files: server/obsmcp_server/sse.py (broadcast_event), server/obsmcp_server/main.py (lifespan for loop capture). The fix in sse.py would look something like adding a module-level _loop variable set during register_listener (which is async and runs on the loop), then using _loop.call_soon_threadsafe in broadcast_event.
```python
def _has_created_at(table: str) -> bool:
    return table not in {"agent_configs", "code_atlas_files", "performance_logs", "sessions"}
```
🔴 list_scans crashes with 'no such column: created_at' because code_atlas_scans table is missing from _has_created_at exclusion set
The _has_created_at helper at server/obsmcp_server/routers/_helpers.py:81-82 excludes sessions, performance_logs, code_atlas_files, and agent_configs from the ORDER BY created_at DESC clause — but does not exclude code_atlas_scans. The code_atlas_scans schema (server/obsmcp_server/schema.sql:68-78) only has started_at, not created_at. When list_scans() at server/obsmcp_server/routers/code_atlas.py:118-121 calls list_rows("code_atlas_scans"), the generated SQL includes ORDER BY created_at DESC, which causes a SQLite OperationalError: no such column: created_at. This crashes the Code Atlas listing endpoint (GET /api/code-atlas) with a 500 error, breaking the entire Code Atlas page in the React dashboard.
Suggested change:

```diff
 def _has_created_at(table: str) -> bool:
-    return table not in {"agent_configs", "code_atlas_files", "performance_logs", "sessions"}
+    return table not in {"agent_configs", "code_atlas_files", "code_atlas_scans", "performance_logs", "sessions"}
```
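The error itself is easy to confirm in isolation; a minimal reproduction against an in-memory table with only `started_at`, mirroring the `code_atlas_scans` schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Mirrors code_atlas_scans: has started_at, no created_at column.
conn.execute("CREATE TABLE code_atlas_scans (id TEXT PRIMARY KEY, started_at TEXT)")

try:
    # The SQL list_rows() generates when the table is not in the exclusion set.
    conn.execute("SELECT * FROM code_atlas_scans ORDER BY created_at DESC")
except sqlite3.OperationalError as e:
    print(e)  # → no such column: created_at
```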
```python
@router.post("")
def create_project(body: ProjectCreate) -> dict[str, Any]:
    now = now_iso()
    data = {
        "id": body.id or new_id(),
        "name": body.name,
        "path": body.path,
        "repo_url": body.repo_url,
        "created_at": now,
        "updated_at": now,
    }
    return insert_row("projects", data)


@router.get("")
def list_projects() -> list[dict[str, Any]]:
    return list_rows("projects")


@router.get("/{project_id}")
def get_project(project_id: str) -> dict[str, Any]:
    return get_row("projects", project_id)


@router.put("/{project_id}")
def update_project(project_id: str, body: ProjectUpdate) -> dict[str, Any]:
    updates = body.model_dump(exclude_unset=True)
    updates["updated_at"] = now_iso()
    return update_row("projects", project_id, updates)


@router.delete("/{project_id}")
def delete_project(project_id: str) -> dict[str, Any]:
    delete_row("projects", project_id)
    return {"ok": True}
```
🔴 Projects router mutations do not emit SSE events, violating CONTRIBUTING.md rule
CONTRIBUTING.md states: "Every mutation must emit an SSE event via broadcast_event(...)." The create_project, update_project, and delete_project endpoints in server/obsmcp_server/routers/projects.py perform INSERT, UPDATE, and DELETE operations but never call broadcast_event(). The module doesn't even import broadcast_event from ..sse. This means project mutations are invisible to SSE/WebSocket listeners and the dashboard won't reflect project changes in real time.
Prompt for agents
The projects router at server/obsmcp_server/routers/projects.py needs to import broadcast_event from ..sse, then call it in each mutation endpoint:
- create_project should emit broadcast_event('project_created', row) after insert_row
- update_project should emit broadcast_event('project_updated', row) after update_row
- delete_project should emit broadcast_event('project_deleted', {'id': project_id}) after delete_row
Also add the corresponding event types to the EVENT_TO_QUERY mapping in frontend/src/events/EventBus.ts so the frontend invalidates the right query keys.
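A self-contained sketch of the emit-after-mutation pattern, with `insert_row`, `delete_row`, and `broadcast_event` stubbed out (the real helpers live in `_helpers.py` and `sse.py`; event names follow the prompt above):

```python
from typing import Any

EVENTS: list[tuple[str, dict[str, Any]]] = []


def broadcast_event(event_type: str, payload: dict[str, Any]) -> None:
    EVENTS.append((event_type, payload))  # stand-in: real impl pushes onto the SSE bus


def insert_row(table: str, data: dict[str, Any]) -> dict[str, Any]:
    return data  # stand-in: real helper executes the INSERT and returns the row


def delete_row(table: str, row_id: str) -> None:
    pass  # stand-in: real helper executes the DELETE


def create_project(data: dict[str, Any]) -> dict[str, Any]:
    row = insert_row("projects", data)
    broadcast_event("project_created", row)  # emit only after the write succeeds
    return row


def delete_project(project_id: str) -> dict[str, Any]:
    delete_row("projects", project_id)
    broadcast_event("project_deleted", {"id": project_id})
    return {"ok": True}
```

Emitting after the helper returns means a failed write (which raises) never produces a phantom event.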
```python
@router.post("/files")
def add_file(body: FileCreate) -> dict[str, Any]:
    data = {
        "id": new_id(),
        **body.model_dump(),
        "scanned_at": now_iso(),
    }
    row = insert_row("code_atlas_files", data, json_columns=("imports", "exports"))
    return row


@router.post("/files/bulk")
def add_files_bulk(body: FileBatch) -> dict[str, Any]:
    results = []
    for f in body.files:
        data = {
            "id": new_id(),
            **f.model_dump(),
            "scanned_at": now_iso(),
        }
        results.append(insert_row("code_atlas_files", data, json_columns=("imports", "exports")))
    return {"count": len(results)}
```
🔴 Code atlas file endpoints (add_file, add_files_bulk) do not emit SSE events, violating CONTRIBUTING.md rule
CONTRIBUTING.md states: "Every mutation must emit an SSE event via broadcast_event(...)." The add_file endpoint (server/obsmcp_server/routers/code_atlas.py:93-101) and add_files_bulk endpoint (server/obsmcp_server/routers/code_atlas.py:104-114) both perform INSERT operations on code_atlas_files but never call broadcast_event(). Other mutation endpoints in the same router (e.g., start_scan, update_scan) do emit events, making this an inconsistency.
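One hedged sketch of a fix for the bulk endpoint, emitting a single batch event rather than one event per row (the event name `atlas_files_added` is an assumption, not from the codebase; `broadcast_event` is stubbed):

```python
from typing import Any

EVENTS: list[tuple[str, dict[str, Any]]] = []


def broadcast_event(event_type: str, payload: dict[str, Any]) -> None:
    EVENTS.append((event_type, payload))  # stand-in for the real SSE bus


def add_files_bulk(files: list[dict[str, Any]]) -> dict[str, int]:
    # ... one insert_row("code_atlas_files", ...) per file, as in the snippet above ...
    # A single batch event keeps an N-file scan from flooding listeners with N
    # events; the dashboard needs only one cache invalidation for the whole batch.
    broadcast_event("atlas_files_added", {"count": len(files)})
    return {"count": len(files)}
```

Per-row events would also satisfy the CONTRIBUTING.md rule, but a batch event is kinder to the SSE bus during large Code Atlas scans.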
Summary
Ground-up rewrite of OBSMCP as a three-tier observable development system per the new architecture spec:
- `tool/obsmcp/` — Python 3.12+, dual-mode (standalone / cloud-sync), first-run setup wizard, stdio MCP server exposing 17 tools, session/git/file/perf monitors, Code Atlas scanner (~20 languages), knowledge graph builder, optional LLM semantic descriptions.
- `server/obsmcp_server/` — raw `sqlite3` (no ORM), 13-table schema, 10 routers with full CRUD, SSE bus at `/api/events`, WebSocket mirror at `/ws/dashboard`, Bearer-token middleware, `/healthz` and `/readyz`, runtime-discovery `/mode`, `/api/stats`.
- `frontend/` — React 18 + Vite + TanStack Query v5 + Tailwind + React Router + React Flow + Recharts. 10 pages: Dashboard, Tasks, Sessions, Blockers, Decisions, Work Logs, Code Atlas, Knowledge Graph, Performance Logs, Settings. Single SSE connection at App root → `EventBus` → query-key invalidation (no polling). Live/Offline indicator, Bearer-token in Settings.

Entry points
- `./start.sh` / `start.bat` → first-run wizard writes `~/.obsmcp/config.json` → `python -m obsmcp` → dashboard at http://localhost:8000, backed by `~/.obsmcp/data/obsmcp.db`. No server required.
- `obsmcp-server` (or `docker compose up`) serves the backend; the local tool mirrors every write to it in the background. Offline-resilient.
- `python -m obsmcp --mcp-stdio` plugs into Claude Desktop / Cursor / Claude Code via stdio.

Green locally
No CI is configured — this project intentionally runs all checks locally.
Tests
- `Authorization` header accepted, `?token=` query accepted
- `BackendClient` local SQLite writes for tasks + blockers

Not in this PR (by design)
- `tool/obsmcp/scanners/code_atlas.py`
- `tool/obsmcp/llm/semantic_descriptions.py`, but gated on `ANTHROPIC_API_KEY`

Notes
- `main` branch has the older OBSMCP codebase; this branch is a complete replacement.
- No `node_modules` are committed (`.gitignore` covers them).