Scaffold OBSMCP: local MCP tool + FastAPI backend + React dashboard #1

Open
nikzdevz wants to merge 3 commits into main from devin/scaffold-obsmcp

Conversation


@nikzdevz nikzdevz commented Apr 19, 2026

Summary

Ground-up rewrite of OBSMCP as a three-tier observable development system per the new architecture spec:

  1. Local MCP tool (tool/obsmcp/) — Python 3.12+, dual-mode (standalone / cloud-sync), first-run setup wizard, stdio MCP server exposing 17 tools, session/git/file/perf monitors, Code Atlas scanner (~20 languages), knowledge graph builder, optional LLM semantic descriptions.
  2. FastAPI backend (server/obsmcp_server/) — raw sqlite3 (no ORM), 13-table schema, 10 routers with full CRUD, SSE bus at /api/events (sketched after this list), WebSocket mirror at /ws/dashboard, Bearer-token middleware, /healthz /readyz /runtime-discovery /mode /api/stats.
  3. React dashboard (frontend/) — React 18 + Vite + TanStack Query v5 + Tailwind + React Router + React Flow + Recharts. 10 pages: Dashboard, Tasks, Sessions, Blockers, Decisions, Work Logs, Code Atlas, Knowledge Graph, Performance Logs, Settings. Single SSE connection at App root → EventBus → query-key invalidation (no polling). Live/Offline indicator, Bearer-token in Settings.
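
For orientation, the SSE bus in item 2 typically takes this shape in FastAPI (a minimal sketch: only the /api/events path comes from this PR; the queue wiring and names are assumptions, not the PR's code):

import asyncio
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.get("/api/events")
async def events() -> StreamingResponse:
    # One queue per connected dashboard; the real server would register
    # this queue with its event bus so mutations can be pushed into it.
    queue: asyncio.Queue = asyncio.Queue()

    async def stream():
        while True:
            item = await queue.get()
            # SSE wire format: a "data:" line terminated by a blank line.
            yield f"data: {json.dumps(item)}\n\n"

    return StreamingResponse(stream(), media_type="text/event-stream")

On the frontend side, a single EventSource subscribed to this route feeds the EventBus, which invalidates the matching TanStack Query keys instead of polling.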

Entry points

  • Standalone: ./start.sh / start.bat → first-run wizard writes ~/.obsmcp/config.json; python -m obsmcp → dashboard at http://localhost:8000, backed by ~/.obsmcp/data/obsmcp.db. No server required.
  • Cloud: obsmcp-server (or docker compose up) serves the backend; the local tool mirrors every write to it in the background. Offline-resilient.
  • MCP client: python -m obsmcp --mcp-stdio plugs into Claude Desktop / Cursor / Claude Code via stdio.

Green locally

ruff check .                                  → All checks passed!
pytest                                        → 16 passed (server + tool)
cd frontend && npm run typecheck && npm run build   → OK

No CI is configured — this project intentionally runs all checks locally.

Tests

  • Health / readiness / mode / runtime-discovery endpoints
  • Task CRUD + bulk + stats
  • Bearer-token auth: missing → 401, Authorization header accepted, ?token= query accepted (see the sketch after this list)
  • Knowledge graph nodes + edges + BFS query
  • Config load/save (standalone + cloud mode)
  • BackendClient local SQLite writes for tasks + blockers
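
The Bearer-token cases might look like this in pytest (a hedged sketch; the import path, endpoint, and token value are assumptions, not the PR's actual tests):

from fastapi.testclient import TestClient

from obsmcp_server.main import app  # assumed module path

client = TestClient(app)
TOKEN = "test-token"  # assumed to match the configured server token

def test_missing_token_is_401():
    assert client.get("/api/tasks").status_code == 401

def test_authorization_header_accepted():
    resp = client.get("/api/tasks", headers={"Authorization": f"Bearer {TOKEN}"})
    assert resp.status_code == 200

def test_query_token_accepted():
    assert client.get(f"/api/tasks?token={TOKEN}").status_code == 200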

Not in this PR (by design)

  • tree-sitter language parsing (regex heuristics are the scaffold; swap-in is isolated to tool/obsmcp/scanners/code_atlas.py)
  • SQLite backup rotation (flagged in README roadmap)
  • Anthropic calls are wired through tool/obsmcp/llm/semantic_descriptions.py but gated on ANTHROPIC_API_KEY
  • Optional GraphQL endpoint (spec marks optional)
  • GitHub Actions / CI workflow

Notes

  • The existing main branch has the older OBSMCP codebase; this branch is a complete replacement.
  • No secrets, DBs, or node_modules are committed (.gitignore covers them).

nikzdevz added 3 commits April 19, 2026 14:15
Ground-up rewrite of OBSMCP as three-tier observable dev system:
- tool/obsmcp: Python MCP stdio tool, dual-mode (standalone/cloud), monitors, scanners, graph builder
- server/obsmcp_server: FastAPI + raw sqlite3, SSE bus, WebSocket, Bearer auth, 10 routers
- frontend: React 18 + Vite + TanStack Query + Tailwind, 10 pages, SSE-driven cache invalidation

Local checks green: ruff, pytest (16), npm typecheck + build.

@devin-ai-integration (bot) left a comment


Devin Review found 4 potential issues.

View 6 additional findings in Devin Review.


Comment on lines +48 to +52
try:
    loop = asyncio.get_running_loop()
except RuntimeError:
    # No loop in this thread; silently drop (not fatal).
    return


🔴 broadcast_event silently drops all SSE events because sync route handlers run in a thread pool with no event loop

broadcast_event() calls asyncio.get_running_loop() at server/obsmcp_server/sse.py:49 and silently returns on RuntimeError. Every mutation handler in the codebase (tasks, sessions, blockers, decisions, work_logs, code_atlas, knowledge_graph, performance_logs, agents) is a sync def, not async def. FastAPI runs sync handlers in a worker thread pool where there is no running asyncio event loop, so get_running_loop() always raises RuntimeError and the event is silently dropped. This means the entire SSE live-update system and WebSocket mirror are non-functional — the React dashboard will never receive real-time mutation events, and the EventBus in frontend/src/events/EventBus.ts will never invalidate TanStack Query caches.

Affected call sites (all silently fail)

Every broadcast_event(...) call in every router: tasks.py:68,98,105,117,121, sessions.py:41,70,94, blockers.py:40,73,80, decisions.py:41,61,68, work_logs.py:40,67,74, code_atlas.py:57,72, knowledge_graph.py:68,82,104,112,122,137,164, performance_logs.py:42, agents.py:48.

Prompt for agents
The broadcast_event function uses asyncio.get_running_loop() which fails in thread pool workers where all sync FastAPI route handlers execute. The fix needs to capture and store the event loop reference at module/startup time and use it later.

Approach 1 (recommended): Store the event loop at startup in the lifespan handler or when the first listener registers. Then in broadcast_event, use the stored loop reference with loop.call_soon_threadsafe instead of trying to get the running loop from the current thread.

Approach 2: Convert all mutation route handlers to async def so they run on the event loop thread directly, allowing get_running_loop() to succeed. This is a larger change and may not be desirable if the SQLite operations are blocking.

Key files: server/obsmcp_server/sse.py (broadcast_event), server/obsmcp_server/main.py (lifespan for loop capture). The fix in sse.py would look something like adding a module-level _loop variable set during register_listener (which is async and runs on the loop), then using _loop.call_soon_threadsafe in broadcast_event.
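
A minimal sketch of Approach 1 (the listener-queue bookkeeping is assumed; only broadcast_event and register_listener are named above):

import asyncio
from typing import Any

_loop: asyncio.AbstractEventLoop | None = None
_listeners: set[asyncio.Queue] = set()

async def register_listener() -> asyncio.Queue:
    # Runs on the event loop thread, so capture the loop here for later
    # thread-safe use from sync route handlers.
    global _loop
    _loop = asyncio.get_running_loop()
    queue: asyncio.Queue = asyncio.Queue()
    _listeners.add(queue)
    return queue

def broadcast_event(event_type: str, payload: Any) -> None:
    # Sync handlers run in FastAPI's thread pool, where get_running_loop()
    # raises RuntimeError; schedule delivery onto the captured loop instead.
    if _loop is None:
        return  # nothing registered yet, so there is no one to notify
    message = {"type": event_type, "data": payload}
    for queue in _listeners:
        _loop.call_soon_threadsafe(queue.put_nowait, message)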


Comment on lines +81 to +82
def _has_created_at(table: str) -> bool:
    return table not in {"agent_configs", "code_atlas_files", "performance_logs", "sessions"}


🔴 list_scans crashes with 'no such column: created_at' because code_atlas_scans table is missing from _has_created_at exclusion set

The _has_created_at helper at server/obsmcp_server/routers/_helpers.py:81-82 excludes sessions, performance_logs, code_atlas_files, and agent_configs from the ORDER BY created_at DESC clause — but does not exclude code_atlas_scans. The code_atlas_scans schema (server/obsmcp_server/schema.sql:68-78) only has started_at, not created_at. When list_scans() at server/obsmcp_server/routers/code_atlas.py:118-121 calls list_rows("code_atlas_scans"), the generated SQL includes ORDER BY created_at DESC, which causes a SQLite OperationalError: no such column: created_at. This crashes the Code Atlas listing endpoint (GET /api/code-atlas) with a 500 error, breaking the entire Code Atlas page in the React dashboard.

Suggested change
-def _has_created_at(table: str) -> bool:
-    return table not in {"agent_configs", "code_atlas_files", "performance_logs", "sessions"}
+def _has_created_at(table: str) -> bool:
+    return table not in {"agent_configs", "code_atlas_files", "code_atlas_scans", "performance_logs", "sessions"}


Comment on lines +28 to +62
@router.post("")
def create_project(body: ProjectCreate) -> dict[str, Any]:
now = now_iso()
data = {
"id": body.id or new_id(),
"name": body.name,
"path": body.path,
"repo_url": body.repo_url,
"created_at": now,
"updated_at": now,
}
return insert_row("projects", data)


@router.get("")
def list_projects() -> list[dict[str, Any]]:
return list_rows("projects")


@router.get("/{project_id}")
def get_project(project_id: str) -> dict[str, Any]:
return get_row("projects", project_id)


@router.put("/{project_id}")
def update_project(project_id: str, body: ProjectUpdate) -> dict[str, Any]:
updates = body.model_dump(exclude_unset=True)
updates["updated_at"] = now_iso()
return update_row("projects", project_id, updates)


@router.delete("/{project_id}")
def delete_project(project_id: str) -> dict[str, Any]:
delete_row("projects", project_id)
return {"ok": True}


🔴 Projects router mutations do not emit SSE events, violating CONTRIBUTING.md rule

CONTRIBUTING.md states: "Every mutation must emit an SSE event via broadcast_event(...)." The create_project, update_project, and delete_project endpoints in server/obsmcp_server/routers/projects.py perform INSERT, UPDATE, and DELETE operations but never call broadcast_event(). The module doesn't even import broadcast_event from ..sse. This means project mutations are invisible to SSE/WebSocket listeners and the dashboard won't reflect project changes in real time.

Prompt for agents
The projects router at server/obsmcp_server/routers/projects.py needs to import broadcast_event from ..sse, then call it in each mutation endpoint:
- create_project should emit broadcast_event('project_created', row) after insert_row
- update_project should emit broadcast_event('project_updated', row) after update_row
- delete_project should emit broadcast_event('project_deleted', {'id': project_id}) after delete_row

Also add the corresponding event types to the EVENT_TO_QUERY mapping in frontend/src/events/EventBus.ts so the frontend invalidates the right query keys.
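
Applied to the snippet above, the fix would look roughly like this (event names follow the prompt; everything else reuses the router's existing helpers):

from ..sse import broadcast_event

@router.post("")
def create_project(body: ProjectCreate) -> dict[str, Any]:
    now = now_iso()
    data = {
        "id": body.id or new_id(),
        "name": body.name,
        "path": body.path,
        "repo_url": body.repo_url,
        "created_at": now,
        "updated_at": now,
    }
    row = insert_row("projects", data)
    broadcast_event("project_created", row)  # emit only after the write succeeds
    return row

@router.delete("/{project_id}")
def delete_project(project_id: str) -> dict[str, Any]:
    delete_row("projects", project_id)
    broadcast_event("project_deleted", {"id": project_id})
    return {"ok": True}

update_project follows the same pattern with "project_updated" and the updated row.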


Comment on lines +93 to +114
@router.post("/files")
def add_file(body: FileCreate) -> dict[str, Any]:
data = {
"id": new_id(),
**body.model_dump(),
"scanned_at": now_iso(),
}
row = insert_row("code_atlas_files", data, json_columns=("imports", "exports"))
return row


@router.post("/files/bulk")
def add_files_bulk(body: FileBatch) -> dict[str, Any]:
results = []
for f in body.files:
data = {
"id": new_id(),
**f.model_dump(),
"scanned_at": now_iso(),
}
results.append(insert_row("code_atlas_files", data, json_columns=("imports", "exports")))
return {"count": len(results)}


🔴 Code atlas file endpoints (add_file, add_files_bulk) do not emit SSE events, violating CONTRIBUTING.md rule

CONTRIBUTING.md states: "Every mutation must emit an SSE event via broadcast_event(...)." The add_file endpoint (server/obsmcp_server/routers/code_atlas.py:93-101) and add_files_bulk endpoint (server/obsmcp_server/routers/code_atlas.py:104-114) both perform INSERT operations on code_atlas_files but never call broadcast_event(). Other mutation endpoints in the same router (e.g., start_scan, update_scan) do emit events, making this an inconsistency.
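
One way to bring both endpoints in line (the event names here are assumptions, since the finding does not prescribe them):

from ..sse import broadcast_event

@router.post("/files")
def add_file(body: FileCreate) -> dict[str, Any]:
    data = {"id": new_id(), **body.model_dump(), "scanned_at": now_iso()}
    row = insert_row("code_atlas_files", data, json_columns=("imports", "exports"))
    broadcast_event("atlas_file_added", row)
    return row

@router.post("/files/bulk")
def add_files_bulk(body: FileBatch) -> dict[str, Any]:
    results = []
    for f in body.files:
        data = {"id": new_id(), **f.model_dump(), "scanned_at": now_iso()}
        results.append(insert_row("code_atlas_files", data, json_columns=("imports", "exports")))
    # A single batch event keeps SSE traffic bounded during large scans.
    broadcast_event("atlas_files_added", {"count": len(results)})
    return {"count": len(results)}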


