feat: --auto-watch flag to embed watcher inside MCP server process#130

Open
Dhruv-Darji wants to merge 3 commits into tirth8205:main from Dhruv-Darji:feat/auto-watch-mcp-server

Conversation


Dhruv-Darji commented Apr 7, 2026

This pull request adds robust offline daemon support and improves the CLI and server integration for the code review graph tool. The main enhancements include a new daemon command for managing a background filesystem watcher, improved serve/mcp command wiring with optional auto-watch, and extensive internal refactoring for process management and status reporting. Comprehensive tests for the new CLI commands and daemon helpers are also included.

CLI and Daemon Command Enhancements:

  • Added a new daemon command to the CLI for starting, stopping, and checking the status of an offline watch daemon, including support for foreground/background operation and status metadata. (code_review_graph/cli.py, code_review_graph/incremental.py)
  • Updated the CLI to support serve and mcp commands with an --auto-watch flag, allowing the MCP server to run alongside a background file watcher. (code_review_graph/cli.py, code_review_graph/main.py)

Daemon Process Management and Watcher Refactoring:

  • Implemented process management helpers for daemon lifecycle: PID/lock/log file handling, cross-platform process checks, and clean startup/shutdown logic. (code_review_graph/incremental.py)
  • Refactored the watcher to enforce a single-writer lock using PID files, with robust acquisition/release and error handling. (code_review_graph/incremental.py)
  • Added support for running the watcher in a background thread for integration with the MCP server, and provided helpers for starting/stopping/status reporting of the daemon. (code_review_graph/incremental.py, code_review_graph/main.py)
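The PID-file single-writer lock described above could plausibly be implemented along these lines (a minimal sketch; the function names and the `.code-review-graph/watch.lock` path are assumptions for illustration, not the PR's actual code):

```python
import os

LOCK_PATH = ".code-review-graph/watch.lock"  # assumed location

def acquire_watch_lock(lock_path: str = LOCK_PATH) -> bool:
    """Atomically create a PID lock file; False means another watcher holds it."""
    parent = os.path.dirname(lock_path)
    if parent:
        os.makedirs(parent, exist_ok=True)
    try:
        # O_CREAT | O_EXCL makes creation atomic: exactly one process wins,
        # with no check-then-create race.
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    with os.fdopen(fd, "w") as f:
        f.write(str(os.getpid()))  # recorded PID enables stale-lock detection
    return True

def release_watch_lock(lock_path: str = LOCK_PATH) -> None:
    """Remove the lock file; tolerate it already being gone."""
    try:
        os.remove(lock_path)
    except FileNotFoundError:
        pass
```

The `O_EXCL` create is the key design choice: it pushes mutual exclusion into the filesystem instead of a racy exists-then-create sequence.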

Testing Improvements:

  • Added new tests for CLI command wiring, including serve, mcp, and all daemon subcommands, as well as tests for the new daemon helpers and lock acquisition logic. (tests/test_cli.py, tests/test_incremental.py)

These changes make the tool more robust for offline and background operation, and provide a more user-friendly and scriptable interface for managing the file watcher and MCP server.
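The background-thread integration with the MCP server could ensure lock release with a try/finally wrapper around the watch loop (a sketch; `watch_fn`, `start_watch_thread`, and the wiring here are illustrative stand-ins, not the diff's actual code):

```python
import os
import threading

def start_watch_thread(watch_fn, lock_path: str) -> threading.Thread:
    """Run the watcher in a daemon thread, releasing the lock file on exit.

    The finally block covers normal return and exceptions; a SIGKILL still
    bypasses it, so stale-lock detection via the recorded PID stays necessary.
    """
    def run():
        try:
            watch_fn()
        finally:
            try:
                os.remove(lock_path)
            except FileNotFoundError:
                pass

    thread = threading.Thread(target=run, daemon=True, name="watch-daemon")
    thread.start()
    return thread
```

Marking the thread as a daemon thread lets the MCP server exit without joining it, while the finally clause handles the common shutdown paths.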

tirth8205 (Owner) left a comment

Interesting feature — embedding a file watcher inside the MCP server process. This is a significant addition (+657 lines) with PID/lock file management and daemon lifecycle.

A few concerns:

  1. Cross-platform PID handling needs careful testing (Windows, WSL).
  2. Lock file cleanup on crash — what happens if the process dies without cleaning up?
  3. This overlaps with the existing watch command — how do they interact?

Please rebase on latest main and address these questions. We'll give it a thorough review once it's up to date.

@Dhruv-Darji Dhruv-Darji force-pushed the feat/auto-watch-mcp-server branch from 00a8c74 to efdf7f1 Compare April 9, 2026 05:25
@Dhruv-Darji Dhruv-Darji requested a review from tirth8205 April 9, 2026 05:32
tirth8205 (Owner) commented

Review: PR #130 — feat: --auto-watch flag and daemon command

Needs rebase and addressing open questions before merge.

Code quality observations:

  1. Cross-platform PID handling: The Windows path in _is_pid_running() uses subprocess.run(['tasklist', ...]) — this will fail on WSL (where Windows processes are not visible from Linux). The POSIX path with os.kill(pid, 0) is more portable and correct for most environments. Consider adding a WSL detection check.

  2. Lock file race: _acquire_watch_lock uses O_CREAT | O_EXCL which is correct for atomic creation. However, checking if the existing lock's PID is still running and then unlinking it is a TOCTOU race — another process could acquire the lock between the check and the unlink. This is low-risk in practice but worth noting.

  3. Daemon log file: The daemon writes to watch-daemon.log in .code-review-graph/. This file is not cleaned up on daemon stop. Consider truncating or rotating on start.

  4. Interaction with existing watch command: The PR raises RuntimeError if a daemon is already running when watch is called directly. This is the right behavior, but the error message should mention the specific lock file path so users know exactly what to clean up if the lock is stale.

  5. Thread safety: _watch_background_thread runs the watcher in a daemon thread inside the MCP server. If the MCP server exits without clean shutdown (e.g., SIGKILL), the thread dies without releasing the lock file. Consider using a try/finally to release the lock on thread exit.

  6. Tests: The new tests check CLI wiring but not the lock acquisition / release logic under concurrent access. At minimum, add a test that two concurrent lock acquisitions fail correctly.
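For point 1, a portable liveness check could prefer `os.kill(pid, 0)` everywhere except native Windows, with `/proc/version` as one possible WSL heuristic (a sketch; the names are illustrative and the WSL detection is an assumption, not something verified across environments):

```python
import os
import platform
import subprocess

def _is_wsl() -> bool:
    """Heuristic: WSL kernels report 'microsoft' in /proc/version."""
    try:
        with open("/proc/version") as f:
            return "microsoft" in f.read().lower()
    except OSError:
        return False

def is_pid_running(pid: int) -> bool:
    # POSIX path (including WSL, where Windows' tasklist is unavailable):
    # signal 0 probes for existence without delivering anything.
    if platform.system() != "Windows" or _is_wsl():
        try:
            os.kill(pid, 0)
            return True
        except ProcessLookupError:
            return False
        except PermissionError:
            return True  # process exists but is owned by another user
    # Native Windows: fall back to tasklist.
    result = subprocess.run(
        ["tasklist", "/FI", f"PID eq {pid}"],
        capture_output=True, text=True,
    )
    return str(pid) in result.stdout
```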

Please rebase on latest main and address the open questions from the owner's review (cross-platform PID handling, lock cleanup on crash, interaction with watch command). Once rebased and CI passes, this is a good addition.
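For point 6, a concurrency test does not need real daemon processes; two threads racing an `O_EXCL` create are enough to exercise the single-winner invariant (a sketch using a stand-in `try_lock`; adapt to the PR's actual helper names):

```python
import os
import tempfile
import threading

def try_lock(lock_path: str) -> bool:
    """Stand-in for the watcher's atomic lock acquisition."""
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    with os.fdopen(fd, "w") as f:
        f.write(str(os.getpid()))
    return True

def test_concurrent_lock_acquisition():
    lock_path = os.path.join(tempfile.mkdtemp(), "watch.lock")
    results = []
    barrier = threading.Barrier(2)  # line both contenders up before racing

    def contender():
        barrier.wait()
        results.append(try_lock(lock_path))

    threads = [threading.Thread(target=contender) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Exactly one contender may win; the other must see the lock held.
    assert sorted(results) == [False, True]
```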

tirth8205 (Owner) commented

Status: BEHIND but not conflicting — this PR is mergeable but behind main. However, it should not be merged until the open questions from the owner's review are addressed: (1) cross-platform PID handling (WSL), (2) lock file cleanup on crash (try/finally in background thread), (3) clarification of interaction with the existing watch command. Please address these and rebase to bring the branch current with main.
