Local-first AI workbench for operators who want execution control.
ProWorkBench is a local orchestration shell for:
- WebChat (operator control plane)
- Tools (filesystem/process operations)
- MCP (browser/media automation)
- Doctor (checks and remediation guidance)
Execution is operator-mediated: no hidden background tool execution, no silent approvals.
- Not a hosted SaaS.
- Not a no-ops “auto-pilot” agent.
- Not a replacement for your model server. You bring your own OpenAI-compatible backend (commonly Text Generation WebUI).
Linux/macOS:

```
cd /home/jamiegrl100/Apps/proworkbench
npm install
npm run dev
```

Windows:

```
cd C:\path\to\proworkbench
npm install
npm run dev
```

Then:
- Open http://127.0.0.1:5173
- Open Doctor and run checks
- Connect your model server (http://127.0.0.1:5000) and run your first preset
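Any OpenAI-compatible backend is addressed through the standard chat-completions request shape. A quick sketch of what that request looks like — the base URL matches the default above, but the endpoint path and model name are assumptions to adjust for your server:

```typescript
// Build a standard OpenAI-compatible chat-completions request.
// The model name ("local-model") is a placeholder for whatever your server loads.
const baseUrl = "http://127.0.0.1:5000";

function buildChatRequest(model: string, prompt: string) {
  return {
    url: `${baseUrl}/v1/chat/completions`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// Usage, against a running backend:
//   const { url, init } = buildChatRequest("local-model", "Hello");
//   const res = await fetch(url, init);
```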
```
Web UI (5173) -> PB Server (8787) -> Model API (OpenAI-compatible)
                       |-> Tools runtime
                       |-> MCP runtime
                       |-> SQLite + local state
```
- Local-first state and logs
- Approvals queue and policy hooks
- Doctor diagnostics and guided remediation
- MCP template/build/test/install flow
- WebChat route control (direct vs browse flow)
- Workflow triage states (`new`, `needs_review`, `done`, `ignored`)
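The triage states above naturally form a small state machine. One plausible transition table — illustrative, not the shipped implementation:

```typescript
// Illustrative triage state machine for workflow items.
type TriageState = "new" | "needs_review" | "done" | "ignored";

// Assumed transitions: new items get reviewed or ignored; reviewed items
// are resolved or ignored; resolved/ignored items can be reopened for review.
const transitions: Record<TriageState, TriageState[]> = {
  new: ["needs_review", "ignored"],
  needs_review: ["done", "ignored"],
  done: ["needs_review"],
  ignored: ["needs_review"],
};

function canTransition(from: TriageState, to: TriageState): boolean {
  return transitions[from].includes(to);
}
```

Modeling the states as a union type plus an explicit transition map keeps invalid moves (e.g. `new` straight to `done`) out by construction.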
- No model is bundled: connect your own OpenAI-compatible server.
- The core runtime is local-first; internet use depends on the integrations/tools you enable.
- Data lives as local app data plus SQLite in your configured data directory.
- By policy design, channel behavior can be restricted; keep execution on WebChat for operator visibility.
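A channel-restriction policy hook can be as small as a predicate over the originating channel. A sketch under assumed names (the `Policy` interface and channel labels here are hypothetical):

```typescript
// Illustrative policy hook: restrict tool execution to the WebChat channel.
type Channel = "webchat" | "mcp" | "background";

interface Policy {
  allowExecution(channel: Channel): boolean;
}

// Default-deny everywhere except WebChat, where the operator can see
// and mediate every execution.
const webchatOnly: Policy = {
  allowExecution: (channel) => channel === "webchat",
};
```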
- Installer packaging (Windows/macOS/Linux)
- Projects tab hardening
- Submission Assistant stabilization
- Signed release artifacts
If a roadmap item is not implemented yet, track it as a GitHub issue rather than claiming it as shipped.
- Website: https://proworkbench.com
- GitHub Issues: https://github.com/jamiegrl100/proworkbench/issues
- Discussions: https://github.com/jamiegrl100/proworkbench/discussions
- Releases: https://github.com/jamiegrl100/proworkbench/releases



