Fix your writing anywhere. No cloud required.
Cross-platform AI text correction from the menu bar
| Feature | GhostEdit | Grammarly | LanguageTool | Apple Writing Tools |
|---|---|---|---|---|
| Works in any app | Yes (system-wide) | Browser + select apps | Browser + select apps | Yes |
| Offline mode | Yes (built-in T5 model) | No | Server only | Yes |
| Multi-provider AI | Claude, GPT, Gemini, local | Grammarly AI only | LanguageTool only | Apple AI only |
| Open source | MIT | No | LGPL | No |
| Price | Free | $12/mo | $5/mo (premium) | Free (Apple only) |
- Works in every app -- System-wide hotkey correction, not a browser extension
- Offline-first AI -- Built-in T5 grammar model runs on-device, no API keys needed
- Multi-provider -- Switch between local AI, Claude, GPT, or Gemini in one click
- Real-time monitoring -- Traffic light indicator shows writing quality as you type
- Developer-friendly -- Preserves code, URLs, @mentions, :emoji:, and file paths
- 13 languages + tone presets -- Auto-detect language, choose from 5 writing styles
- Download the latest release from GitHub Releases
- Open GhostEdit -- it lives in your menu bar
- Select text anywhere and press `Cmd+Shift+E` (Mac) or `Ctrl+Shift+E` (Win/Linux)
- Features
- Requirements
- Installation
- Quick Start
- Usage
- Configuration
- AI Providers
- Dictionary Engine
- Architecture
- Development
- Testing
- Building & Packaging
- Project Structure
- Troubleshooting
- Global hotkey — Press `Cmd+Shift+E` (Mac) or `Ctrl+Shift+E` (Win/Linux) from any app
- Offline-first — Built-in T5 grammar model runs entirely on-device, no API keys needed
- Cloud providers — Claude, Codex (OpenAI), and Gemini via their CLI tools
- Dictionary pre-pass — Harper.js (grammar) + nspell (spelling) fix obvious errors before the AI sees the text, making corrections faster and cheaper
- Dictionary polish — Same engine runs again on model output to catch any remaining issues
- Diff preview — Side-by-side streaming diff before accepting changes (Tab to accept, Esc to cancel)
- Token preservation — URLs, @mentions, `:emoji:`, `code`, email addresses, and file paths are never modified
- Tone presets — Default, Casual, Professional, Academic, and Slack styles
- 13 languages — Auto-detect or specify: English, Spanish, French, German, Italian, Portuguese, Japanese, Korean, Chinese, Russian, Arabic, Hindi
- Correction history — Browse, search, and review past corrections
- Correction cache — Identical inputs skip the AI entirely
- GPU acceleration — DirectML (Windows), CUDA (Linux), WebGPU (macOS) for fast local inference
- 4 model variants — Q4F16 (210 MB, bundled), INT8, FP16, FP32 (downloadable)
- System tray — Runs silently in the menu bar with no dock icon (macOS)
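The token-preservation feature above can be sketched as a mask-and-restore pass. This is an illustrative simplification: the regexes and placeholder scheme here are assumptions, not the actual `token-preservation.ts` code.

```typescript
// Simplified token-preservation sketch: mask URLs, @mentions,
// :emoji: shortcodes, and `code` spans before correction, restore after.
// The patterns and placeholder format are illustrative assumptions.
const PATTERNS = [
  /https?:\/\/\S+/g, // URLs
  /@[\w-]+/g,        // @mentions
  /:[a-z0-9_+-]+:/g, // :emoji: shortcodes
  /`[^`]+`/g,        // inline code
];

function protect(text: string): { masked: string; tokens: string[] } {
  const tokens: string[] = [];
  let masked = text;
  for (const re of PATTERNS) {
    masked = masked.replace(re, (m) => {
      tokens.push(m);
      return `\u0000${tokens.length - 1}\u0000`; // opaque placeholder
    });
  }
  return { masked, tokens };
}

function restore(masked: string, tokens: string[]): string {
  return masked.replace(/\u0000(\d+)\u0000/g, (_, i) => tokens[Number(i)]);
}

// Round trip: protected spans survive the correction step untouched.
const { masked, tokens } = protect("ping @alice at https://example.com :tada:");
console.log(restore(masked, tokens)); // original text back
```

The correction step only ever sees the masked string, so the model cannot "fix" a URL or emoji shortcode.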
- Node.js >= 18
- npm >= 9
- Operating System: macOS 12+, Windows 10+, or Linux (x64/arm64)
- Accessibility permissions (macOS): Required for keyboard simulation — System Settings > Privacy & Security > Accessibility > enable GhostEdit
| Provider | CLI Tool | Install |
|---|---|---|
| Claude | `claude` | claude.ai/cli |
| Codex | `codex` | `npm install -g @openai/codex` |
| Gemini | `gemini` | ai.google.dev/gemini-api/docs/cli |
Cloud providers are not required — the built-in local model works offline with zero setup.
```bash
git clone https://github.com/nareshnavinash/ghostedit-electron.git
cd ghostedit-electron
npm install
```

The Q4F16 model variant (~210 MB) is bundled for offline use:

```bash
npm run download-model
```

This downloads the T5 grammar correction model to `resources/models/` for packaging.
```bash
# Start the app in development mode
npm start
```

On first launch, a 3-step onboarding wizard will guide you through:
- Choose a provider — Local (offline) or a cloud CLI
- Set the hotkey — Default is `Cmd+Shift+E` / `Ctrl+Shift+E`
- Grant permissions — On macOS, allow Accessibility access when prompted
After setup:
- Select text in any application
- Press the hotkey
- A HUD overlay shows "Working..."
- The corrected text replaces your selection (or appears in a diff preview)
- Select text in any application (editor, browser, Slack, email, etc.)
- Press the hotkey (`Cmd+Shift+E` by default)
- GhostEdit captures the selection, corrects it, and pastes the result back
When Show diff preview is enabled (default):
- A preview window opens showing original vs. corrected text side-by-side
- Insertions are highlighted in green, deletions in red
- Press Tab to accept, Escape to cancel, R to regenerate
When clipboard-only mode is enabled, corrected text is copied to the clipboard instead of being pasted back. This is useful when paste simulation doesn't work in a specific app.
Right-click (or click on macOS) the tray icon to access:
- Correct Selected Text — Trigger correction manually
- Settings... — Open the configuration window
- History... — Browse past corrections
- Quit GhostEdit — Exit the app
All settings are stored in ~/.ghostedit/config.json and editable through the Settings window.
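As an illustration of what such a config might hold, here is a hypothetical schema. Every field name below is an assumption inferred from the settings listed in this section, not the real schema:

```typescript
// Hypothetical shape of ~/.ghostedit/config.json. Field names are
// illustrative assumptions; consult the Settings window or source
// for the real schema.
interface GhostEditConfig {
  provider: "local" | "claude" | "codex" | "gemini";
  model: string;    // local variant or cloud model id
  language: string; // "auto" or a specific language
  tone: "default" | "casual" | "professional" | "academic" | "slack";
  hotkey: string;   // Electron accelerator string
  timeoutMs: number;
}

const example: GhostEditConfig = {
  provider: "local",
  model: "q4f16",
  language: "auto",
  tone: "default",
  hotkey: "CommandOrControl+Shift+E",
  timeoutMs: 30000,
};
```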
General — Provider, model, CLI path, language, tone, timeout
Hotkey — Record a custom global keyboard shortcut
Behavior — Toggle features:
| Setting | Description | Default |
|---|---|---|
| Fast correction mode | Greedy decoding for local model (faster, slight quality trade-off) | On |
| Clipboard-only mode | Copy to clipboard instead of pasting back | Off |
| Show diff preview | Side-by-side preview before applying | On |
| Sound feedback | Play sound on completion | On |
| Notify on success | System notification on correction | Off |
| Developer mode | Show inference device info in tray and settings | Off |
| History limit | Max stored corrections | 50 |
| Preset | Style |
|---|---|
| Default | Standard grammar/spelling correction |
| Casual | Friendly, conversational, keeps contractions |
| Professional | Polished business communication |
| Academic | Formal vocabulary, precise structure |
| Slack | Concise, upbeat, preserves emoji and abbreviations |
Create ~/.ghostedit/prompt.txt to override the default system prompt. The file is read on each correction.
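A sketch of how such an override could be read on each correction; the default-prompt text is a stand-in, and the fallback behavior is assumed:

```typescript
import { readFileSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

// Stand-in default; the real default prompt lives in the app source.
const DEFAULT_PROMPT = "Correct the grammar and spelling of the following text.";

// Re-read on every correction, matching the behavior described above,
// so edits to prompt.txt take effect without restarting the app.
function loadSystemPrompt(): string {
  const path = join(homedir(), ".ghostedit", "prompt.txt");
  try {
    const custom = readFileSync(path, "utf8").trim();
    return custom.length > 0 ? custom : DEFAULT_PROMPT;
  } catch {
    return DEFAULT_PROMPT; // file absent: fall back to the default
  }
}
```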
| File | Purpose |
|---|---|
| `~/.ghostedit/config.json` | Application settings |
| `~/.ghostedit/history.json` | Correction history |
| `~/.ghostedit/prompt.txt` | Custom system prompt (optional) |
| `~/.ghostedit/models/` | Downloaded model variants |
| `~/.ghostedit/device-cache.json` | Cached GPU/CPU detection result |
The built-in model is Xenova/t5-base-grammar-correction, a T5 model fine-tuned for grammar correction, running via ONNX Runtime through @huggingface/transformers.
Model variants (downloadable from Settings):
| Variant | Size | Quality | Speed |
|---|---|---|---|
| Q4F16 | 210 MB | Good | Fastest |
| INT8 | 250 MB | Better | Fast |
| FP16 | 496 MB | High | Moderate |
| FP32 | 963 MB | Best | Slowest |
Q4F16 is bundled with the app. Other variants can be downloaded from the Settings window.
GPU acceleration is automatic:
| Platform | Primary Device | Fallback |
|---|---|---|
| macOS (Apple Silicon) | WebGPU (Metal) | WASM (CPU) |
| Windows (any GPU) | DirectML | WebGPU / WASM |
| Linux x64 (NVIDIA) | CUDA | WebGPU / WASM |
| Linux arm64 | WebGPU | WASM (CPU) |
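The table above can be read as an ordered preference list per platform. A simplified sketch follows; probing and caching are omitted, and this is not the actual `device-selector.ts` logic:

```typescript
type Device = "directml" | "cuda" | "webgpu" | "wasm";

// Simplified mirror of the device table: the platform's primary
// backend first, then its fallbacks. GPU probing is omitted.
function devicePreference(platform: string, arch: string, hasNvidia: boolean): Device[] {
  if (platform === "win32") return ["directml", "webgpu", "wasm"];
  if (platform === "linux" && arch === "x64" && hasNvidia) return ["cuda", "webgpu", "wasm"];
  if (platform === "linux") return ["webgpu", "wasm"];
  return ["webgpu", "wasm"]; // macOS: WebGPU (Metal) with WASM fallback
}
```

The caller would try each device in order and settle on the first one that initializes, caching the result (as `device-cache.json` suggests).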
Cloud providers use their respective CLI tools, spawned as subprocesses. You must install and authenticate the CLI separately.
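A minimal sketch of spawning a CLI and collecting its output follows. Passing the text on stdin and the generic argument handling are assumptions for illustration; the real `cli-runner.ts` resolves per-provider arguments and paths:

```typescript
import { spawn } from "node:child_process";

// Sketch: run a provider CLI as a subprocess and collect stdout.
// Feeding the text via stdin is an illustrative assumption, not
// necessarily how GhostEdit invokes each provider.
function runCli(command: string, args: string[], input: string, timeoutMs = 30_000): Promise<string> {
  return new Promise((resolve, reject) => {
    const child = spawn(command, args);
    let out = "";
    const timer = setTimeout(() => {
      child.kill();
      reject(new Error("CLI timed out"));
    }, timeoutMs);
    child.stdout.on("data", (chunk) => { out += chunk; });
    child.on("error", (err) => { clearTimeout(timer); reject(err); }); // e.g. CLI not installed
    child.on("close", (code) => {
      clearTimeout(timer);
      code === 0 ? resolve(out.trim()) : reject(new Error(`CLI exited with code ${code}`));
    });
    child.stdin.end(input);
  });
}
```

The `error` event fires when the binary is missing, which is why GhostEdit can fall back to a user-configured CLI path in Settings.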
**Claude**

```bash
# Install
# Visit https://claude.ai/cli

# Authenticate
claude auth login

# GhostEdit auto-detects the CLI, or set the path in Settings
```

**Codex (OpenAI)**

```bash
npm install -g @openai/codex
codex auth
```

**Gemini**

```bash
# Install from https://ai.google.dev/gemini-api/docs/cli
gemini auth
```

GhostEdit includes a two-layer dictionary engine that mirrors the approach from the companion macOS-native app:
- Harper.js (primary) — Rust-powered grammar checker compiled to WASM. Catches spelling, grammar, capitalization, and style issues with suggestions.
- nspell (secondary) — JavaScript port of Hunspell with a full English dictionary. Fills non-overlapping gaps that Harper misses.
- Merge strategy — All Harper issues are kept. nspell issues are added only if they don't overlap with any Harper issue. Proper nouns (capitalized mid-sentence) and acronyms (2+ uppercase letters) are filtered out.
- Iterative passes — The engine runs up to 3 passes. Fixing one error can reveal another (e.g., fixing punctuation may expose a grammar error). Stops early when no more changes are found.
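The merge step described above can be sketched as interval-overlap filtering; the proper-noun and acronym filters are omitted here for brevity:

```typescript
interface Issue {
  start: number;      // character offset where the issue begins
  end: number;        // character offset just past the issue
  suggestion: string; // proposed replacement
  source: "harper" | "nspell";
}

// Keep every Harper issue; admit an nspell issue only if its span
// does not overlap any Harper issue (simplified merge strategy).
function mergeIssues(harper: Issue[], nspell: Issue[]): Issue[] {
  const overlaps = (a: Issue, b: Issue) => a.start < b.end && b.start < a.end;
  const kept = nspell.filter((n) => !harper.some((h) => overlaps(h, n)));
  return [...harper, ...kept].sort((a, b) => a.start - b.start);
}
```

Sorting by start offset lets the fix-application step walk the text left to right in a single pass.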
```
Selected text
 -> Token protection (URLs, @mentions, code, emoji)
 -> Dictionary pre-pass (fix obvious errors)      <-- HERE
 -> Cache lookup
 -> AI model (local T5 or cloud CLI)
 -> Token restoration
 -> Dictionary polish (cleanup model output)      <-- AND HERE
 -> Paste back
```
The pre-pass means most simple typos ("teh" -> "the") never reach the AI model, making corrections faster and reducing API costs for cloud providers.
```
+-----------------------------------------------------------+
|                  Electron Main Process                    |
|                                                           |
|  Global Hotkey --> Clipboard Capture --> Token Protection |
|                                              |            |
|                                              v            |
|  Dictionary Pre-pass (Harper + nspell, up to 3 passes)    |
|                       |                                   |
|                       v                                   |
|  Cache Lookup --[hit]--> Skip AI                          |
|       |                                                   |
|       v [miss]                                            |
|  Correction Dispatcher                                    |
|    +-- CLI Runner (claude/codex/gemini subprocess)        |
|    +-- Local Model Runner (transformers.js + ONNX)        |
|    +-- Inference Window (WebGPU/WASM in hidden renderer)  |
|                       |                                   |
|                       v                                   |
|  Token Restoration --> Dictionary Polish --> Paste Back   |
|                       |                                   |
|                       v                                   |
|  History Store --> Correction Cache                       |
|                                                           |
|  System Tray | IPC Handlers | Device Selector             |
+-----------------------------------------------------------+
                          | IPC
+-----------------------------------------------------------+
|                Renderer Process (React)                   |
|                                                           |
|  Settings | History | HUD Overlay | Streaming Preview     |
|  Onboarding Wizard | Inference Worker                     |
+-----------------------------------------------------------+
```
- Tray-only app — No dock icon on macOS (`LSUIElement: true`). Runs silently in the menu bar.
- Context isolation — Renderer processes are sandboxed. All main/renderer communication goes through a secure preload bridge.
- Graceful degradation — If Harper fails to load (WASM issue), nspell still works. If both fail, the AI model handles everything. If the inference window crashes, the main process pipeline takes over.
- Fire-and-forget pre-warming — Model, dictionary checkers, and keyboard simulator are loaded in parallel at startup without blocking the UI.
- Correction cache — Keyed on `(text, provider, model, tone, language)`. Invalidated on any config change.
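A correction cache keyed on those five fields might look like this sketch; the stringified-tuple key is an illustrative choice, not necessarily how GhostEdit builds its keys:

```typescript
interface CacheKeyParts {
  text: string;
  provider: string;
  model: string;
  tone: string;
  language: string;
}

// In-memory cache keyed on (text, provider, model, tone, language).
// A JSON-encoded tuple serves as an opaque, collision-free key here.
class CorrectionCache {
  private map = new Map<string, string>();

  private key(p: CacheKeyParts): string {
    return JSON.stringify([p.text, p.provider, p.model, p.tone, p.language]);
  }

  get(p: CacheKeyParts): string | undefined {
    return this.map.get(this.key(p));
  }

  set(p: CacheKeyParts, corrected: string): void {
    this.map.set(this.key(p), corrected);
  }

  clear(): void {
    this.map.clear(); // e.g. invoked on any config change
  }
}
```

Because every key field participates in the lookup, changing the tone or language never returns a stale correction.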
```bash
npm start
```

This launches Electron with Vite HMR — changes to renderer code are hot-reloaded instantly. Main process changes require a restart.

```bash
# Type-check
npx tsc --noEmit

# Lint
npm run lint
```

The project uses:
- TypeScript 5.7 with strict mode
- Vite 6 for building main, preload, and renderer
- React 19 for the UI
- Tailwind CSS 3 with a custom dark theme
- Zustand 5 for renderer state management
- Vitest 4 for testing
```bash
# Run all tests
npm test

# Watch mode
npm run test:watch

# Run a specific test file
npx vitest run src/main/dictionary-checker.test.ts

# Run with verbose output
npx vitest run --reporter=verbose
```

| File | Type | Tests | Covers |
|---|---|---|---|
| `dictionary-checker.test.ts` | Unit | 50 | Harper/nspell extraction, merge, filtering, fix application, initialization |
| `dictionary-checker.integration.test.ts` | Integration | 24 | End-to-end with real Harper WASM and nspell Hunspell |
| `local-model-runner.test.ts` | Unit | 22 | Pipeline loading, device routing, streaming, variant switching |
| `device-selector.test.ts` | Unit | 12 | Platform detection, disk caching, DirectML/CUDA probing |
| `correction-dispatcher.test.ts` | Unit | — | Provider routing logic |
| `correction-cache.test.ts` | Unit | — | Cache hit/miss/invalidation |
| `ipc-handlers.test.ts` | Unit | — | IPC handler registration and behavior |
| `cli-runner.test.ts` | Unit | — | CLI subprocess spawning and error handling |
Current: 161 tests, all passing.
```bash
npm run package
```

Creates an unpacked app in `out/`.

```bash
npm run make
```

Produces platform-specific installers:
| Platform | Format | Output |
|---|---|---|
| macOS | DMG | out/make/*.dmg |
| Windows | Squirrel | out/make/squirrel.windows/ |
| Linux | ZIP | out/make/zip/linux-x64/ |
- The Electron app (ASAR-packed)
- Bundled Q4F16 model variant (`resources/models/`)
- All production dependencies
- App icon (`assets/icon.svg`)
```
src/
  main/                       # Electron main process
    index.ts                  # App lifecycle, correction pipeline, window management
    config-manager.ts         # Config persistence (~/.ghostedit/)
    correction-dispatcher.ts  # Routes corrections to CLI or local model
    cli-runner.ts             # Spawns CLI subprocesses
    local-model-runner.ts     # Local T5 model via transformers.js
    dictionary-checker.ts     # Harper.js + nspell spell/grammar checking
    clipboard-manager.ts      # Cmd+C/V simulation via nut.js
    token-preservation.ts     # Protects URLs, @mentions, code, emoji
    correction-cache.ts       # In-memory correction cache
    history-store.ts          # Persists correction history to JSON
    tray-manager.ts           # System tray icon and context menu
    inference-window.ts       # Hidden BrowserWindow for GPU inference
    device-selector.ts        # GPU/CPU detection and caching
    global-shortcuts.ts       # Global hotkey registration
    ipc-handlers.ts           # IPC event handlers
    cli-arguments.ts          # CLI path resolution
    error-messages.ts         # User-friendly error messages
  renderer/                   # React UI
    App.tsx                   # Root component (routes by window type)
    main.tsx                  # React entry point
    windows/
      Settings.tsx            # Configuration UI
      History.tsx             # Correction history browser
      HudOverlay.tsx          # Transparent status overlay
      StreamingPreview.tsx    # Real-time diff preview
    components/
      Welcome.tsx             # Onboarding wizard
      DiffView.tsx            # Side-by-side diff viewer
      HotkeyInput.tsx         # Hotkey recording input
      ProviderSelector.tsx    # Provider selection buttons
      ModelSelector.tsx       # Model dropdown
    inference-worker.ts       # Web worker for model inference
  preload/
    index.ts                  # Secure IPC bridge (contextBridge)
  shared/
    types.ts                  # TypeScript interfaces and IPC channels
    constants.ts              # Providers, models, languages, defaults
assets/
  icon.svg                    # App icon
resources/
  models/                     # Bundled T5 model (Q4F16 variant)
scripts/
  download-model.mjs          # Model download script
```
- Make sure text is selected before pressing the hotkey
- On macOS, grant Accessibility permission: System Settings > Privacy & Security > Accessibility > enable GhostEdit
- Try clipboard-only mode if paste simulation fails in a specific app
- Install the CLI tool for your chosen provider (see AI Providers)
- Or set the CLI path manually in Settings > General
- Run the auth command for your provider:
  - Claude: `claude auth login`
  - Codex: `codex auth`
  - Gemini: `gemini auth`
- Switch to the Q4F16 variant (smallest, fastest) in Settings
- Enable "Fast correction mode" in Behavior tab
- On Windows/Linux, ensure GPU drivers are up to date for DirectML/CUDA acceleration
- Check the inference device in Settings with Developer Mode enabled
- Increase the timeout in Settings > General
- Switch to a faster model (e.g., `haiku` for Claude, `o4-mini` for Codex)
- Use the local model for instant offline corrections
- Check for conflicts with other apps using the same shortcut
- Change the hotkey in Settings > Hotkey tab
- On macOS, ensure the app has Accessibility permissions
- Check your internet connection
- Try running `npm run download-model` from the terminal for detailed error output
- Models are downloaded from HuggingFace — ensure `huggingface.co` is accessible
See LICENSE for details.