Privacy-First AI Medical Scribe for Doctors - 100% Browser-Based, Zero Server
MedScribe is a fully local, AI-powered medical scribe that runs entirely in your browser. Record patient consultations and get instant transcriptions with structured medical data extraction: all offline, all private.

MedScribe is intended for use by licensed doctors and healthcare professionals only. It assists clinicians during patient consultations by automating documentation tasks.
- 🎤 Voice Recording: Record consultations directly in the browser
- 📝 Live Transcription: Speech-to-text powered by LFM2.5-Audio-1.5B
- 🏥 Medical Data Extraction: Automatically extracts structured medical information
- 🔒 100% Private: No server, no API calls, no data transmission
- 💻 Local-First: Runs entirely in-browser via WebGPU/WASM
- 📊 Structured Dashboard: Auto-fills medical report sections
MedScribe is designed with privacy as the core principle:
| Feature | Implementation |
|---|---|
| Audio Processing | Done locally in-browser |
| Mic Privacy | Released immediately after recording |
| AI Models | Run locally via ONNX Runtime Web |
| Data Storage | Models cached in IndexedDB (first visit only) |
| Network Access | Only for initial model download from HuggingFace |
| Telemetry | None. Zero. Nada. |
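The "released immediately after recording" guarantee in the table amounts to stopping every track on the captured `MediaStream`, which ends capture and clears the browser's recording indicator. A minimal sketch of that pattern (the `stopAllTracks` helper is hypothetical, not the actual `AudioRecorder.js` code):

```javascript
// Hypothetical sketch: releasing the microphone after a recording session.
// Stopping every track on a MediaStream releases the underlying device and
// turns off the browser's mic indicator right away.
function stopAllTracks(stream) {
  const stopped = [];
  for (const track of stream.getTracks()) {
    track.stop(); // releases the capture device
    stopped.push(track.kind);
  }
  return stopped; // kinds of the tracks that were stopped, e.g. ['audio']
}
```

In the app, `stream` would be the `MediaStream` returned by `navigator.mediaDevices.getUserMedia({ audio: true })`.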
- STT Model: Liquid AI LFM2.5-Audio-1.5B (ONNX Q4)
- LLM Model: Liquid AI LFM2.5-1.2B-Instruct (ONNX Q4)
- Build Tool: Vite 5
- Inference: ONNX Runtime Web + Transformers.js
- Runtime: WebGPU (with WASM fallback)
```bash
# Install dependencies
npm install

# Start development server
npm run dev

# Build for production
npm run build

# Preview production build
npm run preview
```

- Chrome/Edge 113+ (WebGPU support)
- Firefox 117+ (WebGPU support)
- Safari 18.2+ (WebGPU support)
For unsupported browsers, the app automatically falls back to WASM.
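The fallback decision boils down to feature-detecting `navigator.gpu`, which is only defined in browsers with WebGPU enabled. A sketch of that check (function name hypothetical; the actual selection is handled inside ONNX Runtime Web's execution-provider list):

```javascript
// Hypothetical sketch of the WebGPU -> WASM fallback decision.
// `navigator.gpu` exists only in browsers that expose WebGPU.
function pickBackend(nav = typeof navigator !== 'undefined' ? navigator : {}) {
  return 'gpu' in nav ? 'webgpu' : 'wasm';
}
```

ONNX Runtime Web can also be given an ordered execution-provider list (e.g. WebGPU first, WASM as fallback) so the runtime performs this selection itself.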
On first launch, the app will download AI models (~500MB). This is a one-time operation:
- Models are downloaded from HuggingFace
- Cached to browser's IndexedDB
- Subsequent launches load from cache (offline-capable)
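The download-once, load-from-cache behavior described above follows a simple pattern: check the local store first, fetch from the network only on a miss, then persist the result. A stripped-down sketch with the storage injected (all names here are hypothetical, and `cache` stands in for a thin IndexedDB wrapper):

```javascript
// Hypothetical sketch of the download-once model loading pattern.
// `cache` is any object with async get/put (e.g. an IndexedDB wrapper);
// `fetchModel` downloads the model bytes from HuggingFace on a cache miss.
async function loadModel(cache, fetchModel, key) {
  const cached = await cache.get(key);
  if (cached) return { source: 'cache', data: cached };
  const data = await fetchModel(key);
  await cache.put(key, data); // persist so later launches work offline
  return { source: 'network', data };
}
```

After the first call resolves, every later launch takes the `cache` branch, which is what makes the app offline-capable.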
1. Open the app in a WebGPU-supported browser
2. Wait for the models to load (first run: ~2-5 minutes for the download)
3. Click "Start Consultation" and grant microphone permission
4. Record the consultation - the app listens in the background
5. Click "End & Generate Report" - the mic closes and processing starts
6. View the dashboard - the transcript and extracted medical data appear
The app automatically extracts:
- Incident Record: Patient complaint and history
- Prescription: Medications with dose, frequency, duration
- Lab Recommendations: Required lab tests
- Radiology Recommendations: Required imaging
- Treatment Plan: Step-by-step treatment approach
- Diet Advice: Dietary recommendations
- Consultation Summary: Brief overview
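The sections above suggest a fixed report shape that the LLM extraction step fills in. A sketch of that structure, plus a helper the dashboard could use to flag unpopulated sections for manual review (field and function names are illustrative, not the app's actual schema):

```javascript
// Hypothetical shape of the structured report; keys mirror the dashboard
// sections listed in the README.
const REPORT_SECTIONS = [
  'incidentRecord',
  'prescription',
  'labRecommendations',
  'radiologyRecommendations',
  'treatmentPlan',
  'dietAdvice',
  'consultationSummary',
];

// Return the sections the model failed to populate so the UI can highlight
// them - extracted data should always be verified by the clinician anyway.
function missingSections(report) {
  return REPORT_SECTIONS.filter(
    (key) => !(key in report) || report[key] == null || report[key] === ''
  );
}
```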
```
medscribe/
├── src/
│   ├── config/
│   │   └── models.js           # Model configuration
│   ├── modules/
│   │   ├── AudioRecorder.js    # Microphone handling
│   │   ├── SpeechToText.js     # STT pipeline
│   │   ├── MedicalExtractor.js # LLM extraction
│   │   └── UIManager.js        # Dashboard rendering
│   └── main.js                 # App orchestration
├── index.html                  # Main UI
├── vite.config.js              # COOP/COEP headers for WebGPU
└── package.json
```
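The COOP/COEP headers mentioned for `vite.config.js` enable cross-origin isolation, which the in-browser inference stack relies on (notably for `SharedArrayBuffer` in multithreaded WASM). A sketch of what that config might look like; the actual file may differ:

```javascript
// vite.config.js (sketch) - serve the app cross-origin isolated so the
// WebGPU/WASM inference runtime can use SharedArrayBuffer.
export default {
  server: {
    headers: {
      'Cross-Origin-Opener-Policy': 'same-origin',
      'Cross-Origin-Embedder-Policy': 'require-corp',
    },
  },
};
```

These are the standard header values for cross-origin isolation; Vite applies `server.headers` to every response from the dev server.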
After the first model download, MedScribe works completely offline:
- Models persist in IndexedDB
- No external requests are made
- All processing is local
Currently configured for English transcription. To enable Arabic support, modify `src/config/models.js`:

```js
stt: {
  language: 'arabic', // Change from 'english'
}
```

This project uses models subject to the LFM 1.0 License.
MedScribe is an AI assistant and should not replace professional medical judgment. Always verify extracted information for accuracy.
Built with ❤️ for privacy-first healthcare