Is your feature request related to a problem? Please describe.
Currently, launching the MONAILabel Reviewer Workflow requires a full segmentation or radiology app to be initialized. This creates several points of friction:
- Resource Waste: Loading a heavy model or bundle is unnecessary when the reviewer is only validating or correcting existing masks.
- Hardware Constraints: Reviewers (e.g., radiologists) often work on machines without high-end GPUs, making it difficult to "run" a full AI app just to access the review interface.
- UX Friction: The current "hack" of using a dummy app is unintuitive for new users who simply want to perform manual QA or label refinement on pre-existing data.
Describe the solution you'd like
I would like a lightweight, "Inference-free" or "Review-only" option to launch the MONAILabel server. This server-level mode should:
- Bypass Model Requirements: Start the server without requiring a specific model.pt or MONAI Bundle.
- Dedicated Entry Point: Provide a streamlined path for the Reviewer workflow that doesn't trigger AI inference engine initialization.
- Direct Dataset Mapping: Allow users to simply point the server to a dataset and an existing label set for validation/correction tasks.
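To illustrate the "Direct Dataset Mapping" point above, here is a minimal sketch of how a review-only mode could pair images with their existing labels by filename stem, with no model or bundle involved. The function name and directory layout are illustrative assumptions for this request, not existing MONAILabel API:

```python
# Sketch of "Direct Dataset Mapping" for a hypothetical review-only mode:
# pair each image with an existing label by filename stem, no model loading.
# map_dataset and the images/labels layout are assumptions, not MONAILabel API.
from pathlib import Path


def stem(p: Path) -> str:
    # "case1.nii.gz" -> "case1" (strip all extensions, not just the last)
    return p.name.split(".")[0]


def map_dataset(images_dir: str, labels_dir: str) -> dict:
    """Return {image_path: label_path or None} for QA / label refinement."""
    labels = {stem(p): str(p) for p in Path(labels_dir).iterdir() if p.is_file()}
    mapping = {}
    for img in sorted(Path(images_dir).iterdir()):
        if img.is_file():
            # Images without a label surface as None, so reviewers can
            # see at a glance which cases still need a manual mask.
            mapping[str(img)] = labels.get(stem(img))
    return mapping
```

A server started in this mode would only need such a mapping (plus a label writer for corrections) to serve the Reviewer workflow, which is why no `model.pt` or inference engine should be required.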
Describe alternatives you've considered
- Dummy Apps: Using a standard segmentation app as a placeholder to force the server to start. This is misleading and requires unnecessary app packaging and dummy model checkpoints.
Additional context
This change would allow MONAILabel to function as a standalone, lightweight tool for the manual QA and label refinement phase of the data lifecycle, independent of the training hardware.