MedMO: Grounding and Understanding Multimodal Large Language Model for Medical Images

MedMO-8B-Next is the latest and most powerful iteration of the MedMO family: an open-source multimodal foundation model purpose-built for comprehensive medical image understanding and grounding. Trained on 26M+ diverse medical samples across 45 datasets, MedMO-8B-Next achieves state-of-the-art performance across all major medical imaging benchmarks, outperforming both open-source and closed-source competitors on VQA, Text QA, grounding, and report generation tasks.

🎯 Capabilities

MedMO excels at a comprehensive range of medical imaging tasks:

  • Visual Question Answering (VQA): Answer complex questions about medical images across radiology, pathology, ophthalmology, and dermatology
  • Text-Based Medical QA: Clinical reasoning and medical knowledge question answering
  • Radiology Report Generation: Generate detailed, clinically accurate radiology reports from medical images
  • Disease Localization with Bounding Boxes: Precise spatial detection and localization of pathological findings
  • Anatomical Grounding: Spatial localization and grounding of anatomical structures
  • Clinical Reasoning: Step-by-step diagnostic reasoning and clinical decision support
  • Diagnostic Classification: Multi-class disease classification across diverse imaging modalities
  • Spatial Object Detection: Fine-grained detection in microscopy, pathology slides, and cellular imaging
  • Medical Report Summarization: Extract and summarize key clinical findings from complex medical reports

Supported Modalities

  • Radiology (X-ray, CT, MRI, Ultrasound)
  • Pathology & Microscopy
  • Ophthalmology (Fundus, OCT)
  • Dermatology
  • Nuclear Medicine (PET, SPECT)

πŸ† Benchmark Performance

VQA & Text QA Results

MedMO-8B-Next sets a new state of the art on medical VQA, posting the highest average score of any model evaluated, and the MedMO family also leads the Text QA benchmarks, surpassing strong baselines including Lingshu-7B and Fleming-VL-8B.

OMIVQA = OmniMedVQA · MedXQA = MedXpertQA · Medbullets reported as op4/op5

Medical VQA Benchmarks

| Model | MMMU-Med | VQA-RAD (closed/all) | SLAKE (closed/all) | PathVQA | PMC-VQA | OmniMedVQA | MedXpertQA | Avg. |
|---|---|---|---|---|---|---|---|---|
| Lingshu-7B | 54.0 | 77.2 / 43.0 | 82.4 / 33.2 | 41.9 | 54.2 | 82.9 | 26.9 | 55.1 |
| Fleming-VL-8B | 63.3 | 78.4 / 56.4 | **86.9** / *80.0* | *56.5* | 64.3 | 88.2 | 21.6 | 66.1 |
| MediX-R1-8B | 63.3 | 75.2 / 51.6 | 70.3 / 54.4 | 41.0 | 55.3 | 73.8 | 24.9 | 57.1 |
| MedMO-4B | 54.6 | 50.9 / 35.0 | 41.0 / 30.0 | 42.4 | 50.6 | 79.7 | 24.8 | 45.4 |
| MedMO-8B | *64.6* | 72.3 / *64.7* | 70.6 / 70.0 | 56.3 | 59.4 | 84.8 | 26.2 | 63.2 |
| MedMO-4B-Next | 58.7 | *79.7* / 59.6 | 78.0 / 74.0 | **73.3** | **75.7** | *90.6* | *27.0* | *68.5* |
| MedMO-8B-Next | **69.3** | **86.4** / **68.0** | *83.0* / **81.6** | 56.3 | *74.1* | **93.3** | **42.9** | **72.7** |

Medical Text QA Benchmarks

| Model | MMLU-Med | PubMedQA | MedMCQA | MedQA | Medbullets (op4/op5) | MedXpertQA | SGPQA | Avg. |
|---|---|---|---|---|---|---|---|---|
| Lingshu-7B | 69.6 | 75.8 | 56.3 | 63.5 | 62.0 / 53.8 | 16.4 | 27.5 | 53.1 |
| Fleming-VL-8B | 71.8 | 74.0 | 51.8 | 53.7 | 40.5 / 37.3 | 12.1 | 24.9 | 45.7 |
| MediX-R1-8B | 79.0 | 73.4 | 60.1 | **85.8** | 55.1 / 47.0 | 14.4 | 34.3 | 56.1 |
| MedMO-4B | 75.7 | *78.0* | 58.0 | 78.5 | 57.5 / 47.7 | 16.4 | 29.4 | 55.1 |
| MedMO-8B | **81.0** | 77.6 | **65.0** | *84.3* | **66.5** / **60.2** | *19.9* | **36.0** | **61.3** |
| MedMO-4B-Next | 74.8 | **78.2** | 58.1 | 78.3 | 57.4 / 47.6 | 16.5 | 29.5 | 55.0 |
| MedMO-8B-Next | *80.2* | 75.6 | *62.0* | 83.8 | *65.2* / *57.8* | **20.9** | *35.5* | *60.1* |

**Bold** = best result, *italic* = second-best result.

  • Benchmarked on AMD MI210 GPU.

πŸš€ Quick Start

Installation

```bash
pip install transformers torch qwen-vl-utils
```

Basic Usage

```python
from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch

# Load model
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "MBZUAI/MedMO-8B",
    torch_dtype=torch.bfloat16,
    # flash_attention_2 requires the flash-attn package; omit this argument
    # to fall back to the default attention implementation.
    attn_implementation="flash_attention_2",
    device_map="auto",
)

processor = AutoProcessor.from_pretrained("MBZUAI/MedMO-8B")

# Prepare your input
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "path/to/medical/image.png",
            },
            {"type": "text", "text": "What abnormalities are present in this chest X-ray?"},
        ],
    }
]

# Process and generate
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)

# Generate output and strip the prompt tokens from each sequence
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text[0])
```

Example: Disease Localization with Bounding Boxes

```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "chest_xray.png"},
            {"type": "text", "text": "Detect and localize all abnormalities in this image."},
        ],
    }
]
# Output: "Fractures <box>[[156, 516, 231, 607], [240, 529, 296, 581]]</box>"
```
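
The `<box>` span in the output above wraps a list of `[x1, y1, x2, y2]` boxes. Downstream code needs to pull these out of the generated text; the parser below is a minimal, hypothetical sketch of that step (the exact output grammar may vary, so treat it as illustrative rather than an official utility):

```python
import ast
import re

# Hypothetical parser for MedMO-style grounding output such as
# 'Fractures <box>[[156, 516, 231, 607], [240, 529, 296, 581]]</box>'.
def parse_grounding(text):
    results = []
    for match in re.finditer(r"(\w[\w\s]*?)\s*<box>(\[\[.*?\]\])</box>", text):
        label = match.group(1).strip()
        boxes = ast.literal_eval(match.group(2))  # list of [x1, y1, x2, y2]
        results.append((label, boxes))
    return results

print(parse_grounding("Fractures <box>[[156, 516, 231, 607], [240, 529, 296, 581]]</box>"))
# [('Fractures', [[156, 516, 231, 607], [240, 529, 296, 581]])]
```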

Example: Report Generation

```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "ct_scan.png"},
            {"type": "text", "text": "Generate a detailed radiology report for this CT scan."},
        ],
    }
]
# MedMO generates comprehensive clinical reports with findings and impressions
```

πŸ—οΈ Model Architecture

MedMO is built on Qwen3-VL-8B-Instruct and trained through a 4-stage progressive pipeline:

  1. Stage 1 - General Medical SFT: Large-scale training on 18.5M image-text pairs for foundational medical understanding
  2. Stage 2 - High-Resolution & Grounding: Training on 3M curated samples at 1280×1280 resolution for spatial localization
  3. Stage 3 - Instruction Tuning: Fine-tuning on 4.3M instruction-response pairs for task-specific alignment
  4. Stage 4 - Reinforcement Learning: GRPO training with verifiable rewards (label accuracy, bbox IoU) for enhanced grounding; a sketch of such a reward follows below
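
The Stage 4 reward is verifiable in the sense that it can be computed directly from the annotation: a label-match term plus a bounding-box IoU term. The published weighting is not reproduced here, so the sketch below assumes an equally weighted sum and `[x1, y1, x2, y2]` boxes; all function names are illustrative:

```python
# Minimal sketch of a GRPO-style verifiable grounding reward.
# The 50/50 weighting between label accuracy and IoU is an assumption,
# not the published recipe.

def box_iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def grounding_reward(pred_label, pred_box, gt_label, gt_box):
    # Verifiable reward: exact label match plus bbox IoU.
    label_r = 1.0 if pred_label.strip().lower() == gt_label.strip().lower() else 0.0
    return 0.5 * label_r + 0.5 * box_iou(pred_box, gt_box)
```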

Total Training Data: 26M+ samples from 45 medical datasets spanning diverse modalities and anatomical systems.

πŸ“Š Evaluation

Evaluation Framework

MedMO was comprehensively evaluated using multiple frameworks and metrics:

Standard Medical Benchmarks

We used MedEvalKit for systematic evaluation across all QA tasks.

LLM-as-a-Judge Evaluation

For open-ended visual question answering and report generation tasks, we employed an LLM-as-a-Judge methodology:

  • Judge Model: gpt-5-mini-2025-08-07

This approach provides robust, scalable assessment of the model's ability to generate detailed, medically accurate long-form responses.
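
As a rough illustration of this setup, the snippet below scores a candidate answer against a reference using the judge model named above via the OpenAI Python client. The actual judging prompt and rubric used for MedMO are not published here, so everything beyond the model name is an assumption:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical rubric; the real MedMO judging prompt is not reproduced here.
JUDGE_PROMPT = """You are a medical answer evaluator.
Reference answer:
{reference}

Model answer:
{candidate}

Rate the model answer from 0 to 10 for factual agreement with the
reference. Reply with the numeric score only."""

def judge(reference: str, candidate: str) -> str:
    response = client.chat.completions.create(
        model="gpt-5-mini-2025-08-07",  # judge model named above
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(reference=reference, candidate=candidate),
        }],
    )
    return response.choices[0].message.content.strip()
```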


πŸ“¦ Model Family

| Model | Parameters | Best For |
|---|---|---|
| MedMO-8B-Next | 8B | Highest accuracy across all tasks (recommended) |
| MedMO-4B-Next | 4B | Near-SOTA accuracy in resource-constrained environments |
| MedMO-8B | 8B | Previous generation |
| MedMO-4B | 4B | Resource-constrained environments |

For detailed benchmark results, please refer to our paper.

πŸ“„ Citation

If you use MedMO in your research, please cite our paper:

```bibtex
@article{deria2026medmo,
  title={MedMO: Grounding and Understanding Multimodal Large Language Model for Medical Images},
  author={Deria, Ankan and Kumar, Komal and Dukre, Adinath Madhavrao and Segal, Eran and Khan, Salman and Razzak, Imran},
  journal={arXiv preprint arXiv:2602.06965},
  year={2026}
}
```

πŸ™ Acknowledgments

We gratefully acknowledge the following:

  • Base Architecture: Built on Qwen3-VL by Alibaba Cloud
  • Evaluation Framework: MedEvalKit by Alibaba DAMO Academy
  • Training Framework: TRL (Transformer Reinforcement Learning) by Hugging Face
  • LLM-as-a-Judge: Evaluation powered by gpt-5-mini-2025-08-07 from OpenAI
  • Compute Resources: Training conducted on 64× AMD Instinct MI210 GPUs
  • Open-Source Datasets: We thank the medical imaging community for providing high-quality public datasets including MedTrinity, MIMIC-CXR, CheXpert, PathVQA, and many others that made this work possible

πŸ“œ License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

πŸ“§ Contact

For questions, issues, or collaborations, please open an issue on this repository.
