MONAI Label: AI-assisted medical annotation and integration with 3D Slicer, OHIF and QuPath

MONAI Label is a MONAI sub-project for AI-in-the-loop medical imaging annotation. This article covers its integration with 3D Slicer, OHIF Viewer/Cornerstone3D and QuPath, the DeepGrow and DeepEdit paradigms, and active learning.

Digital Health · R&D · Open Source · AI · MONAI Label · 3D Slicer · OHIF · Cornerstone3D · QuPath · Annotation · Active Learning

The annotation bottleneck

The main constraint on medical AI is neither network architecture nor hardware availability, but the supply of quality clinical annotations. Training a tumour segmentation network requires 3D volumes with voxel-wise segmentations produced by a qualified radiologist — work taking minutes to hours per volume, multiplied by hundreds or thousands of cases.

The cost is unsustainable for many projects. An emerging solution is AI-in-the-loop assisted annotation: rather than annotating each case from scratch, the radiologist corrects initial AI model proposals. As the model improves (retrained on corrected data), correction effort decreases — ideally until proposals are accepted unchanged.

MONAI Label is the open source framework implementing this paradigm, part of the MONAI ecosystem.

MONAI Label

Announced in 2021 as a Project MONAI sub-project, MONAI Label is a Python server that exposes medical AI models via a REST API and integrates as a plug-in with the main open source viewer clients. The licence is Apache 2.0; the repository is at github.com/Project-MONAI/MONAILabel.

Architecture:

  • MONAI Label server — a Python application (built on FastAPI) that loads one or more “apps” (AI pipelines) and exposes them as HTTP endpoints
  • Client plug-ins — extensions to viewers (3D Slicer, OHIF/Cornerstone3D, QuPath) that send inference requests and receive results to display
  • Active learning backend — the server tracks user corrections, proposes which cases to annotate next (active learning strategy), and can retrain the model on new annotations
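The client–server interaction above is plain HTTP. As a minimal sketch, here is how a client might assemble an inference request; the `/infer/{model}` endpoint path and form fields follow the convention in the project's API and should be checked against your server's own Swagger page, since paths and parameters vary by version:

```python
import json
from urllib.parse import urljoin

def build_infer_request(server, model, image_id, params=None):
    """Assemble URL and form payload for a MONAI Label inference call.

    The endpoint path and field names here are assumptions to verify
    against your server's own API docs.
    """
    url = urljoin(server, f"/infer/{model}")
    payload = {
        "image": image_id,                   # identifier of a study on the server
        "params": json.dumps(params or {}),  # model-specific options as JSON
    }
    return url, payload

url, payload = build_infer_request("http://localhost:8000", "deepedit", "case_001")
print(url)  # http://localhost:8000/infer/deepedit
```

A real client would POST `payload` (e.g. `requests.post(url, data=payload)`) and decode the returned segmentation; the viewer plug-ins listed above perform exactly this kind of round trip on the user's behalf.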

Annotation paradigms

MONAI Label supports several interaction modes:

DeepGrow

Point-based interaction: the user clicks a few points inside and outside the region of interest. The model (trained previously to solve this task) generates the segmentation. Useful when no specific model exists for the structure to segment — a “generic” DeepGrow model adapts to many situations.
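The clicks typically reach the network as extra input channels: each set of points is rasterised into a guidance heatmap and stacked with the image. A simplified 2D numpy sketch follows (the real DeepGrow transforms in MONAI operate on 3D volumes and differ in detail):

```python
import numpy as np

def guidance_map(shape, clicks, sigma=3.0):
    """Gaussian heatmap from click coordinates — one DeepGrow-style
    guidance channel. shape: (H, W); clicks: list of (y, x) points."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    m = np.zeros(shape, dtype=np.float32)
    for cy, cx in clicks:
        dist2 = (yy - cy) ** 2 + (xx - cx) ** 2
        m = np.maximum(m, np.exp(-dist2 / (2 * sigma ** 2)))
    return m

image = np.random.rand(64, 64).astype(np.float32)
fg = guidance_map(image.shape, [(20, 30)])           # clicks inside the region
bg = guidance_map(image.shape, [(5, 5), (60, 60)])   # clicks outside the region
x = np.stack([image, fg, bg])  # 3-channel network input: image + guidance
print(x.shape)  # (3, 64, 64)
```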

DeepEdit

The model proposes an initial full segmentation, the user adds correction clicks (missing areas, areas to exclude). The model updates the segmentation taking user input into account. A human-in-the-loop pattern ideal for producing annotated datasets.

Auto-segmentation

The model already has sufficient performance to produce segmentations without user intervention. Useful for well-characterised structures (major organs, large lesions). MONAI Label hosts pre-trained models for various organs (liver, kidney, lung, spleen, prostate, etc.).

Scribbles-based

The user draws scribbles (free strokes) to indicate foreground/background; the model completes the segmentation. Faster than point-clicks in some scenarios.

Integration with 3D Slicer

For 3D Slicer, MONAI Label provides an extension installable via the Extension Manager:

  • The plugin adds interaction panels to the Slicer sidebar
  • Point the plugin at the MONAI Label server (URL + port)
  • Select an open volume and a server-side app
  • Run the AI segmentation
  • Correct it interactively; corrections are sent back to the server for active learning
  • Export the final segmentations in DICOM SEG or NIfTI format

The pattern is plug-and-play: a clinical centre can deploy MONAI Label on its own GPU hardware (even a single workstation), and radiologists already using 3D Slicer don’t need to switch tools.

Integration with OHIF/Cornerstone3D

For the browser-based viewer, MONAI Label has an extension for OHIF Viewer (based on Cornerstone3D from 2022-2023). The pattern is analogous: browser → MONAI Label server → segmentation rendered back in OHIF. No desktop installation required; suited to remote review scenarios, multiple sites accessing the same pipelines.

Integration with QuPath (digital pathology)

QuPath (University of Edinburgh, open source) is the leading open source software for whole slide image (WSI) analysis in digital pathology. MONAI Label has a QuPath integration that brings the AI-assisted annotation paradigm to pathology as well: annotation of nuclei, glands and tumour ROIs on WSIs, at the image sizes characteristic of this domain (tens of thousands of pixels per side).
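At whole-slide scale the model cannot see the image in one pass; inference runs on overlapping tiles whose predictions are stitched back together. A sketch of the tile-grid computation (tile and overlap sizes are illustrative, and the slide is assumed larger than one tile):

```python
def tile_grid(width, height, tile=1024, overlap=128):
    """Top-left corners of overlapping tiles covering a WSI region.
    Assumes width, height >= tile; sizes are illustrative only."""
    step = tile - overlap
    xs = list(range(0, width - tile + 1, step))
    ys = list(range(0, height - tile + 1, step))
    # make sure the right and bottom edges are fully covered
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]

coords = tile_grid(100_000, 80_000)  # a full-resolution WSI-sized region
print(len(coords), "tiles")
```

Each tile is sent for inference independently; overlap lets the stitching step blend predictions at tile borders instead of showing seams.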

Active learning

A distinctive aspect of MONAI Label is the built-in active learning cycle:

  1. The server maintains a pool of unannotated cases
  2. After each new annotation, the server can retrain the model (incremental fine-tuning)
  3. It then applies a prioritisation strategy to the remaining cases — typically based on model uncertainty (cases the model is less sure about are more informative to annotate)
  4. It presents the next case to the user

This cycle enables reaching production performance with dramatically fewer annotations than the traditional pattern (annotate X cases, train, verify). Annotation work becomes a managed effort, not a fixed cost.
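Step 3 is where the annotation savings come from. MONAI Label ships several selection strategies (random and uncertainty-based among them); as a generic sketch, ranking cases by the mean entropy of the model's softmax output looks like this:

```python
import numpy as np

def rank_by_entropy(prob_maps):
    """Rank cases by mean per-voxel entropy of class probabilities,
    most uncertain first. prob_maps: case_id -> array of shape
    (num_classes, *spatial)."""
    scores = {}
    for case_id, p in prob_maps.items():
        p = np.clip(p, 1e-8, 1.0)               # avoid log(0)
        entropy = -(p * np.log(p)).sum(axis=0)  # entropy over the class axis
        scores[case_id] = float(entropy.mean())
    return sorted(scores, key=scores.get, reverse=True)

confident = np.zeros((2, 8, 8)); confident[0] = 1.0  # near-zero entropy
unsure = np.full((2, 8, 8), 0.5)                     # maximum binary entropy
print(rank_by_entropy({"case_a": confident, "case_b": unsure}))  # ['case_b', 'case_a']
```

The most uncertain case surfaces first, so each correction the radiologist makes carries the most information back into retraining.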

Deployment

MONAI Label is designed to be deployed close to the user:

  • Single workstation — sufficiently powerful GPU (RTX 3090/4090, A6000) hosts server + local models
  • On-premise hospital server — a centralised GPU server exposes MONAI Label services to multiple radiologists using Slicer/OHIF
  • Cloud deployment — MONAI Label as Docker container on Kubernetes in private/public cloud

The architecture is deployment-agnostic: the same API works regardless of where the server runs.
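For the single-workstation case, getting a server running is a pip install plus two commands; the app name and flags below follow the project README and should be verified against the docs for your MONAI Label version:

```shell
pip install monailabel

# download a sample app bundle and start the server on local data;
# names and flags are from the project README -- verify for your version
monailabel apps --download --name radiology --output apps
monailabel start_server --app apps/radiology --studies /data/dicom \
    --conf models deepedit
```

Once the server is up, Slicer, OHIF or QuPath clients point at its URL and port as described above.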

Real use cases

As of May 2023 MONAI Label is in use across several scenarios:

  • Dataset preparation for clinical trials — when an institution must annotate thousands of scans for a multicentre study
  • Trainee education — radiology residents use MONAI Label models to pre-segment, learning by reflecting on the corrections they make
  • Radiotherapy planning — organs-at-risk segmentation ahead of planning, with manual correction by physicists/physicians
  • Anatomical atlas construction — segmentation of hundreds of volumes to build reference anatomical atlases
  • Teaching and research in university imaging departments

Relationship with TotalSegmentator

An alternative emerging in the same period is TotalSegmentator (see upcoming dedicated article), an nnU-Net trained on 104 anatomical structures in whole-body CT. TotalSegmentator produces excellent-quality automatic segmentations without user input. The two tools are complementary:

  • TotalSegmentator — fixed auto-segmentation on a predefined taxonomy (104 CT structures)
  • MONAI Label — assisted annotation platform for any structure, with context adaptation cycle

In real deployments the two coexist: TotalSegmentator for standardised segmentations, MONAI Label for structures or tasks not covered by pre-trained models.

In the Italian context

Italian groups are starting to adopt MONAI Label:

  • Oncology IRCCS — IEO, IRST, INT — for internal dataset preparation
  • Universities — Politecnico di Milano, Turin, Bologna, Trento, Verona — in research projects
  • Radiotherapy — Italian radiotherapy centres explore MONAI Label for OAR (organs at risk) autosegmentation
  • Tech spin-offs — MONAI Label integration in third-party certified solutions

2023 limits

  • Model management — model versioning and reproducibility of incremental training require operational discipline
  • Performance on specialised tasks — generic MONAI Label models may underperform custom models trained on local data; the typical pattern still requires a fine-tuning phase
  • Documentation — broad but fragmented; learning curve steeper than dedicated-support commercial tools
  • Regulatory certification — base MONAI Label is not certified; building certified products requires the usual qualification work

Outlook

Expected directions:

  • Integration with medical foundation models — the arrival of Segment Anything (Meta, April 2023) and MedSAM (June 2023) is opening up prompt-based segmentation, where a pre-trained generalist model is specialised via prompts rather than fine-tuning
  • Multimodal annotation — integrated support for heterogeneous data (imaging + clinical text + omics) in a single environment
  • Uncertainty visualisation — exposing uncertainty maps to the user, fundamental for clinical trust
  • Integration with clinical PACS — the transition from “research tool” to “component of clinical workflow” will require connectors to enterprise PACS systems

MONAI Label in 2023 is an essential piece of the open source medical AI ecosystem: it bridges the gap between a research model and its application to real clinical data, where annotations are expensive and pipelines must adapt to context. Its maturation dramatically lowers the cost of entry for a healthcare organisation building internal AI.


References: MONAI Label, Project MONAI (github.com/Project-MONAI/MONAILabel). Apache 2.0 licence. Integration with 3D Slicer, OHIF Viewer + Cornerstone3D, QuPath. DeepGrow, DeepEdit paradigms. Active learning backend. NVIDIA and MONAI community.
