This workshop is part of the MICCAI 2026 conference.

Overview

Machine learning (ML) systems in medical imaging have evolved from task-specific networks to massive, general-purpose Foundation Models and Vision-Language Models (VLMs). While these systems achieve unprecedented performance, they introduce a new layer of complexity and opacity. The "black-box" nature of these billion-parameter models makes their behavior increasingly unpredictable, raising critical concerns about hallucinations, bias amplification, and robustness.

Developing methodologies for explaining model predictions is no longer just a "nice-to-have" feature for user trust; it is an imperative for AI safety and regulatory compliance. With frameworks like the EU AI Act classifying medical AI as high-risk, there is a legal mandate for traceability, transparency, and human oversight. Methodologies that allow physicians to validate model reasoning, identify failure cases, and quantify uncertainty are essential for the ethical deployment of these systems in clinical workflows.

Despite the urgency, the MICCAI community faces a gap between algorithmic performance and clinical interpretability. Standard post-hoc visualization techniques (e.g., saliency maps) are often insufficient for high-dimensional 3D/4D data, failing to provide the causal or concept-based insights required for informed medical decision-making.

The Workshop on Interpretability of Machine Intelligence in Medical Image Computing (iMIMIC) at MICCAI 2026 addresses these challenges by focusing on the next generation of XAI. We need to move beyond showing "where the model looked" toward explaining "why the model decided." This includes inspecting whether models align with pathophysiological domain knowledge, detecting shortcuts in training data, and handling the complexity of multimodal and longitudinal patient data. Ultimately, interpretability is the key to transforming raw predictive power into reliable, legally sound, and clinically actionable intelligence.

This workshop aims to foster discussion and presentation of ideas to tackle the many challenges and identify opportunities related to the interpretability of ML systems in the context of MICCAI. This marks the 9th edition of iMIMIC, and to our knowledge it remains the only forum at MICCAI dedicated exclusively to the interpretability and explainability of machine learning models.

The primary purposes of this workshop are:

  1. To introduce the unique challenges of interpreting Generative AI and Foundation Models in the context of MICCAI, distinguishing medical XAI from general computer vision.
  2. To move the state-of-the-art toward quantitative, causal, and mechanistic interpretability.
  3. To bring together researchers, clinicians, and regulatory experts to discuss the gap between technical explanations and human-centric clinical needs.
  4. To propose objective benchmarks and metrics for measuring the fidelity, robustness, and utility of explanations.

Covered topics include but are not limited to:

  - Interpretability and explainability of Foundation Models and Vision-Language Models in medical imaging
  - Quantitative, causal, and mechanistic interpretability
  - Alignment of model reasoning with pathophysiological domain knowledge
  - Detection of shortcuts, bias, and hallucinations
  - Uncertainty quantification and robustness of explanations
  - Benchmarks and metrics for the fidelity and utility of explanations
  - Interpretability for multimodal and longitudinal patient data
  - Human-centric evaluation of explanations and regulatory aspects of medical XAI

Preliminary program

For this edition, iMIMIC is merged with the UNSURE Workshop and the Uncertainty Tutorial. Exact times are subject to change.

To be confirmed.

Paper submission

Authors should prepare a manuscript of 8–10 pages, including references. Manuscripts must be formatted and anonymized according to the Lecture Notes in Computer Science (LNCS) style. Proceedings will follow the MICCAI Springer publication model.

All submissions will be reviewed by three reviewers. Authors will be asked to disclose any potential conflicts of interest. The selection of papers will be based on their relevance to medical image analysis, the significance of the results, technical and experimental merit, and clear presentation.

Submission details and link will be announced soon.

Venue

The iMIMIC 2026 workshop will take place as part of the MICCAI 2026 conference, held October 4–8, 2026, in Strasbourg, France.

More information regarding the venue can be found at the MICCAI 2026 conference website.

Interested in participating as a sponsor? Email us