This workshop is part of the MICCAI 2026 conference.
Overview
Machine learning (ML) systems in medical imaging have evolved from task-specific networks to massive, general-purpose Foundation Models and Vision-Language Models (VLMs). While these systems achieve unprecedented performance, they introduce a new layer of complexity and opacity. The "black-box" nature of these billion-parameter models makes their behavior increasingly unpredictable, raising critical concerns about hallucinations, bias amplification, and robustness.
Developing methodologies for explaining model predictions is no longer just a "nice-to-have" feature for user trust; it is an imperative for AI safety and regulatory compliance. With frameworks like the EU AI Act classifying medical AI as high-risk, there is a legal mandate for traceability, transparency, and human oversight. Methodologies that allow physicians to validate model reasoning, identify failure cases, and quantify uncertainty are essential for the ethical deployment of these systems in clinical workflows.
Despite the urgency, the MICCAI community faces a gap between algorithmic performance and clinical interpretability. Standard post-hoc visualization techniques (e.g., saliency maps) are often insufficient for high-dimensional 3D/4D data, failing to provide the causal or concept-based insights required for informed medical decision-making.
The Workshop on Interpretability of Machine Intelligence in Medical Image Computing (iMIMIC) at MICCAI 2026 addresses these challenges by focusing on the next generation of XAI. We need to move beyond "where the model looked" to explaining "why the model decided." This includes inspecting whether models align with pathophysiological domain knowledge, detecting shortcuts in training data, and handling the complexity of multimodal and longitudinal patient data. Ultimately, interpretability is the key to transforming raw predictive power into reliable, legally sound, and clinically actionable intelligence.
Scope
This workshop aims to foster discussion and presentation of ideas to tackle the many challenges and identify opportunities related to the interpretability of ML systems in the context of MICCAI. This marks the 9th edition of iMIMIC, and to our knowledge it remains the only forum at MICCAI dedicated exclusively to the interpretability and explainability of machine learning models.
The primary purposes of this workshop are:
- To introduce the unique challenges of interpreting Generative AI and Foundation Models in the context of MICCAI, distinguishing medical XAI from general computer vision.
- To move the state-of-the-art toward quantitative, causal, and mechanistic interpretability.
- To bring together researchers, clinicians, and regulatory experts to discuss the gap between technical explanations and human-centric clinical needs.
- To propose objective benchmarks and metrics for measuring the fidelity, robustness, and utility of explanations.
Covered topics include but are not limited to:
- Interpretability of Foundation Models: Explaining Vision-Language Models (VLMs) and Generative Medical AI.
- Beyond Heatmaps: Concept-based interpretability, prototype learning, and mechanistic interpretability.
- Multimodal XAI: Explaining decisions derived from heterogeneous data (Images + EHR + Genomics).
- Longitudinal XAI: Interpreting disease progression and temporal dynamics in patient trajectories.
- Quantitative Evaluation: Metrics and benchmarks for assessing the fidelity and robustness of explanations.
- Uncertainty & Reliability: Disentangling aleatoric vs. epistemic uncertainty as a proxy for interpretability.
- Human-AI Collaboration: Conversational XAI, interactive explanations, and their impact on clinical workflow.
- Ethical & Regulatory Aspects: XAI for bias detection, fairness, and compliance with the EU AI Act.
Program
Preliminary program: in this edition, iMIMIC is held jointly with the UNSURE Workshop and the Uncertainty Tutorial. Exact times are subject to change.
- 08:00 – 10:00: Uncertainty Tutorial – Part I
- 10:00 – 10:30: Coffee Break
- 10:30 – 11:30: Uncertainty Tutorial – Part II
- 11:30 – 12:30: UNSURE – Part I
- 12:30 – 13:30: Lunch
- 13:30 – 14:30: UNSURE – Part II
- 14:30 – 15:30: iMIMIC – Part I
- 15:30 – 16:00: Coffee Break + Joint Poster Session (UNSURE + iMIMIC)
- 16:00 – 17:00: iMIMIC – Part II
- 17:00 – 18:00: Joint Panel Discussion (Uncertainty Tutorial + UNSURE + iMIMIC)
Keynote Speaker
To be confirmed.
Submission
Authors should prepare a manuscript of 8–10 pages, including references. The manuscript should be formatted according to the Lecture Notes in Computer Science (LNCS) style and anonymized for double-blind review. Proceedings will be published following the MICCAI Springer publication model.
All submissions will be reviewed by three reviewers, and authors will be asked to disclose any potential conflicts of interest. Papers will be selected based on their relevance to medical image analysis, the significance of the results, technical and experimental merit, and clarity of presentation.
Submission details and link will be announced soon.
Important Dates
- Opening of submission system: TBD
- Paper submission due: TBD
- Reviews due: TBD
- Notification of paper decisions: TBD
- Camera-ready papers due: TBD
- Workshop: October 2026 (exact date TBD)
Venue
The iMIMIC 2026 workshop will take place as part of the MICCAI 2026 conference, held October 4–8, 2026 in Strasbourg, France.
More information regarding the venue can be found at the MICCAI 2026 conference website.
Organizing Team
General Chairs
- Mauricio Reyes, University of Bern, Switzerland.
- Jaime Cardoso, INESC Porto, Universidade do Porto, Portugal.
- Jayashree Kalpathy-Cramer, University of Colorado, USA.
- Shangqi Gao, University of Cambridge, United Kingdom.
- Dwarikanath Mahapatra, Khalifa University, Abu Dhabi, United Arab Emirates.
- Nguyen Le Minh, Japan Advanced Institute of Science and Technology, Japan.
- Mara Graziani, IBM Research Europe, Switzerland.
- Pedro Abreu, CISUC and University of Coimbra, Portugal.
- Hao Chen, Hong Kong University of Science and Technology, Hong Kong.
- Wilson Silva, Utrecht University and the Netherlands Cancer Institute, The Netherlands.
- José Amorim, CISUC and University of Coimbra, Portugal.
Interested in sponsoring the workshop? Email us