Bram van Ginneken
Radboud University, The Netherlands
Why medical image analysis should be open. Really open, not open as in OpenAI
Thursday | April 9, 2026 | 9:00 – 10:00
Abstract: We have passed a tipping point in medical image analysis. Interpreting medical images for the direct benefit of patients, once the exclusive domain of human specialists, can now be done by computers, thanks to the breakthrough of deep learning.
This should affect our choice of research topics, and this new agenda is the topic of this talk. I will identify topics that we should focus on less, such as developing yet another segmentation method shown to be superior to a poorly implemented baseline, and topics that should receive more attention, such as methods for efficient data annotation and benchmarking.
Above all, I will argue that science should be much more open in order to have more impact. I will show the devastating effect of keeping results closed, which leads to excessive research waste. Our publications nowadays contain sections on ‘Code availability’ and ‘Data availability’, and this is where authors write down creative excuses for why they do not share their code and data. This should change.
Biography: Bram van Ginneken studied physics and earned a PhD on detecting tuberculosis from chest radiographs. He has led medical image analysis research groups at University Medical Center Utrecht and Radboud UMC in Nijmegen, the Netherlands. He also spent time at the University of Chicago and the University of Iowa, and worked for 15 years at the Fraunhofer Institute for Digital Medicine MEVIS in Bremen, Germany. His PhD research resulted in CAD4TB, medical device software now installed in over 85 countries worldwide, making it the most widely used autonomous AI solution for medical image interpretation. In 2014, he founded Thirona, a company that develops CT lung image analysis software. He pioneered the concept of challenges in medical image analysis and created grand-challenge.org. In 2024, he founded Plain Medical, a company developing AI solutions to reduce the workload of radiologists.
Faisal Mahmood
Harvard Medical School; Brigham and Women’s Hospital; Massachusetts General Hospital, USA
Multimodal, Generative, and Agentic AI for Healthcare
Thursday | April 9, 2026 | 14:00 – 15:00
Abstract: Advances in digital pathology and artificial intelligence have made it possible to build models for objective diagnosis, prognosis, and prediction of therapeutic response and resistance. In this talk we will discuss our work on: (1) data-efficient methods for weakly supervised whole-slide classification, with examples in cancer diagnosis and subtyping (Nature BME, 2021) and in identifying origins for cancers of unknown primary (Nature, 2021); (2) discovering integrative histology-genomic prognostic markers via interpretable multimodal deep learning (Cancer Cell, 2022; IEEE TMI, 2020; ICCV, 2021; CVPR, 2024; ICML, 2024); (3) building unimodal and multimodal foundation models for pathology, contrasting with language and genomics (Nature Medicine, 2024a; Nature Medicine, 2024b; CVPR, 2024); (4) developing a universal multimodal generative co-pilot and chatbot for pathology (Nature, 2024); and (5) 3D computational pathology (Cell, 2024).
Biography: Dr. Faisal Mahmood is an Associate Professor at Harvard Medical School and in the Division of Computational Pathology at Brigham and Women’s Hospital and Massachusetts General Hospital. He is a full member of the Dana-Farber Cancer Institute / Harvard Cancer Center and an Associate Member of the Broad Institute of MIT and Harvard. His laboratory develops foundation models and generative and agentic AI algorithms, methods, and techniques for healthcare, with a particular focus on disease diagnosis, prognosis, and therapeutic response prediction. Dr. Mahmood’s lab has developed several widely used methods and algorithms for digital and computational pathology, and his lab’s work has been published in major scientific journals. He is also a principal investigator on several large NIH and ARPA-H grants. Dr. Mahmood is also the Scientific Co-Founder of Modella AI, a company focused on developing generative and agentic AI tools for healthcare.
Mauricio Reyes
University of Bern, Switzerland
Robustness by Design: Clinical Metrics for Imaging AI
Friday | April 10, 2026 | 9:00 – 10:00
Abstract: Progress in medical imaging AI is often measured by improvements in benchmark performance. However, in clinical practice, average or peak accuracy is rarely what determines real-world impact. What matters instead is how systems fail, how they behave over time, and how effectively humans can detect, understand, and correct these failures.
In this keynote, I argue that clinical value in AI should be designed and evaluated around three principles: robustness, reliability, and resilience. Together, these dimensions shift the focus from static performance metrics to the dynamics of failure, adaptation, and long-term system behavior.
Using examples from medical imaging research and deployed AI systems, I will illustrate how clinically meaningful metrics emerge from analyzing failure patterns, temporal drift, and human-in-the-loop interactions. I will further show how interpretability can act as a learning signal, enabling actionable model improvement rather than post-hoc explanation, and discuss the implications of this perspective for the next generation of multimodal and foundation models in healthcare.
Biography: Professor Mauricio Reyes began his academic journey in 2003 when he was awarded a French-Chilean scholarship to conduct his PhD studies at INRIA, France. His research work focused on developing AI technologies to improve the workflow for cancer patients.
His PhD thesis resulted in the development of novel image reconstruction techniques for respiratory motion compensation during PET imaging. This research work significantly improved the assessment of treatment response in lung cancer patients, demonstrating the potential of medical image computing technologies applied to medicine.
Early in his postdoctoral years, Professor Reyes developed innovative high-throughput computational analysis methodologies for in-silico orthopaedic implant design. This research work was highlighted by the Swiss National Science Foundation as an example of cutting-edge Swiss research. He further translated this technology, changing how orthopaedic implants are designed today using computational tools. In 2008, he co-founded Crisalix, a successful Swiss company offering AI-based technologies for simulation of reconstructive breast surgery.
His work in computational modelling continued towards the challenging task of modelling pathology. Through EU-funded projects, he began in 2010 to develop AI solutions for neuro-oncology, with a particular focus on automated brain tumor burden assessment. His seminal research work led him to receive the MICCAI Young Scientist Publication award in 2016.
His translational research efforts led to the first-of-its-kind FDA-approved AI technology for brain tumor patients, and to a Boston-based startup with a subsidiary in Switzerland that further develops AI technologies. Throughout his career, Professor Reyes has raised over 10M EUR in funding and authored over 350 articles, with over 22,000 citations and an h-index of 60. His work has pioneered important aspects of robust AI and explainable AI (XAI) in the field of medical image analysis.
Polina Golland
MIT, USA
AI for Image-Guided Navigation
Friday | April 10, 2026 | 14:00 – 15:00
Abstract: Machine learning has brought major improvements in image registration and segmentation accuracy in the context of large medical image datasets. In this talk, I will discuss our recent work that aims to similarly advance real-time image guidance of interventions. We have developed a novel approach to rapid 2D/3D registration that is specifically designed to support image-guided interventions where a 3D volume (CT or MRI) is acquired preoperatively and 2D images (such as X-ray) are used to support navigation during the procedure. XVR is a fully automated framework for training patient-specific neural networks for 2D/3D registration. XVR uses physics-based simulation to generate virtually infinite training data from a patient’s own preoperative volumetric imaging, avoiding the algorithmic bias inherent to supervised models. Furthermore, XVR requires about 5 min of training per patient, making it suitable for emergency interventions as well as planned procedures. We demonstrate the benefits of our approach on a wide range of 2D/3D registration tasks, showing dramatic improvements in the accuracy and speed of image alignment. XVR is open-source software, freely released with the goal of eliminating 2D/3D registration as a bottleneck in the advancement of intraoperative image guidance.
Joint work with Vivek Gopalakrishnan, Neel Dey, David-Dimitris Chlorogiannis, Andrew Abumoussa, Darren B. Orbach, and Sarah Frisken.
Biography: Polina Golland is a Sunlin (1966) and Priscilla Chou Professor of Electrical Engineering and Computer Science at MIT and a principal investigator in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Polina’s primary research interest is in developing novel machine learning and geometric techniques for medical image analysis and understanding. With her students, Polina has demonstrated novel approaches to image segmentation, shape analysis, functional image analysis, and population studies. She has served as an associate editor of the IEEE Transactions on Medical Imaging and of the IEEE Transactions on Pattern Analysis and Machine Intelligence. Polina is currently on the editorial board of the journal Medical Image Analysis. She is a Fellow of the International Society for Medical Image Computing and Computer Assisted Interventions (MICCAI) and of the American Institute for Medical and Biological Engineering (AIMBE).
Greg Slabaugh
Queen Mary University of London
From Interpretable Multimodal Models to Foundation Models in Biomedical Imaging
Saturday | April 11, 2026 | 9:00 – 10:00
Abstract: Biomedical imaging exists within a high-dimensional space of modalities, tasks, and anatomies — a challenge that can be conceptually framed as a tensor. Addressing this space requires models that are not only accurate, but also interpretable and capable of generalization.
In this keynote, I will explore the challenges and opportunities of multi-modal, multi-anatomy, and multi-task biomedical imaging, tracing a trajectory from specialized, interpretable models toward generalist, modality-aware foundation models. I will begin with examples of domain-specific architectures, including graph-based models for multistain pathology and multimodal fusion networks that integrate histology with molecular data. I will then describe recent work on a foundation model for ultrasound imaging, designed to learn transferable representations across anatomy and diagnostic tasks.
The talk will culminate with a vision for healthcare digital twins — computational representations of individual patients that integrate imaging, physiology, and molecular data over time. Drawing on recent work in cardiology, including aortic stenosis, I will illustrate how digital twins represent the clinical realization of scalable, interpretable AI. Together, these developments point toward a future in which data-driven models enable robust, personalized, and biologically grounded healthcare.
Biography: Greg Slabaugh is Director of the Digital Environment Research Institute and Professor of Computer Vision and AI at Queen Mary University of London. His career spans academia and industry, including roles at Huawei, Medicsight, and Siemens, where he researched technologies deployed in smartphones, FDA-cleared medical devices, and ultrasound machines. He has authored over 200 peer-reviewed publications and holds 44 patents. At Queen Mary, Greg leads a multidisciplinary AI research team with a focus on healthcare, digital twins, and multimodal modeling.