ISBI 2024 Tutorials
TUTORIAL 1: AI tools for computational neuroanatomy
Primary Contact
- Eleftherios Garyfallidis (Indiana University)
<elef@iu.edu>
Co-Organizers
- Bramsh Chandio (University of Southern California)
- Shreyas Fadnavis (Harvard University)
- Ariel Rokem (University of Washington, Seattle)
- Jaroslaw Harezlak (Indiana University)
Abstract
The ever-increasing size of neuroimaging datasets, evolving analysis practices, and the need to compare new approaches with current state-of-the-art methods all require access to advanced computational resources and methods. The Diffusion Imaging in Python (DIPY) community has developed an established software ecosystem for analyzing structural and diffusion MRI data. This tutorial teaches ISBI attendees the latest AI tools available in DIPY that can accelerate processing. Given that DIPY provides one of the largest APIs for medical imaging, we will focus on methods that are currently a bottleneck for most researchers, such as segmentation, artifact correction, and AI-driven statistical analysis. Hands-on tutorials in Python and Jupyter notebooks will be provided to all attendees.
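As a taste of the hands-on material, here is a minimal sketch of AI-assisted preprocessing with DIPY, combining Otsu-based brain extraction with the self-supervised Patch2Self denoiser; the file names are placeholders for your own diffusion MRI acquisition, and the exact workflow covered in the tutorial may differ.

    import numpy as np
    from dipy.io.image import load_nifti
    from dipy.io.gradients import read_bvals_bvecs
    from dipy.segment.mask import median_otsu
    from dipy.denoise.patch2self import patch2self

    # Placeholder file names; substitute your own dMRI acquisition.
    data, affine = load_nifti("dwi.nii.gz")
    bvals, bvecs = read_bvals_bvecs("dwi.bval", "dwi.bvec")

    # Brain extraction driven by the b=0 volumes.
    b0_idx = np.where(bvals < 50)[0].tolist()
    _, mask = median_otsu(data, vol_idx=b0_idx, numpass=2)

    # Self-supervised denoising (Patch2Self), one of DIPY's AI-based tools.
    denoised = patch2self(data, bvals)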
TUTORIAL 2: Brain Connectome Analysis with Graph Neural Networks
Primary Contact
- Carl Yang (Emory University)
<j.carlyang@emory.edu>
Co-Organizers
- Hejie Cui (Emory University)
- Xuan Kan (Emory University)
Abstract
Mapping the connectome of the human brain using structural or functional connectivity has become one of the most pervasive paradigms for neuroimaging analysis. Recently, Graph Neural Networks (GNNs), motivated by geometric deep learning, have attracted broad interest owing to their established power for modeling complex networked data. Despite their superior performance in many fields, there has not yet been a systematic tutorial on practical GNNs for brain network analysis. In this tutorial, we will cover (1) a summary of brain network construction pipelines for both structural and functional neuroimaging modalities; (2) a modular breakdown of fundamental GNN designs for brain networks, with a set of recommendations on generally effective recipes based on empirical observations; (3) hands-on instructions for our out-of-the-box Python package BrainGB, available at https://braingb.us with models, tutorials, and examples; and (4) more advanced GNN designs and training strategies for brain network analysis, along with future directions.
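To make the GNN pipeline concrete, the sketch below shows the general pattern on a toy connectivity matrix using PyTorch Geometric; this is a generic illustration (not BrainGB's actual API), using the common choices of connectivity-profile node features, top-percentile edge sparsification, and a mean-pooled readout for graph-level prediction.

    import torch
    import torch.nn.functional as F
    from torch_geometric.data import Data
    from torch_geometric.nn import GCNConv, global_mean_pool

    # Toy symmetric connectivity matrix for N ROIs (stand-in for real data).
    N = 90
    conn = torch.rand(N, N)
    conn = (conn + conn.T) / 2

    # Keep only the strongest 10% of connections, a common sparsification step.
    thr = conn.flatten().quantile(0.9)
    edge_index = (conn > thr).nonzero().t()

    # Node features: each ROI's full connectivity profile (one common choice).
    graph = Data(x=conn, edge_index=edge_index)

    class BrainGNN(torch.nn.Module):
        def __init__(self, in_dim, hid=64, classes=2):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hid)
            self.conv2 = GCNConv(hid, hid)
            self.head = torch.nn.Linear(hid, classes)

        def forward(self, data):
            h = F.relu(self.conv1(data.x, data.edge_index))
            h = F.relu(self.conv2(h, data.edge_index))
            batch = torch.zeros(h.size(0), dtype=torch.long)  # single graph
            return self.head(global_mean_pool(h, batch))

    logits = BrainGNN(in_dim=N)(graph)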
TUTORIAL 3: Computational Pathology Tutorial: Clinical Insights and Methodological Advances
Primary Contacts
- Maria Vakalopoulou (CentraleSupélec; Archimedes Unit, Greece)
<maria.vakalopoulou@centralesupelec.fr>
- Stergios Christodoulidis (CentraleSupélec)
<stergios.christodoulidis@centralesupelec.fr>
Co-Organizers
- Dimitris Samaras (Stony Brook University)
- Ioannis Mountzios (Henry Dunant Hospital Center)
- Siddhesh Thakur (Indiana University)
- Kun Huang (Indiana University)
Abstract
Digital pathology has revolutionized histopathological analysis by leveraging sophisticated computational techniques to augment disease diagnosis and prognosis. In particular, recent deep learning methods offer a promising direction for processing these data across different tasks and clinical endpoints. This tutorial aims to provide a thorough presentation of the clinical problems as well as recent methodological advances in computational pathology. Within its scope, participants will be introduced to the clinical and biological questions, as well as to the practicalities of working with digitized histopathological tissue slides. Furthermore, comprehensive presentations of state-of-the-art methods will be given, covering analysis at multiple magnifications (cell level, WSI level) and different methodological formulations, including discriminative and generative ones. The tutorial will combine a theoretical review of these topics with hands-on demonstrations. We recommend that attendees bring their own laptops to run the provided code during the tutorial. The tutorial will be self-contained, covering all aspects of digital pathology, from the basics to current state-of-the-art methods, as well as more advanced methods in the field and their path toward clinical use.
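For a flavor of the practicalities, here is a minimal, hypothetical sketch of reading tiles from a whole-slide image with the openslide-python package, the kind of WSI handling the hands-on sessions build on; the slide path, tile coordinates, and background threshold are placeholders.

    import numpy as np
    import openslide

    # Placeholder path; any pyramidal WSI format OpenSlide supports.
    slide = openslide.OpenSlide("slide.svs")
    print(slide.dimensions, slide.level_count, slide.level_downsamples)

    # Read a 512x512 tile at level 0; (x, y) are level-0 coordinates.
    tile = slide.read_region(location=(10000, 10000), level=0, size=(512, 512))
    tile = np.asarray(tile.convert("RGB"))

    # Naive background filter: keep tiles with enough stained tissue.
    is_tissue = (tile.mean(axis=2) < 220).mean() > 0.5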
TUTORIAL 4: DiMEDIA: Diffusion Models in Medical Imaging and Analysis
Primary Contact
- Sotirios Tsaftaris (University of Edinburgh; Archimedes Unit, Greece)
<s.tsaftaris@ed.ac.uk>
Co-Organizers
- Julia Wolleb (University of Basel)
- Yuyang Xue (University of Edinburgh)
- Maria Nefeli Gkouti (Archimedes Unit, Greece)
Abstract
There has been an explosion of developments in generative models in machine learning (including Variational Auto-Encoders or VAEs, Generative Adversarial Networks or GANs, and Normalizing Flows or NFs) that enable us to generate high-quality, realistic synthetic data such as high-dimensional images, volumes, or tensors. Recently, a (re)newed breed of generative models, Diffusion Models, has shown impressive ability in generating high-quality imaging data. Applications of diffusion models in medical image analysis are already appearing in image reconstruction, denoising, anomaly detection, segmentation, data generation, and causality. This tutorial presents an overview of generative modelling, focusing on diffusion models (theory and learning tricks). We will discuss applications in the medical imaging field and survey existing open challenges. It builds on the highly successful and sold-out tutorial at MICCAI 2023.
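For orientation, the core of a denoising diffusion model (DDPM-style) fits in a few lines; the sketch below shows the closed-form forward noising step and an epsilon-prediction training loss in PyTorch, where model(x_t, t) is a hypothetical noise-prediction network, not part of the tutorial materials.

    import torch

    # Linear noise schedule beta_t, as in the original DDPM formulation.
    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)

    def q_sample(x0, t, noise):
        """Forward diffusion: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
        ab = alphas_bar[t].view(-1, 1, 1, 1)
        return ab.sqrt() * x0 + (1 - ab).sqrt() * noise

    def loss_step(model, x0):
        """One training step: predict the noise added at a random timestep."""
        t = torch.randint(0, T, (x0.size(0),))
        eps = torch.randn_like(x0)
        return torch.nn.functional.mse_loss(model(q_sample(x0, t, eps), t), eps)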
TUTORIAL 5: Explainable Artificial Intelligence in Biomedical Imaging
Primary Contact
- Kalliopi V. Dalakleidi (National Technical University of Athens)
<kdalakleidi@biosim.ntua.gr>
Co-Organizers
- Nicolas Karakatsanis (Cornell University)
- Ioanna Chouvarda (Aristotle University of Thessaloniki)
- Theofanis Ganitidis (National Technical University of Athens)
- Dimitris Fotopoulos (Aristotle University of Thessaloniki)
Abstract
Artificial Intelligence in Biomedical Imaging, though increasingly popular, has so far had limited clinical impact, since robust, generalizable models whose decisions can be explained to the end user are still sparse. One approach to this challenge is applying Explainable Artificial Intelligence (XAI) methods in Biomedical Imaging. Visual-based XAI methods, where the explanation is provided directly on the input image, are of special importance in medical imaging. For visual-based approaches, the main idea is to analyse which parts of the image led to the resulting decision. When existing computer-aided diagnosis approaches cannot be explained in visually meaningful ways, non-visual XAI methods for biomedical imaging can be used instead, such as case-based, textual, and auxiliary explanations. Case-based explanations rely on specific examples, such as similar input images or counterfactuals. Textual XAI approaches convey additional information through explanations expressed in natural language. Auxiliary measures mainly provide additional information and can be presented in tabular or graphical form. The first part of the tutorial will be a lecture introducing the XAI methods taxonomy (post-hoc vs. ad-hoc, visual vs. non-visual, local vs. global, model-specific vs. model-agnostic, high-resolution vs. low-resolution), with examples of applications across biomedical imaging modalities (CT, MRI, PET/SPECT, Ultrasound). The second and third parts of the tutorial will be hands-on sessions on visual-based and non-visual-based methods, respectively. A key remaining challenge in Explainable Artificial Intelligence in Biomedical Imaging is ensuring that XAI methods are robust and reliable. Recent research efforts in computer vision investigating whether XAI methods are robust to small perturbations in the data, different model architectures, different cross-validation approaches, or different hyperparameter tuning settings will be presented during the fourth part of the tutorial. XAI research in Biomedical Imaging should also address the lack of standardized approaches for evaluating the effectiveness of the explanations that XAI provides to diverse AI stakeholders in clinical decision making. Towards this goal, the fifth part of the tutorial will aim to identify and propose measures of explanation effectiveness in the clinical setting.
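As a simple example of the visual-based methods covered in the second part, the sketch below computes a vanilla gradient saliency map in PyTorch, highlighting which input pixels most influence the class score; model here stands for any differentiable classifier and is an assumption of the sketch, not part of the tutorial materials.

    import torch

    def saliency_map(model, image, target_class):
        """Vanilla gradient saliency: |d score_c / d input| per pixel."""
        model.eval()
        x = image.clone().unsqueeze(0).requires_grad_(True)
        score = model(x)[0, target_class]
        score.backward()
        # Absolute gradient, reduced over channels, gives an HxW heatmap.
        return x.grad.abs().squeeze(0).max(dim=0).values

    # Usage with any torchvision-style classifier on a CxHxW tensor:
    # sal = saliency_map(model, image, target_class=1)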
TUTORIAL 6: Fairness of AI in Medical Imaging (FAIMI)
Primary Contact
- Aasa Feragen (Technical University of Denmark)
<afhar@dtu.dk>
Co-Organizers
- Andrew King (King’s College London)
- Enzo Ferrante (CONICET / Universidad Nacional del Litoral)
- Melanie Ganz (Copenhagen University)
- Eike Petersen (Technical University of Denmark)
- Veronika Cheplygina (IT University of Copenhagen)
- Esther Puyol-Antón (HeartFlow)
- Ben Glocker (Imperial College London)
- Daniel Moyer (Vanderbilt University)
- Tareen Dawood (King’s College London)
- Nina Weng (Technical University of Denmark)
Abstract
During the last 10 years, the research community on fairness, equity, and accountability in machine learning has highlighted the potential risks associated with biased systems in various application scenarios, ranging from face recognition to criminal justice and job hiring assistants. A large body of research has shown that such machine learning systems can be biased with respect to demographic attributes like gender, ethnicity, age, or geographical distribution, presenting unequal behavior on disadvantaged or underrepresented subpopulations. This bias can have a number of sources, ranging from database construction, modeling choices, and training strategies to a lack of diversity in team composition, and can also stem from differences in data quality, prevalence, or other hidden correlations. This tutorial will introduce the audience to standard practices in algorithmic fairness through the lens of medical imaging, and provide case discussions, the current research status, potential pitfalls, and data resources to enable medical imaging researchers to start working on bias and fairness in medical imaging. The tutorial is rooted in the FAIMI community (https://faimi-workshop.github.io), an initiative dedicated to promoting knowledge and research about bias and fairness in the medical imaging community.
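As a concrete starting point for the hands-on discussion, a subgroup audit can be as simple as comparing performance metrics across demographic groups; the sketch below (plain NumPy, illustrative only) reports per-group accuracy and true-positive rate and the largest gaps, in the spirit of equalized-opportunity-style analyses.

    import numpy as np

    def group_gaps(y_true, y_pred, group):
        """Per-group accuracy and TPR, plus the largest pairwise gaps."""
        stats = {}
        for g in np.unique(group):
            m = group == g
            acc = (y_pred[m] == y_true[m]).mean()
            pos = m & (y_true == 1)
            tpr = (y_pred[pos] == 1).mean() if pos.any() else np.nan
            stats[g] = (acc, tpr)
        accs, tprs = zip(*stats.values())
        acc_gap = np.nanmax(accs) - np.nanmin(accs)
        tpr_gap = np.nanmax(tprs) - np.nanmin(tprs)  # equal-opportunity gap
        return stats, acc_gap, tpr_gap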
TUTORIAL 7: Federated Learning in Healthcare
Primary Contact
- Sarthak Pati (Indiana University)
<patis@iu.edu>
Co-Organizers
- Spyridon Bakas (Indiana University)
- Walter Riviera (Intel)
- Hasan Kassem (MLCommons)
Abstract
This tutorial provides a comprehensive introduction to the practical applications of Deep Learning (DL) in the context of Federated Learning (FL), a form of collaborative learning where data is not shared between collaborators. It delves into the deployment of DL models in low-resource environments and FL pipelines in large-scale healthcare settings. The tutorial introduces the Comprehensive Open Federated Ecosystem (COFE), an open-source collection of tools developed for DL in clinical settings. Key contributions of COFE include the graphical interface provided by the Federated Tumor Segmentation (FeTS) Tool, the DL algorithmic core provided by Generally Nuanced Deep Learning Framework (GaNDLF), the Open Federated Learning (OpenFL) library, governance and orchestration provided by MedPerf, and model optimization provided by OpenVINO. Attendees will learn to build models using GaNDLF, adapt existing centralized algorithms to a federated architecture, understand privacy and security considerations in collaborative learning, perform post-training optimization of trained models for low-resource environments, and distribute models securely through the Hugging Face Hub. The tutorial emphasizes the importance of building models that can generalize well in the real world, particularly in healthcare, where resource access inequities are prevalent. It also highlights the increasing importance of FL in overcoming the challenges of sharing data across institutions. The tutorial aims to equip researchers to adapt their existing centralized algorithms to a federated architecture or build new models following the FL principle, and offers non-data scientists an opportunity to learn and discuss these topics.
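At the heart of most FL pipelines, including those built with OpenFL, is a weighted aggregation of client model updates; the sketch below shows a generic FedAvg step over PyTorch state_dicts (not the OpenFL API itself), assuming floating-point parameters.

    import copy

    def fedavg(global_model, client_states, client_sizes):
        """Weighted average of client state_dicts (FedAvg aggregation)."""
        total = float(sum(client_sizes))
        avg = copy.deepcopy(client_states[0])
        for key in avg:
            # Assumes floating-point tensors; integer buffers (e.g., BatchNorm
            # counters) would need special handling in a real pipeline.
            avg[key] = sum(sd[key].float() * (n / total)
                           for sd, n in zip(client_states, client_sizes))
        global_model.load_state_dict(avg)
        return global_model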
TUTORIAL 8: National Cancer Institute Imaging Data Commons as a resource to support transparency, reproducibility, and scalability in imaging AI
Primary Contact
- Andrey Fedorov (Brigham and Women’s Hospital / Harvard Medical School)
<fedorov@bwh.harvard.edu>
Co-Organizers
- Daniela Schacherer (Fraunhofer MEVIS, Bremen)
- David Clunie (DICOM Standards Committee)
- André Homeyer (Fraunhofer MEVIS, Bremen)
- Ulrike Wagner (Frederick National Laboratory for Cancer Research)
- Erika Kim (US National Cancer Institute [NCI] Data Ecosystems Branch)
- Ron Kikinis (Brigham and Women’s Hospital / Harvard Medical School)
Abstract
NCI Imaging Data Commons (IDC) (https://imaging.datacommons.cancer.gov/) is a cloud-based environment containing publicly available cancer imaging data co-located with analysis and exploration tools and resources. IDC is a node within the broader NCI Cancer Research Data Commons (CRDC) (https://datacommons.cancer.gov/) infrastructure, which provides secure access to comprehensive, diverse, and expanding multi-modality collections of cancer research data, including genomics, proteomics, and clinical trial data. As of January 2024, IDC hosts over 50 TB of public radiology and digital pathology images and image-derived data, all in the standard Digital Imaging and Communications in Medicine (DICOM) representation, side-by-side with tools to support search, visualization, and analysis of the data. Recent studies have demonstrated the utility of IDC in facilitating reproducible imaging AI studies, in enabling the development and evaluation of new AI methods, and in applying AI tools to enrich existing imaging collections with annotations and other analysis results. In this tutorial, participants will be familiarized with IDC through a combination of lectures and hands-on exercises. They will learn the basics of using IDC to search, access, and visualize image data, as well as how to develop IDC-based AI workflows for radiology and digital pathology applications.
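For a feel of programmatic access, IDC's DICOM metadata can be queried with standard BigQuery tooling; the sketch below is illustrative only, assumes Google Cloud credentials and a billing project, and the table and collection names used here (bigquery-public-data.idc_current.dicom_all, tcga_luad) should be verified against the current IDC release and documentation.

    from google.cloud import bigquery

    client = bigquery.Client()  # assumes application-default credentials

    # Table and collection names are assumptions; check the IDC docs.
    query = """
        SELECT SeriesInstanceUID, Modality, collection_id, gcs_url
        FROM `bigquery-public-data.idc_current.dicom_all`
        WHERE collection_id = 'tcga_luad' AND Modality = 'CT'
        LIMIT 10
    """
    for row in client.query(query).result():
        print(row.SeriesInstanceUID, row.gcs_url)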