Digital Twins for Optimizing Oncological Interventions
Monday | April 14, 2025 | 8:30 – 12:00
Abstract: Digital twins are much more than modeling and simulation. Digital twins are a framework for continuous bi-directional communication between a physical counterpart (i.e., the patient) and their virtual representation (e.g., a tumor growth model) with predictive capability that informs decision-making [1]. There is significant motivation and opportunity to apply a digital twin framework to challenges in oncology [2], [3] to improve clinical outcomes on a patient-specific basis [4]. Despite the potential of digital twins, their application is largely confined to proofs of concept that require substantial expertise and resources to develop, deploy, and maintain. A fundamental reason is the absence of standardized methods and software tools for creating and scaling patient-specific digital twins in a generalizable way. This tutorial will present mathematical foundations, conceptual frameworks, and software tools recently developed at the Oden Institute for Computational Engineering and Sciences to design and deploy practical patient-specific digital twins.
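To make the predict–observe–update loop concrete, the minimal sketch below calibrates an illustrative logistic tumor-growth model to synthetic volume measurements and forecasts forward. The model form, the data, and all parameter values are assumptions for illustration only, not the tutorial's actual framework or software.

```python
# Minimal sketch of the "predict, observe, update" loop behind a tumor-forecasting
# digital twin. The logistic growth law and all numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def logistic_volume(t, r, K, V0):
    """Closed-form logistic tumor volume V(t) with growth rate r and carrying capacity K."""
    return K / (1.0 + (K / V0 - 1.0) * np.exp(-r * t))

# Hypothetical longitudinal tumor-volume measurements (days, mm^3) for one patient.
t_obs = np.array([0.0, 14.0, 28.0, 42.0])
v_obs = np.array([1.0, 1.8, 3.1, 4.9])

# "Update": re-calibrate patient-specific parameters each time new imaging arrives.
(r_hat, K_hat), _ = curve_fit(
    lambda t, r, K: logistic_volume(t, r, K, v_obs[0]),
    t_obs, v_obs, p0=[0.05, 20.0], bounds=([1e-4, 1.0], [1.0, 100.0])
)

# "Predict": forecast forward to inform a hypothetical treatment decision point.
t_future = np.linspace(0.0, 90.0, 7)
forecast = logistic_volume(t_future, r_hat, K_hat, v_obs[0])
print(f"calibrated r={r_hat:.3f}/day, K={K_hat:.1f} mm^3")
print("forecast volumes:", np.round(forecast, 2))
```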

Anirban Chaudhuri
The University of Texas at Austin, USA

David A Hormuth, II
The University of Texas at Austin, USA

Michael Kapteyn
The University of Texas at Austin, USA

Chengyue Wu
The University of Texas MD Anderson Cancer Center, USA
MHub.ai – Making AI in Medical Imaging Simple and Reproducible
Monday | April 14, 2025 | 8:30 – 12:00
Abstract: Machine learning (ML) holds great promise for medical imaging but faces challenges in clinical adoption due to the lack of accessible and well-documented models. Even available models often require significant effort to implement, resulting in an “implementation gap” between research and clinical practice. Many models are developed independently, using varying datasets and technologies, complicating direct comparisons. Moreover, input compatibility issues arise when scaling these models to larger datasets, with differing formats such as DICOM and NIfTI further complicating usage.
Method: We have developed MHub, a one-command framework for running ML models with a focus on medical imaging. MHub models are containerized using Docker, bundling inference code, dependencies, and model weights. Native integration of medical imaging formats allows zero-configuration execution on DICOM data. Each model follows a structured template with Dockerfiles, MHub-IO modules, workflows, and metadata. The platform includes the model collection, online documentation, and a self-service contribution system. MHub-IO offers tools for data organization and conversion, making it framework-agnostic and compatible with DICOM-based workflows.
Results: MHub has successfully deployed 26 models supporting multiple imaging modalities, including CT and MR. These models work directly with DICOM data and generate harmonized outputs, enabling easy comparison. Sample input and output data are provided for each model.
Discussion: MHub offers a simple, DICOM-compatible, reproducible solution for the community. While it introduces some requirements for developers, the ease of use for end users justifies these efforts, promoting broader model adoption.
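To illustrate the one-command workflow described above, the sketch below invokes a containerized MHub model on a DICOM folder from Python. The image name (mhubai/totalsegmentator) and the container mount points reflect the public MHub documentation at the time of writing and should be verified at mhub.ai; the local paths are hypothetical.

```python
# Illustrative sketch of the "one command" execution pattern: a containerized
# MHub model applied directly to a DICOM folder. Image name and mount points
# are assumptions to be checked against the MHub documentation (mhub.ai).
import subprocess
from pathlib import Path

dicom_in = Path("/data/patient001/ct_dicom").resolve()    # hypothetical input path
output_dir = Path("/data/patient001/mhub_out").resolve()  # hypothetical output path
output_dir.mkdir(parents=True, exist_ok=True)

cmd = [
    "docker", "run", "--rm", "-t", "--gpus", "all",
    "-v", f"{dicom_in}:/app/data/input_data:ro",
    "-v", f"{output_dir}:/app/data/output_data",
    "mhubai/totalsegmentator:latest",  # one of the published MHub models
]
subprocess.run(cmd, check=True)
```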

Leonard Nurnberg
AIM, USA

Suraj Pai
Manipal Institute of Technology, India

Granger Sutton
National Cancer Institute (NCI), USA

Jue Jiang
Memorial Sloan Kettering Cancer Center, USA

Curtis Lisle
KnowledgeVis, LLC, USA

Linmin Pei
Frederick National Laboratory for Cancer Research, USA

Andrey Fedorov
Harvard Medical School and Brigham & Women’s Hospital, USA

Hugo Aerts
Harvard Medical School and Brigham & Women’s Hospital, USA
Multi-Site Medical Imaging Data Adaptation and Harmonization
Monday | April 14, 2025 | 8:30 – 12:00
Abstract: Multi-site data pooling is increasingly used in medical imaging to boost sample sizes, enhance subject cohorts, and improve the statistical power of findings. Learning-based models, like machine learning (ML) and deep learning (DL), require extensive training data for optimal performance. However, combining data from different sites can introduce non-biological variations, especially in MRI data, due to differences in scanner vendors, acquisition protocols, field strengths, and software/hardware upgrades.
Despite efforts to standardize acquisition protocols and use imaging phantoms for calibration, some variations, such as software and hardware upgrades, remain challenging. Consequently, ML and DL models trained on multi-site data often suffer from site-related variations, leading to suboptimal performance.
To address these challenges, various methods have been proposed to harmonize data before model training or analysis. Traditional image processing methods normalize raw image data to a predefined intensity range, making images from different sites more comparable. Recently, more refined data-driven methods have emerged, including feature-level methods that harmonize pre-extracted image features and image-level methods that harmonize 3D image volumes or 2D slices.
There is a growing trend to use ML and DL generative models, such as GANs, VAEs, flow-based models, and diffusion models, for image-level harmonization. These data-driven methods have shown superior performance compared to traditional methods in multi-site MRI harmonization.
This tutorial will introduce foundational and state-of-the-art approaches to multi-site data harmonization and demonstrate relevant datasets, off-the-shelf toolboxes, pre-trained models, and evaluation metrics for assessing harmonization quality.
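As a concrete example of the traditional, intensity-based harmonization step mentioned above, the minimal sketch below rescales volumes from two hypothetical sites onto a common predefined range. It is a generic illustration with synthetic data, not one of the toolboxes or pre-trained models demonstrated in the tutorial.

```python
# Minimal sketch of traditional intensity harmonization: rescaling raw MRI
# intensities from different sites to a common, predefined range.
import numpy as np

def rescale_to_range(volume: np.ndarray, lo: float = 0.0, hi: float = 1.0,
                     p_low: float = 1.0, p_high: float = 99.0) -> np.ndarray:
    """Clip to robust percentiles, then map intensities linearly onto [lo, hi]."""
    v_min, v_max = np.percentile(volume, [p_low, p_high])
    clipped = np.clip(volume, v_min, v_max)
    return lo + (clipped - v_min) / (v_max - v_min) * (hi - lo)

# Two hypothetical site-specific volumes with very different intensity scales.
site_a = np.random.default_rng(0).normal(300.0, 60.0, size=(64, 64, 32))
site_b = np.random.default_rng(1).normal(1200.0, 250.0, size=(64, 64, 32))

harmonized = [rescale_to_range(v) for v in (site_a, site_b)]
print([(h.min().round(3), h.max().round(3)) for h in harmonized])
```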

Mingxia Liu
University of North Carolina at Chapel Hill, USA

Gang Li
University of North Carolina at Chapel Hill, USA

Mengqi Wu
University of North Carolina at Chapel Hill, USA
Graph Learning for Dynamic Brain Network Analysis
Monday | April 14, 2025 | 13:30 – 17:00
Abstract: This tutorial delves into the emerging field of graph learning for brain network analysis, integrating concepts from neuroscience, machine learning, and network science. Participants will learn to understand the brain as a graph, with nodes representing brain regions or neurons and edges depicting their connections. The tutorial covers essential graph theory methods, including node centrality, community detection, and graph clustering, which serve as foundational tools for analyzing brain connectivity. In addition, it addresses brain network representation and dynamic modeling techniques, such as spectral graph methods and neural ordinary differential equations (ODEs), to capture temporal changes in brain connectivity. The tutorial also explores the application of graph-based approaches for disease diagnosis and biomarker discovery, focusing on how these methods can reveal connectivity patterns linked to neurological and psychiatric disorders. Lastly, it examines the potential of personalized medicine by leveraging individual brain connectivity profiles to inform tailored interventions. Through this comprehensive exploration, the tutorial highlights the transformative role of graph learning in enhancing our understanding of both typical and atypical brain functions.
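As a small taste of the graph-theory building blocks listed above, the sketch below builds a toy brain graph from a synthetic connectivity matrix and computes node centrality and communities with NetworkX. The random matrix and the edge threshold are illustrative assumptions standing in for real fMRI- or DTI-derived connectivity.

```python
# Toy example: treat the brain as a graph (nodes = regions, edges = connections)
# and apply basic graph-theory tools: centrality and community detection.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(42)
n_regions = 20
weights = rng.random((n_regions, n_regions))
conn = (weights + weights.T) / 2.0      # symmetric "connectivity matrix"
np.fill_diagonal(conn, 0.0)
conn[conn < 0.6] = 0.0                  # sparsify: keep only the stronger edges

brain_graph = nx.from_numpy_array(conn)

centrality = nx.degree_centrality(brain_graph)   # fraction of regions each region connects to
hubs = sorted(centrality, key=centrality.get, reverse=True)[:3]
communities = greedy_modularity_communities(brain_graph, weight="weight")

print("candidate hub regions:", hubs)
print("number of communities:", len(communities))
```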

Tingting Dan
University of North Carolina at Chapel Hill, USA

Guorong Wu
University of North Carolina at Chapel Hill, USA

Introduction to Cortical 3-Hinge Gyral Patterns
Monday | April 14, 2025 | 13:30 – 17:00
Abstract: In this tutorial, we present the discovery and development of the 3-hinge gyral pattern and the gyral net. We will also introduce recent efforts to identify 3-hinge correspondences across different subjects and discuss the resulting achievements. Additionally, we will provide related materials, including pre-trained models and sample 3-hinge examples. The tutorial is intended for anyone interested in the 3-hinge as a novel cortical folding pattern. It will cover the basic concepts of the 3-hinge, the rationale for developing 3-hinges as novel cortical landmarks compared with traditional region-of-interest (ROI) atlases, and the methods and algorithms for identifying 3-hinges and finding 3-hinge correspondences across subjects. There will be some hands-on computer work for those interested in viewing 3-hinges across the cortical surface. Slides will be provided in PDF format, along with links to relevant papers available online.

Xiaowei Yu
University of Texas at Arlington, USA
Customized Multiresolution Analysis: Theory and Applications
Monday | April 14, 2025 | 13:30 – 17:00
Abstract: The “Customized Multiresolution Analysis: Theory and Applications” tutorial aims to dissect the structural complexity of well-established multiresolution algorithms, discuss current trends and challenges, and allude to emerging topics in modern biomedical imaging. The idea of customized development of multiscale analysis and synthesis tools will interest aspiring students, creative early-career scientists, professionals seeking continuing education, and established researchers alike. Through an amalgamation of mathematical physics, signal processing, and computer science, this tutorial proposes a novel approach to algorithm design for software development in clinical applications within the ISBI community. The first two hours will be devoted to the fundamentals of wavelets, frames, curvelets, edgelets, and allied topics. The third hour will discuss applications in biomedical imaging, coding and compression of images, and AI’s possible role in processing and refining images. The tutorial is self-contained and promises clarity of thought without compromising rigor. After the formal tutorial session, the instructor will be available to interested participants to initiate collaborative ties and offer the possibility of joining his voluntary mentorship and supervision program.

Alireza Baghai-Wadji
University of Cape Town, South Africa
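As a small companion to the multiresolution fundamentals covered in the first part of this tutorial, the sketch below performs a two-level 2D wavelet decomposition with PyWavelets, thresholds the coefficients as a crude compression step, and reconstructs the image. The synthetic image, the db2 wavelet, and the 90th-percentile threshold are illustrative assumptions only, not tutorial material.

```python
# Toy multiresolution analysis/synthesis: 2-level 2D wavelet decomposition,
# coefficient thresholding (crude compression), and reconstruction.
import numpy as np
import pywt

rng = np.random.default_rng(7)
image = rng.normal(size=(128, 128))                     # stand-in for a biomedical image slice

coeffs = pywt.wavedec2(image, wavelet="db2", level=2)   # multiresolution analysis
arr, slices = pywt.coeffs_to_array(coeffs)

threshold = np.percentile(np.abs(arr), 90)              # keep roughly the largest 10% of coefficients
arr_compressed = pywt.threshold(arr, threshold, mode="hard")

coeffs_back = pywt.array_to_coeffs(arr_compressed, slices, output_format="wavedec2")
reconstruction = pywt.waverec2(coeffs_back, wavelet="db2")    # synthesis
reconstruction = reconstruction[:image.shape[0], :image.shape[1]]

error = np.linalg.norm(image - reconstruction) / np.linalg.norm(image)
print(f"relative reconstruction error after thresholding: {error:.3f}")
```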