TUTORIALS
- Title:
- Artificial Intelligence in Ultrasound Imaging
- Organizers:
- Ruud J.G. van Sloun, Yonina C. Eldar
- Abstract:
-
This course will start with a brief introduction to deep learning and its impact across many domains, including medical imaging in general. We will then briefly outline the strong opportunities for ultrasound imaging, moving from workflow enhancement and image analysis to image formation and acquisition. To that end, we will also discuss the basic principles of ultrasound image acquisition and image formation, along with the specific challenges that may well be addressed using deep learning in the coming years.
We will then briefly recall the fundamentals of deep learning, ranging from understanding the relevance of sequential nonlinear transformations for representation learning to log-likelihood-based optimization of neural network parameters. Optimization aspects such as the impact of local minima and saddle points in the solution space will also be discussed. We will then elaborate on the design of effective neural network architectures in the context of ultrasound imaging. In this part, we place a particular emphasis on model-based deep learning methods, i.e., deep networks that leverage known signal structure by integrating models into deep networks (deep unfolding methods), and deep networks that are integrated into known model-based algorithms (data-driven hybrid algorithms).
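To make the deep unfolding idea concrete, below is a minimal illustrative sketch (not the organizers' implementation): iterations of ISTA for a sparse linear inverse problem are unrolled into a fixed number of layers whose step sizes and soft-thresholds become learnable parameters. The class name, layer count, and training setup are assumptions for illustration only.

```python
# Minimal sketch of deep unfolding (LISTA-style): ISTA iterations for
# min_x 0.5*||y - A x||^2 + lambda*||x||_1 unrolled into layers with
# learnable step sizes and thresholds. Names and sizes are illustrative.
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    def __init__(self, A: torch.Tensor, n_layers: int = 10):
        super().__init__()
        self.register_buffer("A", A)              # known measurement model
        self.n_layers = n_layers
        # per-layer learnable step sizes and soft-threshold levels
        self.step = nn.Parameter(0.1 * torch.ones(n_layers))
        self.thresh = nn.Parameter(0.01 * torch.ones(n_layers))

    @staticmethod
    def soft(x, t):
        return torch.sign(x) * torch.clamp(torch.abs(x) - t, min=0.0)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        x = torch.zeros(self.A.shape[1], device=y.device)
        for k in range(self.n_layers):
            grad = self.A.T @ (self.A @ x - y)    # gradient of the data term
            x = self.soft(x - self.step[k] * grad, self.thresh[k])
        return x

# Usage sketch: train the unrolled network on (y, x_true) pairs with an MSE loss.
A = torch.randn(32, 64) / 8.0
net = UnrolledISTA(A)
x_true = torch.zeros(64); x_true[:5] = 1.0        # a sparse ground-truth signal
y = A @ x_true
loss = torch.mean((net(y) - x_true) ** 2)
loss.backward()                                   # gradients flow to step/thresh
```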
The last part of this tutorial will focus on the wealth of opportunities that deep learning brings for ultrasound imaging. We will discuss neural networks for front-end receive processing, including beamforming, image compounding, clutter suppression, and advanced applications such as super-resolution imaging. We will also discuss the power of end-to-end optimization of entire signal processing chains in ultrasound imaging, from the upstream sensor to the final downstream analysis.
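For reference, here is a minimal sketch of conventional delay-and-sum (DAS) receive beamforming, the classical baseline that the learned front-end processing discussed above aims to improve upon; the geometry, plane-wave transmit assumption, and parameter names are illustrative and not taken from the tutorial itself.

```python
# Minimal sketch of conventional delay-and-sum (DAS) receive beamforming.
# Array geometry, sampling rate, and transmit model are illustrative assumptions.
import numpy as np

def das_beamform(rf, elem_x, fs, c, pixels):
    """rf: (n_elements, n_samples) received RF data
       elem_x: (n_elements,) lateral element positions [m]
       fs: sampling rate [Hz], c: speed of sound [m/s]
       pixels: (n_pixels, 2) array of (x, z) image points [m]"""
    n_el, n_samp = rf.shape
    out = np.zeros(len(pixels))
    for i, (px, pz) in enumerate(pixels):
        t_tx = pz / c                                  # plane-wave transmit: depth only
        t_rx = np.sqrt((elem_x - px) ** 2 + pz ** 2) / c   # per-element receive path
        idx = np.round((t_tx + t_rx) * fs).astype(int)
        valid = idx < n_samp
        out[i] = rf[np.arange(n_el)[valid], idx[valid]].sum()  # coherent sum across elements
    return out
```

Apodization, sub-sample interpolation, and transmit-scheme details are omitted for brevity.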
- Title:
- Vision-Language Multi-modal Learning for Biomedical Images
- Organizers:
- Joonseok Lee, Edward Choi
- Abstract:
-
Vision and language are two prominent modalities in biomedical data (e.g., radiology images paired with free-text reports), and machines must understand both modalities properly to enable next-generation clinical support. In this tutorial, we share multi-modal understanding frameworks from the general domain as well as recent interesting results in the medical domain. The first 1.5-hour session will be led by Prof. Joonseok Lee from Seoul National University (co-affiliated with Google Research), introducing recent multi-modal modeling focused on vision-language, including Transformer-based pre-training approaches. In the second 1.5-hour session, Prof. Edward Choi from KAIST will introduce recent multi-modal learning studies specifically in the medical domain, including applications of pre-trained models and zero-shot diagnosis/generation.
- Title:
- Mapping neural correlates of cognitive processes through task-based functional Magnetic Resonance Imaging (fMRI)
- Organizers:
- Yesika Alexandra Agudelo Londoño
- Abstract:
-
Brain imaging has become a growing field within biomedical imaging. Recent advances have demonstrated its ability to map cognitive functions and to improve our understanding of how the nervous system works. In clinical practice, neuroimaging is currently used as a tool for the pre-surgical preparation of patients who will undergo neurosurgery for epilepsy, malformations, or tumors. Mapping the brain areas involved in cognitive processes helps ensure that the patient suffers minimal damage to these structures and that their functions are preserved, leading to better surgical outcomes. The tutorial therefore covers several concepts and practices for the study of task-based functional magnetic resonance images: brain anatomy, the roles and functioning of the different functional neural networks; the physical principles of MRI and functional sequences; preprocessing and quality analysis; experimental design and its implementation in PsychoPy; and statistical analysis of the results.
- Title:
- Diffusion models: foundations and applications in biomedical imaging
- Organizers:
- Jiaming Song, Hyungjin Chung, Jong Chul Ye
- Abstract:
-
Diffusion models have been successfully applied to tasks such as text-to-image generation, natural language generation, audio synthesis, motion generation, and time series modeling. The ability to model complex, high-dimensional distributions also makes diffusion models strong candidates for solving inverse problems and inferring the underlying signal from measurements. As such, diffusion models have been used in the medical imaging domain, for tasks such as MRI and CT reconstruction, achieving state-of-the-art results in many cases.
The tutorial will consist of three parts. In the first part, we will provide a brief overview of the fundamental mathematical ideas behind diffusion models developed since the summer of 2020. We will start with Denoising Diffusion Probabilistic Models (DDPMs), one of the earliest successful diffusion models, explaining the variational objective function and its connections to denoising autoencoders. We will then discuss the stochastic differential equation perspective of diffusion models, which generalizes DDPMs to continuous time. We will end this part by discussing some advanced techniques that make diffusion models useful, such as how to perform accelerated sampling, how to perform conditional generation, and how to combine diffusion models with existing generative models.
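As an illustration of the DDPM objective mentioned above, here is a minimal sketch of its widely used simplified (noise-prediction) form derived from the variational bound; `model` is a placeholder for any epsilon-prediction network (e.g., a U-Net), and the schedule handling is an assumption for illustration.

```python
# Minimal sketch of the simplified DDPM training objective: the network is
# trained to predict the noise injected at a random diffusion step t.
import torch

def ddpm_loss(model, x0, alphas_cumprod):
    """x0: (batch, ...) clean images; alphas_cumprod: (T,) torch tensor of
    cumulative products of (1 - beta_t) defining the forward noising schedule."""
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    a_bar = alphas_cumprod[t].view(b, *([1] * (x0.ndim - 1)))
    eps = torch.randn_like(x0)
    # forward process: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps
    x_t = torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * eps
    return torch.mean((model(x_t, t) - eps) ** 2)   # predict the injected noise
```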
In the second part, we will focus on using diffusion models as a flexible prior for solving inverse problems, which has more direct connections to medical imaging. We categorize most existing methods into two paradigms: replacement-based and reconstruction-based. We will first introduce the high-level idea behind the two paradigms, which draws on material from the first part (sampling techniques and conditional generation, respectively). We will also survey individual publications on this topic and discuss how they connect to one (or both) of the paradigms. We will conclude this part by discussing how the applicability of diffusion models can be further broadened to blind inverse problems and 3D reconstruction problems.
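Below is a minimal sketch of one common instantiation of the reconstruction-based paradigm (in the spirit of diffusion posterior sampling, not necessarily the variants presented in the tutorial): at each reverse-diffusion step, the sample is corrected with the gradient of a data-fidelity term evaluated on a denoised estimate. The `denoise` callable, forward operator `A`, and step size are illustrative placeholders.

```python
# Sketch of one guidance step in the reconstruction-based paradigm.
import torch

def guided_step(x_t, t, y, A, denoise, step_size=1.0):
    """x_t: current noisy sample; y: measurements; A: forward operator (callable);
    denoise: callable returning the model's estimate of the clean image x0."""
    x_t = x_t.detach().requires_grad_(True)
    x0_hat = denoise(x_t, t)                      # Tweedie-style clean estimate
    residual = torch.sum((y - A(x0_hat)) ** 2)    # data-fidelity term ||y - A x0_hat||^2
    grad = torch.autograd.grad(residual, x_t)[0]
    return x_t.detach() - step_size * grad        # pull the sample toward the data
```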
In the third and last part, we will discuss applications of diffusion models to the biomedical imaging domain. We start by introducing methods that solve CS-MRI and sparse-view/limited-angle CT (SV/LA-CT) as special cases of the inverse problems discussed in the previous part, again categorizing them as replacement-based or reconstruction-based. We then move on to denoising methods applied to various modalities, e.g., MRI, PET, and CT. Finally, we cover methods used for medical image translation and anomaly detection.
- Title:
- Topological Data Analysis for Biomedical Imaging Data
- Organizers:
- Moo K. Chung
- Abstract:
-
Topological data analysis (TDA) is a fast-growing field providing many powerful tools for biomedical imaging data. TDA characterizes topological changes of multivariate representations of imaging data across multiple scales. In doing so, TDA reveals persistent topological patterns in data that are only visible at a multiscale level. Because TDA weights persistent topological features more heavily than fleeting structures, the approach is particularly robust in the presence of imaging noise and artifacts. This is the first TDA tutorial at ISBI. The tutorial aims to teach both the basics and the state of the art in TDA to students and researchers attending ISBI. The expected audience is graduate students and researchers learning TDA for the first time, although existing TDA researchers will also benefit. No prior knowledge of TDA or topology is required.
The tutorial covers three major topics, one hour each, for a total duration of three hours. Moo K. Chung (University of Wisconsin-Madison), Soheil Kolouri (Vanderbilt University), and Hernando Ombao (KAUST) will each give a one-hour lecture. Chung will give an introductory overview of basic concepts in TDA (filtrations, persistence diagrams, barcodes); Kolouri will explain how to compute the Wasserstein distance between persistence diagrams using an existing baseline method and the scalable sliced-Wasserstein distance; Ombao will explain how to transform functional biomedical data such as MEG, EEG, fMRI, and fNIRS into topological descriptors through time-delay embedding and sliding-window embedding.
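As a small illustration of the sliding-window (time-delay) embedding step mentioned above, the sketch below maps a 1D signal (e.g., one EEG or fMRI time series) to a point cloud whose persistent homology can then be computed; the embedding dimension and delay are illustrative choices, not the presenters' settings.

```python
# Minimal sketch of time-delay (sliding-window) embedding: a 1D signal is
# mapped to a point cloud in R^dim built from delayed samples.
import numpy as np

def sliding_window_embedding(signal, dim=3, delay=5):
    """Return an (n_windows, dim) point cloud of delayed samples."""
    n = len(signal) - (dim - 1) * delay
    return np.stack([signal[i : i + (dim - 1) * delay + 1 : delay] for i in range(n)])

t = np.linspace(0, 4 * np.pi, 400)
cloud = sliding_window_embedding(np.sin(t))       # a periodic signal traces a loop,
print(cloud.shape)                                # which appears as a 1-cycle (H1)
```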
The tutorial will consist of a brief theoretical review of the topics with hands-on computer demonstrations. We recommend that attendees bring their own laptops to run the provided code during the tutorial. All tutorial lecture slides, relevant tutorial papers, and computer code will be provided at https://github.com/laplcebeltrami/ISBI2023TDA. After the tutorial, at minimum, attendees will understand how to extract topological features, compute topological distances, and convert functional imaging data into topological descriptors. Any administrative issues related to the tutorial should be directed to mkchung[at]wisc.edu
- Title:
- Computational MRI in the Deep Learning Era: The two facets of acquisition and image reconstruction
- Organizers:
- Philippe Ciuciu, Jeffrey Fessler
- Abstract:
-
This tutorial aims to summarize recent learning-based advances in MRI, concerning both accelerated data acquisition and image reconstruction strategies. It is specifically tailored to graduate students, researchers, and industry professionals working in the medical imaging field who want to know more about the radical shift machine learning has introduced to MRI during the last few years. As MRI is the most widely used medical imaging technique for non-invasively probing soft tissues in the human body (brain, heart, breast, liver, etc.), training PhD students, postdocs, and researchers in electrical and biomedical engineering is strategic for cross-fertilizing the fields and for understanding the ML-related needs and expectations from the MRI side.
In the last decade, the application of Compressed Sensing (CS) theory to MRI has received considerable interest and led to major improvements in accelerating data acquisition without degrading image quality in low acceleration regimes. Two recent complementary research directions are starting to supplant this classical CS setting to reach highly accelerated regimes: first, the design of optimization- and learning-based under-sampling schemes, and second, the advent of machine learning tools (e.g., deep learning) for MR image reconstruction, and eventually the combination of the two. This course therefore focuses on these new trends in CS MRI.
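For orientation, here is a minimal sketch of the retrospective Cartesian undersampling forward model underlying CS-MRI: the measurements are a masked subset of the image's k-space, and the sampling mask is precisely the object that learning-based sampling design optimizes. The mask density, central band, and image size are illustrative assumptions.

```python
# Minimal sketch of the CS-MRI forward model with retrospective Cartesian
# undersampling and the naive zero-filled reconstruction.
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((128, 128))             # stand-in for an MR image

# variable-density mask: always keep central (low-frequency) lines, sample the rest
mask = rng.random(128) < 0.25
mask[54:74] = True                                # fully sampled central band
mask2d = np.tile(mask[None, :], (128, 1))         # mask along one phase-encode axis

kspace = np.fft.fftshift(np.fft.fft2(img))        # simulated fully sampled k-space
y = mask2d * kspace                               # undersampled measurements
zero_filled = np.real(np.fft.ifft2(np.fft.ifftshift(y)))  # aliased naive reconstruction
print(mask.mean())                                # sampling rate ≈ 1 / acceleration factor
```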
- Title:
- Bayesian Inference for Inverse Problems: From Sparsity-Based Methods to Deep Neural Networks
- Organizers:
- Michael Unser, Pakshal Bohra
- Abstract:
-
Inverse problems are central to biomedical imaging, examples being deconvolution microscopy, computed tomography, magnetic resonance imaging, or optical diffraction tomography. This tutorial is centered on Bayesian inference, which is a powerful method for the resolution of such problems. Our goal is to present different flavors of this approach, from classical methods to more recent deep-learning-based methods, in a concise and digestible way.
The tutorial is divided into two parts: model-based approaches and learning-based approaches. We begin the tutorial by briefly describing variational approaches that are in common use for the resolution of ill-posed problems. In particular, we discuss Tikhonov regularization and the transition to more sophisticated (and better-performing) sparsity-based regularizers (TV denoising, wavelet shrinkage). We then introduce the Bayesian formulation of inverse problems in a general setting and highlight the potential of such an approach. We detail the different components of the framework, such as the likelihood function, the prior distribution, the posterior distribution (the main quantity of interest), and the different ways one can use the posterior distribution to perform inference. Next, we dive into the world of stochastic signal models for the specification of the prior distribution. We look at classical Gaussian processes and their non-Gaussian counterparts, which admit a sparse expansion in wavelet-like bases and are thus termed sparse processes. We then focus on maximum a posteriori (MAP) estimators, which we show are compatible with the commonly used variational techniques described at the beginning of the tutorial. We outline some optimization algorithms (e.g., forward-backward splitting, alternating direction method of multipliers) that compute such estimators and illustrate their use on image-reconstruction tasks such as deconvolution and computed tomography.
Finally, we discuss the notion of sampling from the posterior distribution, which allows one to perform more advanced Bayesian inference compared with MAP estimation. We give a short primer on Markov chain Monte Carlo methods, as they enable efficient posterior sampling. To make things clearer, we provide two concrete applications. First, we derive sampling schemes to compute the minimum mean-square error estimators for sparse stochastic processes (SSPs), which we use to develop a statistical benchmarking framework for signal-reconstruction algorithms. Then, we look at an example of uncertainty quantification in simple imaging tasks (e.g., deblurring).
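As a concrete illustration of the MAP/variational estimation and forward-backward splitting mentioned above, below is a minimal sketch of ISTA applied to a sparsity-regularized deblurring problem; the blur kernel, regularization weight, and iteration count are illustrative assumptions, not the tutorial's examples.

```python
# Minimal sketch of forward-backward splitting (ISTA) for
# min_x 0.5*||H x - y||^2 + lam*||x||_1, with H a 2D convolution (deblurring).
import numpy as np

def ista_deconv(y, h, lam=0.01, n_iter=200):
    H = np.fft.fft2(h, s=y.shape)                 # convolution as a pointwise product in Fourier
    Hx = lambda x: np.real(np.fft.ifft2(H * np.fft.fft2(x)))
    Ht = lambda r: np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(r)))
    step = 1.0 / np.max(np.abs(H)) ** 2           # 1 / Lipschitz constant of the data term
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    x = np.zeros_like(y)
    for _ in range(n_iter):
        x = soft(x - step * Ht(Hx(x) - y), lam * step)   # gradient step + proximal step
    return x

y = np.random.default_rng(1).standard_normal((64, 64))   # stand-in for blurred, noisy data
h = np.ones((5, 5)) / 25.0                                # uniform blur kernel
x_hat = ista_deconv(y, h)
```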
We begin the second part of the tutorial by discussing the advent of deep-learning-based methods for the solution of inverse problems (the learning revolution). Specifically, we review the first two generations of neural-network-based methods which can be viewed as the learned counterparts of MAP estimators (Tikhonov regularization and sparsity-promoting techniques). We then mention pitfalls associated with such methods, which are especially relevant in sensitive applications such as biomedical imaging. We also present benchmarking results for CNNs based on the statistical framework for SSPs described in the first part of the tutorial. Next, we move on to the development of posterior-sampling schemes that involve neural-network-based priors. We focus on two kinds of learned priors: implicit priors defined through denoising CNNs and deep generative priors that involve variational autoencoders, generative adversarial networks, and score-based diffusion models. We detail efficient sampling schemes for many of them. We illustrate the power of one such GAN-based method by looking at nonlinear inverse problems such as phase retrieval and optical diffraction tomography.
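To give a flavor of posterior sampling with an implicit denoiser prior, here is a minimal sketch of unadjusted Langevin dynamics in which the prior score is obtained from a Gaussian denoiser via Tweedie's formula; this is one possible scheme, not necessarily the ones presented in the tutorial, and `denoiser`, `A`, `At`, the noise level, and the step size are placeholders.

```python
# Sketch of posterior sampling via unadjusted Langevin dynamics with a
# denoiser-induced prior score, score(x) ≈ (D(x) - x) / sigma^2 (Tweedie).
import torch

def langevin_posterior_sample(y, A, At, denoiser, sigma=0.1, step=1e-3, n_iter=500):
    """y: measurements; A/At: forward operator and its adjoint (callables);
    denoiser: Gaussian denoiser trained at noise level sigma."""
    x = At(y).clone()
    for _ in range(n_iter):
        score_prior = (denoiser(x) - x) / sigma ** 2      # prior score via Tweedie's formula
        score_lik = -At(A(x) - y)                         # likelihood score (unit noise variance)
        noise = torch.randn_like(x)
        x = x + step * (score_prior + score_lik) + (2 * step) ** 0.5 * noise
    return x
```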
- Title:
- Recent Advances in Machine Learning for Image Reconstruction: From Sparse Modeling to Deep Networks
- Organizers:
- Saiprasad Ravishankar, Bihan Wen
- Abstract:
-
Data-driven and machine learning techniques have received increasing attention in recent years for solving various problems in biomedical imaging. Data-driven models and approaches, including dictionary and transform learning and deep learning, deliver promising performance in image reconstruction problems in magnetic resonance imaging, computed tomography, and other modalities relative to traditional approaches built on hand-crafted models such as total variation. The focus of this tutorial is to review recent advances in machine learning for image reconstruction from modeling, algorithmic, and mathematical perspectives. In particular, approaches based on both sparse modeling and deep neural networks will be discussed. Among data-driven and machine learning approaches, the tutorial will first survey methods inspired by sparse modeling, such as dictionary learning and sparsifying transform learning-based image reconstruction, as well as methods integrating sparsity and nonlocal image models.
The tutorial will then introduce modern deep learning-based methods for image reconstruction, presenting numerous methods developed in recent years, including image-domain and sensor-domain convolutional neural network (CNN) denoisers, and hybrid-domain deep learning schemes. More recent advances in deep learning-based image reconstruction, such as those involving generative adversarial networks, diffusion models, transformers, self-supervised learning, reinforcement learning, and ensuring robustness to (e.g., adversarial) perturbations, will also be briefly covered. The tutorial will further discuss the connections between sparse modeling and deep learning methods, and survey recent works that learn deep sparse models for imaging or unify sparse models and deep neural networks into a combined and effective framework. While most of the tutorial will focus on image reconstruction, some recent advances in machine learning for image acquisition will also be covered at the end. The tutorial will cover the background necessary to understand the advanced image reconstruction methods and theory, followed by a deeper coverage of recent advances in machine learning-based image reconstruction for both academic and industry attendees.
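To illustrate the sparsifying transform learning approach mentioned above, below is a minimal sketch of its sparse-coding step: given a (learned) transform W, the sparse code of each image patch has a closed form obtained by hard-thresholding W applied to the patch, which is what makes transform learning computationally attractive. The transform-update step and the full reconstruction loop are omitted, and all sizes and names are illustrative.

```python
# Minimal sketch of the closed-form sparse-coding step in sparsifying
# transform learning: hard-threshold W @ patch for each patch.
import numpy as np

def transform_sparse_code(patches, W, thresh):
    """patches: (n_pixels, n_patches) column-wise vectorized patches;
    W: (n_atoms, n_pixels) sparsifying transform; thresh: hard threshold."""
    z = W @ patches
    z[np.abs(z) < thresh] = 0.0                   # closed-form hard thresholding
    return z

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64)) / 8.0           # stand-in for a learned transform
patches = rng.standard_normal((64, 1000))         # 8x8 patches, vectorized column-wise
codes = transform_sparse_code(patches, W, thresh=0.5)
print(np.mean(codes != 0))                        # achieved sparsity level
```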