IEEE ISBI Virtual Platform

A total of seven tutorials were selected for ISBI 2022. The exact schedule of the tutorials during the conference and various details regarding the mode of presentation will be communicated later. The selected tutorials are:

  1. Photoacoustic Imaging: Principles, Systems, and Applications
    • Presenter: Chulhong Kim, Pohang University of Science and Technology, South Korea

    • Abstract

      Optical visualization deep within biological tissues is challenged by the significant amount of light scattering. Existing optical imaging methods, e.g., confocal or two-photon microscopy, optical coherence tomography, and diffuse optical tomography, suffer from either a shallow imaging depth (i.e., ~1 mm) or poor spatial resolution. Alternatively, conventional medical imaging modalities, such as magnetic resonance imaging, X-ray computed tomography, ultrasound imaging, and nuclear imaging, have been intensively investigated and widely used in clinics. However, none of these can envisage what our eyes can see because these modalities do not use the optical spectrum as a contrast mechanism. Photoacoustic imaging (PAI) is capable of overcoming these limitations by delivering high-resolution optical contrast from depths of many millimeters to centimeters in highly scattering living tissues.

      PAI has been extensively explored for biological and medical applications during the last decade. The physical effect is based on energy transduction from light to sound, equivalent to the conversion of lightning into thunder in our daily life. Upon viewing a flash of lightning, one can hear the thunder arriving a few seconds later. If multiple observers (i.e., at least three) listen to the thunder at different locations, the exact origin of the lightning can be calculated by considering the temporal delays with a simple triangulation method. PAI adopts a similar reconstruction method to form multidimensional (i.e., 1-, 2-, or 3-D) images of biological tissues. More importantly, because both scattered and unscattered light can generate photoacoustic (PA) waves, the imaging depth of PAI in biological tissues can be extended to more than 5 cm. The spatial resolution of PAI is mainly determined by the acoustic detection parameters, and thus it is not directly affected by light scattering and can remain high in deep tissues. Extensive advances in laser, computer, and ultrasound technologies facilitated the development of PA imaging systems throughout the 1990s, with the first non-invasive structural and functional images acquired in 2003 from the brains of living mice. Since then, PAI has gained tremendous popularity as a new and powerful addition to the arsenal of biological and medical imaging modalities.
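      The delay-based reconstruction analogy above can be sketched in a few lines. This is a minimal illustration, not the presenter's implementation: it assumes a 2-D medium with a known, uniform speed of sound (~1500 m/s in soft tissue), a single point absorber, and three ideal point sensors. Since in PAI the emission time is known (the laser pulse), a simple grid search over candidate source positions suffices.

```python
import numpy as np

def localize_source(sensors, arrival_times, c=1500.0, grid=200, extent=0.05):
    """Brute-force grid search for the 2-D source position whose predicted
    time-of-flight to each sensor best matches the measured arrival times
    (sensors in metres, times in seconds, c = speed of sound)."""
    xs = np.linspace(-extent, extent, grid)
    best, best_err = None, np.inf
    for x in xs:
        for y in xs:
            d = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
            err = np.sum((d / c - arrival_times) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return np.array(best)

# Simulate one absorber ("lightning") and three sensors ("observers").
sensors = np.array([[0.0, 0.05], [0.05, -0.03], [-0.05, -0.03]])
source = np.array([0.01, 0.02])
times = np.hypot(*(sensors - source).T) / 1500.0
est = localize_source(sensors, times)
```

      Real PA reconstruction (e.g., delay-and-sum beamforming) applies the same time-of-flight principle to every pixel using many transducer elements.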

      Preclinical applications of PAI have rapidly developed with imaging scanners, both experimental and commercial, found in many laboratories around the globe. PAI has been applied to image (1) single cells in vivo, (2) vascular and lymphatic networks, (3) angiogenesis, (4) oxygen saturation of hemoglobin in micro blood vessels, (5) blood flows, (6) metabolic rates, (7) functional brain activity, (8) drug delivery and treatment responses, (9) molecular targeting with biomarkers and contrast agents, and (10) gene expressions. Current clinical explorations mainly focus on imaging breast and melanoma cancers and guiding sentinel node biopsy for breast cancer staging. However, significant expansion of potential clinical applications is expected in the near future, including (1) prostate, thyroid, head and neck cancer imaging; (2) diagnosis of peripheral- and cardio-vascular disease; (3) monitoring early responses of neoadjuvant therapy; (4) functional human neuroimaging; (5) gastrointestinal tract imaging using endoscopic probes; (6) intravascular imaging using catheters; (7) monitoring of arthritis and inflammation; (8) label-free histology; and (9) in vivo flow cytometry.

  2. Quantitative functional and molecular contrast imaging
    • Presenters: Simona Turco and Massimo Mischi, Eindhoven University of Technology, the Netherlands

    • Abstract
      Since its introduction for invasive measurement of blood flow and volumes in the central circulation, the use of indicators has experienced tremendous advances. In particular, the possibility of combining indicators with fast-developing imaging solutions has opened up an entirely new spectrum of possibilities for minimally invasive, contrast-enhanced imaging. Dedicated indicators, referred to as contrast agents, have been developed for the different imaging modalities, starting from iodine for X-ray (and computed tomography) imaging, to radionuclides for nuclear imaging, up to paramagnetic agents for magnetic resonance imaging and microbubbles for ultrasound imaging. Besides their qualitative use, often limited by subjective and complex interpretation of the images, advanced methods for quantitative interpretation of contrast-enhanced images and videos have shown exceptional growth in the past decades. Since the introduction of the first indicators, the accuracy and complexity of the adopted models have advanced remarkably, supported by increasing computing capabilities. Several models have been developed to interpret the transport kinetics of the different contrast agents in the vascular bed, also including complex effects related to vascular permeability and extravascular contrast leakage. The establishment of these quantitative methods in clinical practice is now progressing, based on extensive clinical validation, and many quantitative applications have already evidenced clinical value. Assessment of myocardial perfusion and characterization of the microvascular architecture are clinical applications where analysis of the contrast kinetics by advanced modeling has opened important diagnostic perspectives, especially in cardiology and oncology.
      This tutorial provides a comprehensive overview of all the pharmacokinetic models adopted for quantitative interpretation of contrast-enhanced imaging, discussing the related technical/methodological aspects in relation to their practical use. All the imaging technologies are treated, including ultrasound (US), magnetic resonance imaging (MRI), X-ray and computed tomography (CT), and nuclear imaging. Problems related to calibration of the imaging system and accuracy of the estimated physiological parameters are also discussed. The broad spectrum of diagnostic possibilities provided by quantitative contrast-enhanced imaging is presented with a focus on cardiology and oncology. Novel developments in the area of quantitative molecular imaging are also presented along with their potential clinical applications.
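      As a concrete example of the pharmacokinetic modeling discussed above, the standard Tofts model describes the tissue concentration of an extravasating agent as the arterial input function (AIF) convolved with an exponential impulse response. The sketch below is illustrative only: the AIF shape, parameter values, and grid-search fit are assumptions for demonstration, not the tutorial's methods.

```python
import numpy as np

def tofts_tissue_curve(t, aif, ktrans, kep):
    """Standard Tofts model: tissue concentration is the AIF convolved
    with the impulse response Ktrans * exp(-kep * t)."""
    dt = t[1] - t[0]
    irf = ktrans * np.exp(-kep * t)          # impulse response function
    return np.convolve(aif, irf)[: len(t)] * dt

t = np.linspace(0, 5, 500)                   # time in minutes
aif = 5.0 * t * np.exp(-2.0 * t)             # hypothetical bolus-shaped AIF
ct = tofts_tissue_curve(t, aif, ktrans=0.25, kep=0.5)

# Quantification: pick the (Ktrans, kep) pair that minimizes the fit
# residual, mimicking parameter estimation from a measured curve.
grid = [(k, e) for k in np.arange(0.05, 0.5, 0.05)
        for e in np.arange(0.1, 1.0, 0.1)]
best = min(grid, key=lambda p: np.sum((tofts_tissue_curve(t, aif, *p) - ct) ** 2))
```

      In practice, nonlinear least-squares fitting replaces the grid search, and the AIF is measured or assumed from population models.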

  3. Self-supervised Learning: Overview and Application to Medical Imaging
    • Presenters: Pavan Annangi, GE Global Research, India; Deepa Anand, GE Healthcare, India; Hemant Kumar Aggarwal, Wipro GE Healthcare, India; Hariharan Ravishankar, GE Healthcare, India; Rahul Venkataramani, GE Global Research, India

    • Abstract
      Supervised learning has achieved tremendous progress, making it the ubiquitous tool of choice in nearly all learning applications. However, the success of supervised learning largely depends on the quantity and quality of labelled datasets, which are prohibitively expensive in healthcare settings. A recent technique, termed ‘self-supervised learning’ (SSL), aims to exploit the vast amounts of relatively inexpensive unlabeled data to learn meaningful representations that reduce the annotation burden. Self-supervised learning is a form of unsupervised learning that extracts latent information encoded inside the input dataset to train a neural network for the end task. Self-supervised learning relies on the input dataset itself to obtain the targets for the training loss (self-supervision). Self-supervision is particularly relevant for researchers from the medical community for several reasons, including: 1) the cost and feasibility of annotating large datasets; and 2) the limitations of transfer learning, e.g., data type (2D+t, 3D), data distribution shift (grayscale images limited to specific anatomies), and problem type (segmentation, reconstruction). Through this tutorial, we will introduce self-supervised learning, popular architectures, and successful use cases, particularly in the medical imaging domain. The initial successes in self-supervised learning followed a template of designing pretext tasks (tasks with labels derived from the data itself, e.g., colorization, jigsaw puzzles) followed by utilizing the learnt representations on the downstream task of interest. However, in recent years, these methods have largely been replaced by contrastive learning and regularization-based methods (virtual target embeddings, high-entropy embedding vectors). In this talk, we will review the most popular methods for self-supervised learning and their applications.
      Despite the obvious need for SSL, applying self-supervised learning poses a challenge due to differences in problem type. We will discuss methods developed in-house to extend SSL techniques to classification and segmentation use cases. The subsequent section of the talk will focus on self-supervised techniques for compressed sensing (CS) problems. Classical CS-based methods rely only on noisy and undersampled measurements to reconstruct the fully sampled image. These methods exploit the imaging physics to reconstruct a data-consistent image using an iterative algorithm but are comparatively slow. Model-based deep learning methods combine the power of classical CS-based methods and deep learning. These methods are extended to SSL using the Ensembled Stein's Unbiased Risk Estimator (ENSURE), which uses a projected mean-square error (MSE) as an approximation of the true MSE. We will also discuss some of the empirical rules that have aided our experiments on training SSL methods.
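      The contrastive objective mentioned above can be illustrated with a minimal NumPy sketch of an NT-Xent (SimCLR-style) loss. This is an assumption-laden toy, not the presenters' code: real training uses a deep learning framework, image augmentations, and large batches. The idea is that embeddings of two views of the same sample are pulled together while all other pairs are pushed apart.

```python
import numpy as np

def ntxent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss: z1[i] and z2[i] are embeddings of two
    augmented views of the same sample (the positive pair); every other
    embedding in the batch acts as a negative."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logprob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
anchor = rng.normal(size=(4, 16))
# Loss is low when the second view is a slight perturbation of the first,
# and high when the "views" are unrelated random embeddings.
aligned = ntxent_loss(anchor, anchor + 0.01 * rng.normal(size=(4, 16)))
random_ = ntxent_loss(anchor, rng.normal(size=(4, 16)))
```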

  4. Graph Signal Processing Opens New Perspectives for Human Brain Imaging
    • Presenters: Maria Giulia Preti, EPFL, Switzerland and Thomas Bolton, Centre Hospitalier Universitaire Vaudois (CHUV), Switzerland

    • Abstract
      State-of-the-art magnetic resonance imaging (MRI) provides unprecedented opportunities to study brain structure (anatomy) and function (physiology). Based on such data, graph representations can be built where nodes are associated with brain regions and edge weights with the strengths of structural or functional connections. In particular, structural graphs capture major physical white matter pathways, while functional graphs map out statistical interdependencies between pairs of regional activity traces. Network analysis of these graphs has revealed emergent system-level properties of brain structure and function, such as efficiency of communication and modular organization. In this tutorial, graph signal processing (GSP) will be presented as a novel framework to integrate brain structure, contained in the structural graph, with brain function, characterized by activity traces that can be considered as time-dependent graph signals. Such a perspective makes it possible to define meaningful graph-filtering operations on brain activity that take into account the anatomical backbone. In particular, we will show how activity can be analyzed in terms of being coupled versus decoupled with respect to brain structure. This method has recently shown for the first time how regions, organized in terms of their structure-function coupling, form a macrostructural gradient with behavioural relevance, spanning from lower-level functions (primary sensory, motor) to higher-level cognitive domains (memory, emotion). In addition, we will also describe how the derived structure-function relationships can be examined in more depth, in terms of their temporal dynamic properties and at the finer-grained scale of individual sub-networks. From the methodological perspective, the well-known Fourier phase randomization method to generate surrogate data can also be adapted to this new setting.
      We will show how to generate surrogate data of graph signals in this way, which allows a non-parametric evaluation of the statistical significance of the observed measures.
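      The graph-surrogate idea can be illustrated on a toy graph. In the sketch below (an illustration under assumptions, not the presenters' code), the structural graph's Laplacian eigenvectors play the role of the Fourier basis, and randomizing the sign of each graph Fourier coefficient is the graph analogue of phase randomization: the surrogate preserves the signal's graph spectrum exactly while destroying its alignment with the structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy symmetric "connectome": 10 regions with random edge weights.
A = rng.random((10, 10))
A = (A + A.T) / 2
np.fill_diagonal(A, 0)

L = np.diag(A.sum(1)) - A          # graph Laplacian
eigvals, U = np.linalg.eigh(L)     # eigenvectors = graph Fourier basis

signal = rng.normal(size=10)       # one frame of regional activity
coeffs = U.T @ signal              # graph Fourier transform

# Surrogate: randomize the sign of each spectral coefficient, then
# transform back. Spectral energy per eigenmode is unchanged.
signs = rng.choice([-1.0, 1.0], size=10)
surrogate = U @ (signs * coeffs)
```

      Repeating this many times yields a null distribution against which observed structure-function coupling measures can be tested non-parametrically.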

  5. From U-Net to Transformers: Navigating through key advances in Medical Image Segmentation
    • Presenters: Vishal Patel and Jeya Maria Jose Valanarasu, Johns Hopkins University, USA

    • Abstract
      Medical image segmentation plays a pivotal role in computer-aided diagnosis systems, which are helpful in making clinical decisions. Segmenting a region of interest, such as an organ or lesion, from a medical image or scan is critical as it provides details like the volume, shape, and location of the region of interest. Recently, the state-of-the-art methods for medical image segmentation for most modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US), are based on deep learning. These deep learning-based methods help radiologists make fast and low-effort annotations. In this tutorial, we will go through the key advances from convolutional networks to transformers and understand why and how these advances have impacted medical image segmentation. CNN-Based Methods: The introduction of U-Net in 2015 caused a revolution in medical image segmentation, as it surpassed the previous segmentation methods by a large margin and was easy to train for specific tasks. U-Net used an encoder-decoder architecture based on convolutional neural networks that takes a 2D image as input and outputs the segmentation map. Later, 3D U-Net was proposed for volumetric segmentation. Following that, many methods were proposed to improve the key architecture of U-Net/3D U-Net. U-Net++ was proposed with nested and dense skip connections to further reduce the semantic gap between the feature maps of the encoder and decoder. UNet3+ proposed using full-scale skip connections, where skip connections are made between different scales. V-Net avoids processing the input volumes slice-wise and instead uses volumetric convolutions. KiU-Net combines feature maps of both under-complete and over-complete deep networks such that the network learns to segment both small and large segmentation masks effectively. nnU-Net shows how properly tuning a U-Net can by itself achieve strong performance.
      Transformer-Based Methods: TransUNet proposed a methodology for multi-organ segmentation using a transformer as an additional layer in the bottleneck of a U-Net architecture. It encodes tokenized image patches from a convolutional neural network (CNN) feature map as the input sequence for extracting global context. Medical Transformer introduces a transformer-based gated axial attention mechanism for 2D medical image segmentation to train transformers in the low-data regime. UNETR introduces a transformer-based method for 3D volumetric segmentation. Multi-Compound Transformer (MCTrans) incorporates rich feature learning and semantic structure mining into a unified framework, embedding the multi-scale convolutional features as a sequence of tokens and performing intra- and inter-scale self-attention rather than the single-scale attention of previous works. Swin-Unet uses shifted-window attention to extract hierarchical features from the input image.
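      The encoder-decoder wiring with skip connections that underlies U-Net can be sketched with plain array operations. The toy below is only shape arithmetic, under stated assumptions: max pooling and nearest-neighbour upsampling stand in for the learned convolutions, so the skip-connection concatenations at matching scales are easy to follow.

```python
import numpy as np

def pool2(x):
    """2x2 max pooling: the encoder's downsampling step."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def up2(x):
    """Nearest-neighbour upsampling: the decoder's upsampling step."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.random.rand(64, 64, 1)                 # input "image" (H, W, C)
e1 = x                                        # encoder features, 64x64
e2 = pool2(e1)                                # encoder features, 32x32
b = pool2(e2)                                 # bottleneck, 16x16

# Decoder: upsample and concatenate the encoder features at the same
# scale (the skip connections that give U-Net its name and shape).
d2 = np.concatenate([up2(b), e2], axis=-1)    # 32x32, channels fused
d1 = np.concatenate([up2(d2), e1], axis=-1)   # 64x64, channels fused
```

      A real U-Net inserts learned convolution blocks between each of these steps and ends with a 1x1 convolution producing the segmentation map.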

  6. Federated Learning in Medical Imaging
    • Presenters: Jayashree Kalpathy-Cramer, Massachusetts General Hospital, USA; Holger Roth and Michael Zephyr, NVIDIA, USA

    • Abstract

      Artificial Intelligence (AI) and machine learning (ML) are transformative technologies for healthcare. They are being used across the healthcare spectrum, from improving image acquisition and workflows to diagnosis, detection, and assessment of treatment response. Recent technical advances in deep learning have come about due to a confluence of advances in hardware, computational algorithms and access to large amounts of (annotated) data. These algorithms have demonstrated extraordinary performance for the analysis of biomedical imaging data including in radiology, pathology, ophthalmology and oncology. Despite such success, deep learning algorithms in medical imaging have also been shown to be brittle and not to work as well on data that differs from what they were trained on. Data heterogeneity can arise due to differences in image acquisition, patient populations, geography and disease prevalence and presentations. Such heterogeneity poses challenges for building robust algorithms. One way to address this challenge is to ensure that the training dataset is diverse and representative, ideally from multi-institutional data sources. However, in healthcare, access to such large amounts of multi-institutional data can be challenging due to concerns around patient privacy and data sharing, regulatory affairs and technical considerations around data movement, replication and storage. Recently, distributed learning approaches such as federated learning have been proposed to address some of these challenges. Federated learning allows for learning from multi-institutional datasets without the need for data sharing. In classical federated learning, data reside in a consortium of sites, each with compute capabilities. Model architectures and common data elements are agreed to ahead of time. Training occurs in rounds where each site (client) trains a model locally and sends its model weight updates to a central server.
      The central server aggregates the model weights and sends an updated model back to all clients. This process continues until convergence is achieved. Federated learning has been shown to improve both global and local model performance. Other configurations of federated learning include split learning, swarm learning, and cyclical weight transfer.
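      The aggregation step described above can be sketched as a weighted average of client weights, as in federated averaging (FedAvg). The client models and dataset sizes below are illustrative assumptions; a real deployment (e.g., with MONAI) adds local training, secure communication, and many rounds.

```python
import numpy as np

def server_aggregate(client_weights, client_sizes):
    """FedAvg-style aggregation: average client model weights,
    weighted by the number of local training samples at each site."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical hospitals with different dataset sizes and
# locally trained model weights (here, tiny 2-parameter "models").
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 300, 600]
global_model = server_aggregate(weights, sizes)   # sent back to all clients
```

      Each round repeats this cycle: clients train locally starting from `global_model`, upload their updates, and the server aggregates again until convergence.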

      In this tutorial, we will begin with a very brief review of the literature around some of the successes of machine learning for biomedical imaging, describe some of the challenges including brittleness and generalizability, and highlight the need for federated learning. We will then review in detail the various aspects of a modular federated learning pipeline, including trainers, secure communication, and aggregation. This will be followed by hands-on activities to set up and evaluate federated learning on public datasets. The talks and the tutorial will be delivered by Dr. Jayashree Kalpathy-Cramer (MGH/Harvard Medical School), Dr. Holger Roth, and Michael Zephyr (NVIDIA). We will be using the open-source MONAI infrastructure for the hands-on portion. Project MONAI is a freely available, community-supported, PyTorch-based framework for deep learning in healthcare imaging. It provides domain-optimized foundational capabilities for developing healthcare imaging training workflows in a native PyTorch paradigm and has been downloaded over 90,000 times. Dr. Kalpathy-Cramer is on the steering committee for Project MONAI, and Dr. Kalpathy-Cramer and Dr. Roth co-lead the federated learning working group within Project MONAI.