Five tutorials are proposed at IEEE ISBI’21. They will be held as parallel live sessions on Tuesday 13 April, from 1pm to 4pm (CET). They will be recorded and available for replay on the platform from 15 April, for six months, for registered attendees. A chat room will be available to interact with the speakers throughout the conference.

The tutorials are:

#1. Neurological Disease Progression Modelling

by Sara Garbarino (UNIGE Genoa, Italy), Marco Lorenzi (INRIA Sophia Antipolis, France), Vikram Venkatraghavan (EMC, Rotterdam, Netherlands)

Summary / Presentation

Discrete models represent disease progression as a cumulative sequence in which biomarker abnormality occurs (disease “events”), together with uncertainty (positional variance) in that sequence. The most mature discrete DPM is the event-based model (Fonteijn, NeuroImage, 2012; Young, Brain, 2014; Venkatraghavan, NeuroImage, 2019), which can infer a sequence from cross-sectional cohort data. Conceptually, this longitudinal picture of neurological disease progression is estimable because earlier events will have commensurately higher prevalence in a cohort containing a spectrum of clinical cases. Mathematically, this is evaluated as more individuals having a higher data-driven likelihood of abnormality in the earlier events. With sufficient representation across combinations of abnormal and normal observations, the likelihood of any full ordered sequence can be estimated, and thus the most likely sequences can be revealed. The probabilistic sequence estimated by an event-based model enables state-of-the-art, fine-grained staging of individuals, i.e. assessment of an individual’s disease progression stage, by calculating the likelihood of their biomarker data given the sequence. Software for the event-based model is part of a suite of models available from https://github.com/EuroPOND/europond-software. In this talk we will introduce the model, then demonstrate its utility in application to Alzheimer’s disease progression. A Jupyter notebook will be made available to workshop participants.
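To make the staging idea concrete, the following toy sketch (not the EuroPOND implementation) scores every possible stage of a hypothetical three-event sequence; the Gaussian “normal” and “abnormal” biomarker distributions and all numbers are invented for illustration:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical Gaussian models of "normal" and "abnormal" biomarker values.
# In a real event-based model these densities are fitted to cohort data.
NORMAL = dict(loc=0.0, scale=1.0)
ABNORMAL = dict(loc=3.0, scale=1.0)

def stage_likelihoods(x, sequence):
    """Likelihood of biomarker vector x at each stage k of an event sequence.

    Stage k means the first k events in `sequence` have occurred (abnormal)
    and the remaining ones have not (normal).
    """
    p_abn = norm.pdf(x, **ABNORMAL)
    p_nor = norm.pdf(x, **NORMAL)
    return np.array([
        np.prod(p_abn[sequence[:k]]) * np.prod(p_nor[sequence[k:]])
        for k in range(len(sequence) + 1)
    ])

# Toy subject: biomarkers 0 and 1 look abnormal, biomarker 2 looks normal,
# so under the sequence 0 -> 1 -> 2 the most likely stage is 2.
sequence = [0, 1, 2]
x = np.array([3.1, 2.8, 0.2])
liks = stage_likelihoods(x, sequence)
stage = int(np.argmax(liks))
```

In the actual model the sequence itself is also unknown and must be inferred from the cohort; this sketch assumes a sequence is already given.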

#2. Artificial Intelligence in Ultrasound Imaging

by Yonina C. Eldar (Weizmann Institute of Science, Rehovot, Israel ) and Ruud J.G. van Sloun (Eindhoven University of Technology, Netherlands)

Summary / Presentation

This course will start with a brief introduction to deep learning and its impact across many domains, including medical imaging in general. We will then briefly outline the strong opportunities for ultrasound imaging, moving from workflow enhancement and image analysis to image formation and acquisition. To that end, we will also discuss the basic principles of ultrasound image acquisition and image formation, along with the specific challenges that may well be addressed using deep learning in the coming years. We will then recall the fundamentals of deep learning, ranging from understanding the relevance of sequential nonlinear transformations for representation learning to log-likelihood-based optimization of neural network parameters. Optimization aspects such as the impact of local minima and saddle points in the solution space will also be discussed. We will then elaborate on the design of effective neural network architectures. Here we place particular emphasis on model-based deep learning methods, i.e. deep networks that leverage known signal structure by integrating models into deep networks (deep unfolding methods), and deep networks that are integrated into known model-based algorithms (data-driven hybrid algorithms). The last part of this tutorial will focus on the wealth of opportunities that deep learning brings for ultrasound imaging. Beyond image-level classification and segmentation, we will discuss neural networks for front-end receive processing, including beamforming, image compounding, and clutter suppression, as well as advanced applications such as super-resolution imaging. We will also discuss the power of end-to-end optimization of entire signal processing chains in ultrasound imaging, from the upstream sensor to the final downstream analysis.
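As a minimal illustration of the deep unfolding idea (a generic sketch, not a method from the tutorial), the classic example is unrolling the ISTA sparse-coding iteration into network layers. In a learned network (LISTA) the matrices and thresholds below would be trained; here they are fixed to their model-based values:

```python
import numpy as np

def soft_threshold(v, theta):
    """Proximal operator of the l1 norm: the nonlinearity of each layer."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def unfolded_ista(y, A, n_layers, lam):
    """Each 'layer' is one ISTA step: x <- soft(x + a*A.T@(y - A@x), a*lam).
    With fixed (untrained) parameters many layers are needed to converge;
    learning the parameters is precisely what lets unfolded networks
    achieve similar accuracy with far fewer layers."""
    a = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(x + a * A.T @ (y - A @ x), a * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))        # underdetermined measurement model
x_true = np.zeros(50)
x_true[3], x_true[17] = 2.0, -1.5        # sparse ground truth
y = A @ x_true
x_hat = unfolded_ista(y, A, n_layers=500, lam=0.5)
```

The sparse signal is recovered even though the system is underdetermined, which is the structure a trained unfolded network would exploit with much shallower depth.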

#3. A review of image annotation, augmentation and synthesis approaches for accelerating supervised machine learning in bioimaging

by D. Rousseau (LARIS, Université d’Angers, France), A. Ahmad (CREATIS, INSA Lyon, France) and N. Debs (CREATIS, Université de Lyon, France)

Summary / Presentation

In the era of machine-learning-driven computer vision, unequaled performance is achievable with advanced algorithms such as deep learning. The bottleneck is no longer the design of the algorithms but the creation of the ground truth associated with the images to be processed. In this tutorial, we review strategies to accelerate the creation of such ground truth. This includes an overview of annotation tools, but also the automatic generation of pairs of fake images and ground truth mimicking reality (augmentation, simulation, synthesis). Because they are developed by different sub-communities of computer science (computer graphics for annotation tools, machine learning for augmentation and synthesis, image processing and physics for simulation), these methods are usually presented separately, while in the end they all contribute to speeding up the deployment of supervised machine learning algorithms. We provide a unified presentation, illustrated with biomedical imaging examples of specific interest to the ISBI audience, along with practical advice and guidelines for choosing among these strategies.
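As a minimal example of the augmentation idea (a generic sketch, not a tool from the tutorial), the essential constraint is that the image and its ground truth must undergo the same random transform, so that each generated pair remains geometrically consistent:

```python
import numpy as np

def augment_pair(image, mask, rng):
    """Apply the same random geometric transform to an image and its
    ground-truth mask, producing a new, consistent labelled pair."""
    k = int(rng.integers(0, 4))                  # random multiple of 90 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                       # random horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    return image, mask

rng = np.random.default_rng(42)
image = np.arange(16.0).reshape(4, 4)            # toy image
mask = (image > 7).astype(np.uint8)              # toy segmentation ground truth
aug_img, aug_mask = augment_pair(image, mask, rng)

# Because both arrays saw the same transform, the label geometry
# stays consistent with the image content.
assert np.array_equal((aug_img > 7).astype(np.uint8), aug_mask)
```

Intensity augmentations (noise, contrast) follow the same pattern but are applied to the image only, leaving the mask untouched.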

#4. Implementing a head-motion correction algorithm for diffusion MRI in Python, using Dipy and NiTransforms

by Erin W. Dickie (University of Toronto, Canada), Oscar Esteban (University of Lausanne, Switzerland)

Summary / Presentation

The recent work of Botvinik-Nezer et al. has brought to the foreground a key issue in the reproducibility of neuroimaging studies: the methodological variability in our research workflows. With fMRIPrep, we proposed to minimize the researcher’s degrees of freedom by standardizing the preprocessing of functional MRI data. This standardization not only minimizes the methodological variability of preprocessing (i.e., the processing steps that make the original data generated by the scanning device ready for statistical modeling and analysis), it also allows researchers to focus on the analysis step. We then generalized the fMRIPrep principles to other modalities with the NiPreps (NeuroImaging PREProcessing toolS) framework. While developing dMRIPrep, “a NiPrep” for diffusion MRI data replicating the fMRIPrep pattern, we found that an open, community-driven implementation of a head-motion correction algorithm was missing. This tutorial demonstrates how to leverage the general Python-for-science framework and two specific packages (Dipy and NiTransforms) to implement a 4D image registration algorithm from the ground up.
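In the tutorial, Dipy supplies the registration machinery and NiTransforms the representation and application of the estimated transforms. As a self-contained sketch of just the final resampling step, the snippet below corrects a known, simulated rigid shift with SciPy; in the real algorithm the per-volume transform is estimated, not known:

```python
import numpy as np
from scipy.ndimage import affine_transform

# A toy reference volume: a bright cube in a 16x16x16 grid.
ref = np.zeros((16, 16, 16))
ref[6:10, 6:10, 6:10] = 1.0

# Simulate head motion of one dMRI volume as a pure translation (in voxels).
# affine_transform maps output coordinates o to input coordinates M@o + offset,
# so offset = -shift moves the image content by +shift.
shift = np.array([2.0, -1.0, 3.0])
moving = affine_transform(ref, np.eye(3), offset=-shift, order=1)

# Head-motion correction: resample the moving volume back into the
# reference grid by applying the inverse transform.
corrected = affine_transform(moving, np.eye(3), offset=shift, order=1)
```

A full head-motion correction loop repeats this for every volume of the 4D series, estimating each affine by registration to a motion-free reference before resampling.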

#5. Vector motion estimation by high-frame-rate plane-wave ultrasound imaging

by D. Garcia (CREATIS, INSERM, France)

Summary / Presentation

Motion estimation by ultrasound imaging plays an important role in clinical diagnosis. It mainly involves the estimation of blood velocity and tissue displacement. The recent advent of high-frame-rate ultrasound imaging has made it possible to easily estimate 2-D vector motion maps. This tutorial will give an overview of standard techniques for motion estimation, including speckle tracking, color Doppler, and vector Doppler. It will consist of a short theoretical lecture, followed by practical MATLAB-based hands-on sessions using simulated and experimental data. Basic knowledge of ultrasound imaging and MATLAB programming is required. The following topics will be covered:

• Fast ‘n easy processing using MUST (www.biomecardio.com/MUST)

• I/Q demodulation & beamforming

• Doppler & vector Doppler

• Speckle tracking by block-matching
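The hands-on sessions themselves are MATLAB-based; as a language-agnostic illustration of the last topic, speckle tracking by block-matching reduces to an exhaustive search maximizing normalized cross-correlation, sketched here in Python on a synthetic speckle pattern with a known shift:

```python
import numpy as np

def block_match(frame0, frame1, y0, x0, block=8, search=4):
    """Estimate the displacement of one block between two frames by
    exhaustive search, maximizing normalized cross-correlation (NCC)
    between the speckle patterns."""
    ref = frame0[y0:y0 + block, x0:x0 + block]
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame1[y0 + dy:y0 + dy + block, x0 + dx:x0 + dx + block]
            cand = (cand - cand.mean()) / (cand.std() + 1e-12)
            ncc = np.mean(ref * cand)
            if ncc > best:
                best, best_dy, best_dx = ncc, dy, dx
    return best_dy, best_dx

rng = np.random.default_rng(0)
frame0 = rng.standard_normal((64, 64))             # synthetic speckle pattern
frame1 = np.roll(frame0, (3, -2), axis=(0, 1))     # known motion: +3 rows, -2 cols
dy, dx = block_match(frame0, frame1, 20, 20)       # recovers the imposed shift
```

Repeating this over a grid of blocks yields a dense 2-D vector motion map; real implementations add subpixel refinement and regularization across neighboring blocks.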