Six tutorials will be held during ISBI. Each tutorial lasts four hours and takes place either on the morning of Thursday, April 16th or on the afternoon of Sunday, April 19th. Registration for the tutorials is done via the regular registration website.
1. Machine Learning for Neuroimaging – B. Thirion – April 16th
Head of the PARIETAL INRIA Saclay Research Team
NeuroSpin, Gif-sur-Yvette, France.
Co-presenter
John Ashburner,
Professor of Imaging Science
Wellcome Trust Centre for Neuroimaging,
University College London Institute of Neurology
Tutorial Topic
In recent years, the application of machine learning techniques to neuroimaging data has increased substantially and has led to many new analytic procedures, and sometimes to new neuroscientific concepts. Pattern recognition approaches form a family of tools from the machine learning community, borrowing from statistics and engineering, that have been adapted to investigate neuroscience questions and, in medical settings, to address diagnosis problems. Depending on the research question, experimental design and imaging modality, it is important to know how to draw reliable conclusions. The set of relevant machine learning techniques for neuroimaging is conditioned by the constraints of neuroimaging data, such as the small sample size or, in the case of functional neuroimaging, the relatively low signal-to-noise ratio. Another noticeable characteristic of applying machine learning tools to neuroimaging problems is that black-box approaches are not well suited, since the practitioner ultimately wants to confirm hypotheses about the brain structures involved in a given cognitive process or disease.
The course will focus both on subject and/or patient classification (for cognitive and clinical applications) and on regression problems. The usual functional and structural MRI modalities will be covered, but the presentations will also consider other types of data, such as PET, EEG/MEG and network metrics. After introducing the theoretical foundations of pattern recognition in neuroimaging, the subsequent lectures will present methodological aspects specific to applying these approaches to anatomical and functional imaging modalities. All the concepts introduced will be illustrated with actual examples from neuroimaging.
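To make the small-sample, high-dimensional setting concrete, here is a minimal, purely illustrative sketch of cross-validated decoding with a sparsity-inducing prior using scikit-learn. The data are simulated and all names and numbers are assumptions chosen for illustration; this is not the tutorial material itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Simulated "neuroimaging-like" data: few subjects, many voxels, weak localized signal.
rng = np.random.default_rng(0)
n_subjects, n_voxels = 40, 10000
y = np.repeat([0, 1], n_subjects // 2)           # two groups (e.g. patients vs. controls)
X = rng.normal(size=(n_subjects, n_voxels))
X[y == 1, :50] += 0.4                            # a small set of informative voxels

# A sparsity-inducing (L1) penalty acts as a prior that compensates for n << p and yields
# a discriminative map whose non-zero weights can be inspected, rather than a black box.
decoder = make_pipeline(StandardScaler(),
                        LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(decoder, X, y, cv=cv)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```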
The course is organized so that participants acquire knowledge about:
1. the main concepts of machine learning in the light of neuroimaging constraints.
2. the problem of discriminative feature identification and its link to estimation problems.
3. the crucial impact of quality checks on performing meaningful analyses of the data.
4. the implementation of relevant priors to compensate for the shortage of data.
5. the usefulness of, and issues with, recent computer vision approaches (e.g. deep neural networks).
6. some of the pattern recognition software packages available.
Tutorial Outline
The course is organized as follows:
- Part I Pattern Recognition for Neuroimaging: BT + JA
- Part II Machine Learning for Anatomical Neuroimaging: JA
- Part III Machine Learning for Functional Neuroimaging: BT
Target Audience
It is expected that participants already have passing knowledge of machine learning/pattern recognition. At the end of the course, participants should have a broad understanding of some core pattern recognition approaches, how to apply these tools to their data to address neuroscientific questions, and how to interpret the outcomes of these analyses and draw reliable conclusions.
Assistant Professor,
Department of Chemistry and Department of Biochemistry & Cell Biology,
Stony Brook University, USA
Co-presenter
Bi-Chang Chen
Assistant Research Fellow,
Research Center for Applied Science
Academia Sinica, Nankang, Taipei, Taiwan
Tutorial Topic
In this tutorial, a brief overview of selective plane illumination microscopy (SPIM) will be given, and the fundamental working principles of Bessel beam and optical lattice plane illumination microscopy, as well as the relationship between them, will be discussed. The design and construction of both microscopes will be described in detail. Sample preparation, microscope operation, imaging condition optimization and image data analysis will also be covered, with examples ranging from cell mitosis, cell membrane and actin cytoskeleton dynamics in single cells cultured on coverslips, to cell migration in 3D matrices and embryonic development of C. elegans and zebrafish.
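As a purely illustrative aside (not part of the tutorial material), the sketch below plots the textbook transverse intensity profile of an ideal zeroth-order Bessel beam, whose side lobes are the main trade-off that lattice and structured illumination schemes are designed to manage. The wavelength and numerical aperture are arbitrary assumed values.

```python
import numpy as np
from scipy.special import j0

# Ideal zeroth-order Bessel beam: transverse intensity I(r) proportional to J0(k_r * r)^2.
wavelength = 0.488                          # um; assumed excitation wavelength
na_ring = 0.55                              # assumed NA at the centre of the annulus
k_r = 2.0 * np.pi * na_ring / wavelength    # radial wavevector set by the illumination NA

r = np.linspace(-5.0, 5.0, 2001)            # um, radial coordinate across the beam
profile = j0(k_r * r) ** 2                  # central lobe plus slowly decaying side lobes
central = profile[np.abs(r) < 2.405 / k_r]  # samples inside the first zero of J0
print(f"fraction of profile energy in the central lobe: {central.sum() / profile.sum():.2f}")
```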
Tutorial Outline
The following topics will be covered in the tutorial:
• Introduction to selective plane illumination microscopy (SPIM)
• Bessel beam plane illumination microscopy
(a) Introduction to linear Bessel beam plane illumination microscopy, two-photon Bessel beam plane illumination microscopy and Bessel beam structured illumination microscopy;
(b) Design and construction of the Bessel beam plane illumination microscope;
(c) Selection of Bessel beams and optimization of the imaging condition;
• 2D optical lattice plane illumination microscopy
(a) Introduction to the optical lattice and the relationship between the Bessel beam and the 2D optical lattice;
(b) Design and construction of the 2D optical lattice light sheet microscope;
(c) Calculation and optimization of the desired optical lattice light sheet.
• Sample preparation and image analysis
Target Audience
The tutorial is designed for physicists who want an in-depth understanding of the fundamental working principles of SPIM, cell biologists with prior experience in fluorescence microscopy who wish to implement state-of-the-art 3D live fluorescence imaging techniques in their research, and computer scientists who are interested in quantitative analysis of large-scale bio-images.
3. Point-of-care Imaging Systems – R. Conroy, V. Pai – April 16th
Directors, Division of Applied Science and Technology
National Institute of Biomedical Imaging and Bioengineering
National Institutes of Health, Bethesda, USA
Co-presenters
Angela M. Mills, MD,
Department of Emergency Medicine
University of Pennsylvania, USA
Penny Carleton, RN, MS, MPA, MSc,
Program Leader, Clinical Systems Innovation
Center for Integration of Medicine and Innovative Technology (CIMIT)
CIMIT/Boston Simulation Consortium, MA USA
Stephen Boppart, MD, PhD,
Professor, Head of the Biophotonics Imaging Laboratory
Beckman Institute for Advanced Science and Technology
University of Illinois at Urbana-Champaign, IL USA
Aydogan Ozcan, PhD,
Chancellor’s Professor, Howard Hughes Medical Institute Professor, Head of the Bio- and Nano-Photonics Laboratory, Associate Director of the California NanoSystems Institute (CNSI)
Electrical Engineering and Bioengineering Departments
UCLA, CA USA
Tutorial Topic
With an increasing interest in healthcare outcomes and costs, there is a growing shift in focus from utilization of specialized care for the treatment of late-stage disease to an emphasis on patient-centered approaches and coordinated care teams that promote wellness and effective disease management. New delivery models have emerged where primary care physicians and nurses are assuming more significant roles, with the patient more involved in decision-making and self-care. These changes require the development of inexpensive and easy-to-use medical devices, imaging systems and information sharing tools that provide timely health status information at the point of care.
Tutorial Outline
This tutorial session will bring together leaders in the development of point-of-care imaging to discuss how to design and develop these technologies appropriately. The following topics will be included:
• An overview of current point-of-care imaging platforms
• How to carry out a clinical needs assessment
• Engaging healthcare professionals in the development of appropriate technology
• How to tackle the challenges of low resources when developing technologies
• Approaches for setting up partnerships and pathways to commercialization
Target Audience
This tutorial session will be attractive to researchers interested in developing imaging technologies for use in a non-clinical setting, healthcare professionals interested in emerging approaches to monitoring, diagnosing and treating conditions at the point of care, and people interested in appropriate technologies for global health.
Nanobiophotonics Professor for Physical Chemistry, Institute of Physical Chemistry,
Friedrich-Schiller-Universität Jena, Germany
Head of the Microscopy Research Unit, Leibniz Institute of Photonic Technology, Germany
Head of the Biological Nanoimaging research group, Randall Division, King’s College London, UK
Tutorial Topic
This tutorial covers computational techniques used in super-resolution microscopy. First, an overview of the mathematical description of optical imaging is given. Then, ways to recover the sample information, such as Wiener filtering and maximum-likelihood deconvolution, are described.
Modern super-resolution techniques, such as structured illumination microscopy (SIM), photoactivated localization microscopy (PALM) and direct stochastic optical reconstruction microscopy (dSTORM), together with the associated data reconstruction routines, are also covered.
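To give a flavour of these reconstruction methods, here is a minimal, self-contained sketch (my own illustration, not the presenters' code) of a Wiener filter and a Richardson-Lucy maximum-likelihood deconvolution on a toy image; the PSF, noise level and iteration count are arbitrary assumptions.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Centred, normalised Gaussian PSF with the same shape as the image."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(image, psf, nsr):
    """Wiener (generalised inverse) filtering with a constant noise-to-signal ratio."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))          # transfer function of the PSF
    filt = np.conj(otf) / (np.abs(otf) ** 2 + nsr)    # Wiener filter in Fourier space
    return np.real(np.fft.ifft2(filt * np.fft.fft2(image)))

def richardson_lucy(image, psf, n_iter=50):
    """Maximum-likelihood (Poisson noise) deconvolution via Richardson-Lucy updates."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    conv = lambda a, h: np.real(np.fft.ifft2(np.fft.fft2(a) * h))
    estimate = np.full_like(image, image.mean())
    for _ in range(n_iter):
        blurred = conv(estimate, otf)
        ratio = image / np.maximum(blurred, 1e-12)
        estimate = estimate * conv(ratio, np.conj(otf))   # correlate correction with the PSF
    return estimate

# Toy usage: blur a grid of point emitters, add noise, then deconvolve.
rng = np.random.default_rng(0)
truth = np.zeros((128, 128))
truth[32::32, 32::32] = 100.0
psf = gaussian_psf(truth.shape, sigma=3.0)
otf = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * otf))
noisy = blurred + rng.normal(0.0, 0.5, truth.shape)
restored_wiener = wiener_deconvolve(noisy, psf, nsr=1e-3)
restored_ml = richardson_lucy(np.clip(noisy, 0.0, None), psf, n_iter=50)
```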
Tutorial Outline
- Theory of image formation, detectors, noise characteristics
- Inverse filtering: Wiener and generalized Wiener filtering
- Maximum likelihood deconvolution
- Blind deconvolution with unknown point spread functions
- A posteriori likelihood and the role of priors
- Structured illumination microscopy (SIM)
- Recovering unknown position and grating constants
- Recovering unknown patterns via blind deconvolution
- Weighted averaging in Fourier space
- Non-linear SIM
- Pointillistic imaging
- Separating and localizing
- Methods of separation: single sources, fitting with Poisson statistics (a small illustrative sketch follows this outline)
- The statistical way: Higher-order moments
- Methods of separation: overlapping sources
- Independent component analysis, non-negative matrix factorization, 3B
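As referenced in the outline above, here is a minimal, illustrative sketch of localizing a single emitter by maximizing a Poisson likelihood, in the spirit of PALM/dSTORM fitting. The emitter parameters, region size and optimizer choice are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def spot_model(params, yy, xx):
    """Pixelated 2D Gaussian emitter model: amplitude, centre, width, background."""
    amp, y0, x0, sigma, bg = params
    return bg + amp * np.exp(-((yy - y0) ** 2 + (xx - x0) ** 2) / (2.0 * sigma ** 2))

def poisson_nll(params, counts, yy, xx):
    """Negative Poisson log-likelihood (constant log(counts!) term omitted)."""
    mu = np.maximum(spot_model(params, yy, xx), 1e-9)
    return np.sum(mu - counts * np.log(mu))

# Simulate one emitter in an 11x11 pixel region of interest and refit its position.
rng = np.random.default_rng(1)
yy, xx = np.indices((11, 11))
true_params = (200.0, 5.3, 4.6, 1.2, 10.0)          # amp, y0, x0, sigma (px), background
counts = rng.poisson(spot_model(true_params, yy, xx)).astype(float)

start = (counts.max() - counts.min(), 5.0, 5.0, 1.5, counts.min())
fit = minimize(poisson_nll, start, args=(counts, yy, xx), method="Nelder-Mead")
print("estimated centre (y, x):", fit.x[1], fit.x[2])
```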
Target Audience
Both physicists working on microscopy methods who are interested in advanced data reconstruction techniques, and computer scientists interested in understanding the reconstruction techniques underlying super-resolution imaging; more generally, anybody interested in inverse problems.
5. Image-Based Measurements – C. Luengo – April 19th
Associate Professor
Centre for Image Analysis,
Uppsala University and Swedish University of Agricultural Sciences, Sweden
Tutorial Topic
Modern imaging technology provides the ability to examine a broad array of subject properties at scales from nanometers to light-years. We can look inside the human body, and inside sub-cellular compartments. We can create images depicting x-ray density, fluorophore concentration, or water diffusivity. However, simply looking at images is often no longer sufficient, and quantification is required. In this tutorial we will show how to obtain quantitative measures of physical properties from images. Measurement precision and avoidance of bias will receive special attention. We will cover manual methods to estimate surface area, volume, length, and density, as well as algorithms to obtain precise measurements of those same properties given a segmentation of the objects in the image. We will also describe algorithms to estimate size distributions without requiring a segmentation. It is important to properly design the experiment before starting the imaging. We will discuss sampling of the population, selecting number and location of slices and fields of view to image, choosing imaging resolution, etc.
The tutorial assumes some basic knowledge of digital images, including concepts such as smoothing and segmentation. No advanced mathematics or programming skills will be needed.
One of the motivations for this tutorial is that much software still computes the perimeter of a 2D binary object as the sum of distances between neighboring pixels along the boundary. This procedure is known to yield a biased estimate of the perimeter. It is even common to come across publications that use the simplistic boundary pixel count as an estimate of the perimeter, with even worse results. Several publications from the 1990s show how to compute unbiased estimates of the perimeter. Such an unbiased estimator is not much more complex to implement than the biased estimators commonly used, and its benefit is obvious. Nonetheless, unbiased estimators are not used often enough in practice; I want to correct this situation.
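To make the bias issue concrete, here is a small illustrative numpy sketch (mine, not the tutorial's code) comparing a naive city-block boundary measure with a simple Crofton-style intercept-count estimate on a discretised disk of known perimeter. Real toolboxes offer more refined unbiased estimators; the point here is only the size of the bias.

```python
import numpy as np

def disk_mask(radius, size):
    """Binary image of a disk, used as a test object with known perimeter."""
    yy, xx = np.indices((size, size)) - (size - 1) / 2.0
    return (yy ** 2 + xx ** 2) <= radius ** 2

def intercept_count(mask):
    """Number of object/background transitions along all rows and columns."""
    m = mask.astype(int)
    return np.abs(np.diff(m, axis=0)).sum() + np.abs(np.diff(m, axis=1)).sum()

mask = disk_mask(radius=40, size=128)
analytic = 2 * np.pi * 40                      # true perimeter of the disk

# Biased estimate: total length of exposed 4-neighbour pixel edges ("city-block" boundary),
# which overestimates the perimeter of a smooth disk by a factor of about 4/pi.
city_block = float(intercept_count(mask))

# Crofton-style estimate from intercepts with horizontal and vertical test lines (unit
# spacing): perimeter ~ (pi/4) * intercepts; approximately unbiased for isotropic boundaries.
crofton = np.pi / 4 * intercept_count(mask)

print(f"analytic {analytic:.1f}  city-block {city_block:.1f}  Crofton {crofton:.1f}")
```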
Tutorial Outline
- Basics
- Accuracy vs. precision
- Sources of bias
- Coefficient of variation
- Systematic uniform random sampling
- Stereology
- The Delesse principle (what 3D properties can be derived from 2D slices?)
- Buffon’s needle problem and probe-based measurement
- Point counting for volume estimation
- The Cavalieri principle (a small illustrative sketch follows this outline)
- Estimating boundary length, surface area, and line length
- Unbiased object counting (density estimation)
- Measurement algorithms in segmented images
- Intensity and density
- Estimating volume, boundary length, and surface area
- Feret diameters and central moments for size estimation
- Using image intensity to improve precision
- Measurement without segmentation
- The granulometry for size distribution
- Path openings for length distribution
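As referenced in the outline above, here is a minimal illustrative sketch of the Cavalieri point-counting volume estimator, V ≈ t · (a/p) · ΣP, on a toy sphere. The radius, slice spacing and grid density are assumptions chosen for illustration, and expected point counts stand in for manual counting.

```python
import numpy as np

def cavalieri_volume(point_counts, slice_spacing, area_per_point):
    """Cavalieri / point-counting volume estimate: V ~ t * (a/p) * sum(P_i)."""
    return slice_spacing * area_per_point * np.sum(point_counts)

# Toy check on a sphere of radius 10 mm, sliced every t = 2 mm with a 1 mm^2/point grid.
# The expected point count per slice (section area / area-per-point) stands in for the
# counts one would obtain by overlaying a grid and counting by hand.
radius, t, a_per_p = 10.0, 2.0, 1.0
rng = np.random.default_rng(0)
z = np.arange(-radius + rng.uniform(0.0, t), radius, t)   # systematic slices, random start
counts = np.pi * (radius ** 2 - z ** 2) / a_per_p         # expected points hitting the object
print(cavalieri_volume(counts, t, a_per_p), 4.0 / 3.0 * np.pi * radius ** 3)
```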
Target Audience
People using image analysis in their work who want to learn how to avoid bias in their results; people developing imaging systems who want to learn how to get more out of their equipment; and people running imaging platforms who want to expand their knowledge.
6. Slice Timing and Motion Correction for fMRI Data – R. Razlighi – April 19th
Assistant Professor in Neurology
Adjunct Assistant Professor in Biomedical Engineering
Columbia University, USA
Co-presenter
Christian Windischberger
Associate Professor
Deputy Head MR Physics
Medical University of Vienna, Vienna, Austria
Tutorial Topic
Rather embarrassing discoveries about the effects of motion on resting-state fMRI (rs-fMRI) data, which invalidated some major scientific findings, have revived the old and challenging problem of motion in fMRI pre-processing. Motion is also a problem in task-based fMRI data; however, its artifactual effects are not as significant as they are for rs-fMRI data. Even though this is an active area of research and there are many recent works on controlling for the effect of head motion in rs-fMRI, the field still lacks a consensus on the optimal correction/removal algorithm. Slice acquisition timing correction, on the other hand, might be considered a resolved issue, even though the existing methods are all sub-optimal. Nevertheless, its complex interactions with motion have not been studied. The interaction between motion correction and slice timing correction is so challenging that even today the two most dominant fMRI data analysis software packages (FSL, SPM) do not agree on the order in which the two corrections should be executed in the pre-processing pipeline.
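As a small illustration of how head motion is commonly summarized before correction or scrubbing, here is a hedged sketch of a framewise-displacement computation from six rigid-body motion parameters. The file name and column convention are hypothetical; check the convention of your own software before using anything like this.

```python
import numpy as np

def framewise_displacement(motion_params, head_radius=50.0):
    """Framewise displacement from six rigid-body motion parameters per volume.

    Assumed column convention: columns 0-2 are translations in mm, columns 3-5 are
    rotations in radians (check your package; some tools write rotations first).
    Rotations are converted to arc length on a sphere of `head_radius` mm, and FD is
    the sum of absolute backward differences across all six parameters.
    """
    p = np.asarray(motion_params, dtype=float).copy()
    p[:, 3:6] *= head_radius                       # radians -> mm of arc length
    return np.abs(np.diff(p, axis=0)).sum(axis=1)  # one FD value per volume transition

# Hypothetical usage: flag volumes exceeding a common 0.5 mm scrubbing threshold.
motion = np.loadtxt("motion_parameters.par")       # hypothetical 6-column text file
fd = framewise_displacement(motion)
flagged_volumes = np.flatnonzero(fd > 0.5) + 1     # indices of volumes to scrub/censor
```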
Tutorial Outline
The following topics will be covered in the tutorial:
1. Slice Timing
a) Why is slice timing correction required?
b) Existing sub-optimal methods
c) When can it be eliminated?
d) What is the interaction with motion?
2. Motion Correction
a) What are the effects of motion on fMRI signal?
b) Why is resting-state fMRI more sensitive to motion?
c) Existing methods of motion correction
d) What needs to be done?
Target Audience
The expected audience of the tutorial will primarily consist of fMRI data analysts who face the need for pre-processing before embarking on group-level analysis to answer substantive research questions in basic and diagnostic neuroscience. We hope to attract both novices who are just becoming familiar with fMRI data analysis and more seasoned practitioners who already have experience with standard pre-processing implementations in common software packages (e.g. FSL, SPM).