
ISBI 2024 Special Sessions

    • Lead Organizers:
      • Moo K. Chung, University of Wisconsin-Madison, USA
    • Speakers:
      • Yalin Wang, Arizona State University, USA
      • Anqi Qiu, Hong Kong Polytechnic University, China
      • Vijay Anand, University College London, UK
      • Mustafa Hajij, University of San Francisco, USA
    • Brief Description:

    The era of big data in biology and medicine brings exciting opportunities for new scientific discoveries, as well as challenges in biomedical image processing and analysis. Yet valuable information in the sheer volume of complex imaging data may be hidden in patterns that cannot be decoded easily with standard tools. Recently, the simplicial complex data structure has emerged as a promising avenue of research for extracting such hidden patterns from biomedical images. A simplicial complex is a collection of vertices, edges, triangles, and their n-dimensional counterparts; these components are called simplices. Together they form a structure that can capture complex, multi-scale interactions among data points. Unlike traditional graph-based data structures, which can only capture pairwise relationships, simplicial complexes can encode higher-order interactions among vertices, thereby enriching the data representation. Simplicial complexes can incorporate both geometric and topological information, offering a more comprehensive understanding of the imaged structures. Persistent homology, a method from topological data analysis that studies the topological features of a space at various spatial resolutions, can be naturally applied to simplicial complex representations. This new data structure and its accompanying geometric and topological tools promise to significantly advance the field by providing new means for extracting and interpreting intricate, previously unaddressed patterns in biomedical images.
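
    To make the higher-order structure concrete, the following minimal Python sketch (an illustration written for this description, not material from the session; the toy point cloud and function names are assumptions) builds a Vietoris-Rips filtration over a point cloud and computes 0-dimensional persistent homology, i.e. the birth and death of connected components, with a simple union-find:

        # Illustrative sketch only: toy data, plain numpy, no TDA library assumed.
        import numpy as np

        def rips_h0_persistence(points):
            """(birth, death) pairs of connected components in a Rips filtration.

            Every vertex (0-simplex) is born at filtration value 0; a component
            dies when the edge (1-simplex) that merges it into another component
            enters the filtration.
            """
            n = len(points)
            diff = points[:, None, :] - points[None, :, :]
            dist = np.sqrt((diff ** 2).sum(-1))          # edge filtration values
            edges = sorted((dist[i, j], i, j)
                           for i in range(n) for j in range(i + 1, n))

            parent = list(range(n))                      # union-find forest

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x

            pairs = []
            for d, i, j in edges:                        # increasing edge length
                ri, rj = find(i), find(j)
                if ri != rj:                             # edge merges two components
                    parent[rj] = ri
                    pairs.append((0.0, d))               # one component dies at d
            pairs.append((0.0, float("inf")))            # one component persists
            return pairs

        rng = np.random.default_rng(0)
        # Two well-separated clusters: expect one finite, long-lived H0 feature.
        cloud = np.vstack([rng.normal(0, 0.1, (20, 2)),
                           rng.normal(3, 0.1, (20, 2))])
        long_lived = [p for p in rips_h0_persistence(cloud) if p[1] - p[0] > 1.0]
        print(long_lived)

    In practice one would use a dedicated library and also track higher-dimensional simplices (triangles, tetrahedra) to capture loops and voids, but the filtration-and-merge logic above is the core idea behind persistence on simplicial complexes.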

    • Lead Organizers:
      • Antonio Martínez Sánchez, University of Murcia, Spain
      • Tingying Peng, Helmholtz Institute Munich, Germany
    • Speakers:
      • Carlos Oscar Sorzano Sánchez, Centro Nacional de Biotecnología, Spain
      • Harold Phelippeau, Thermo Fisher Scientific, USA
      • Ricardo D. Righetto, Biozentrum – University of Basel, Switzerland
      • Daniel Baum, Zuse Institute Berlin, Germany
    • Brief Description:

    The cellular environment is characterized by the presence of many different molecular species, where macromolecular complexes, stable or transient, underlie critical cellular functions. Current 3D electron microscopy techniques make these accessible: cryo-electron microscopy (cryo-EM) allows molecular complexes to be reconstructed at near-atomic resolution, while cryo-electron tomography (cryo-ET) enables accurate three-dimensional visualization and analysis of the subcellular architecture at molecular resolution and in situ, i.e. under native conditions and preserving functional interactions. In addition, recent advances in serial-section electron tomography open new possibilities for elucidating tissue ultrastructure. 3D electron microscopy (3DEM) relies greatly on computing, as cellular structures are highly heterogeneous and the interpretation of volumes (tomograms) is severely hampered by factors such as noise, low contrast, and anisotropic distortions. Consequently, dedicated image processing algorithms are required to analyze the data within tomograms. Moreover, recent advances in hardware have brought electron tomography into a new era by dramatically increasing data quantity and quality, but at the same time they have turned data analysis into a major bottleneck of the technique. This special session gathers experts in different application domains of 3DEM from academia and industry, who will discuss new approaches and challenges in the development of image analysis methods for 3DEM. The aim is to bring forward new ideas for automatically interpreting reconstructed tomograms and deriving quantitative information about cellular processes at the macromolecular level.
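
    As one small illustration of the kind of algorithm involved (a toy sketch written for this description, not the speakers' software; the synthetic slice, particle template, and parameters are all assumptions), the snippet below performs FFT-based cross-correlation template matching, a common building block for localizing particles in noisy, low-contrast tomographic data:

        # Illustrative sketch only: synthetic 2D data, plain numpy.
        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic "tomogram slice": disc-shaped particles buried in strong noise.
        size, radius = 128, 5
        yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        template = (xx ** 2 + yy ** 2 <= radius ** 2).astype(float)

        image = rng.normal(0.0, 1.0, (size, size))
        true_positions = [(30, 40), (80, 90), (100, 20)]
        for r, c in true_positions:
            image[r - radius:r + radius + 1, c - radius:c + radius + 1] += 2.0 * template

        # Cross-correlate the image with the zero-mean template via the FFT;
        # peaks in the correlation map mark candidate particle locations.
        t = template - template.mean()
        pad = np.zeros_like(image)
        pad[:t.shape[0], :t.shape[1]] = t
        cc = np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(pad))))
        cc = np.roll(np.roll(cc, radius, axis=0), radius, axis=1)  # centre the peaks

        peak = np.unravel_index(np.argmax(cc), cc.shape)
        print("strongest match near", peak, "- true positions:", true_positions)

    Production cryo-ET pipelines work in 3D, must account for the missing wedge and the contrast transfer function, and increasingly combine such matching with denoising and deep learning, but this correlation-against-a-reference step conveys the basic idea.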

    • Lead Organizers:
      • Laleh Golestanirad, Northwestern University, USA
      • Kyoko Fujimoto, GE HealthCare, USA
    • Speakers:
      • Lawrence L. Wald, Harvard Medical School, USA
      • Lucia Navarro de Lara, Harvard Medical School, USA
      • Rosalind Sadleir, Arizona State University, USA
      • Ulas Bagci, Northwestern University, USA
    • Brief Description:

    Magnetic resonance imaging (MRI) has undergone remarkable advancements in recent years, spanning a spectrum of out-of-the-box approaches designed to enhance patient safety, novel hardware facilitating concurrent imaging and neuromodulation, and cutting-edge applications of machine learning in image pre- and post-processing. These innovative strategies have shattered the conventional boundaries of MRI, reshaping the imaging landscape and unlocking new potential. This session seeks to introduce and explore these pioneering advances in MR engineering, safety, and image processing. While such topics are conventionally featured and well received at large MRI-focused conferences such as the annual meeting of the International Society for Magnetic Resonance in Medicine (ISMRM), those venues typically emphasize medical physics and clinical applications. MRI, however, is inherently interdisciplinary and constantly in need of innovative engineering solutions. This session will foster collaboration between the engineering-focused audience and the magnetic resonance imaging community.

    • Lead Organizers:
      • Hamid Behjat, EPFL, Switzerland
      • Selin Aviyente, Michigan State University, USA
      • Dimitri Van De Ville, EPFL & University of Geneva, Switzerland
    • Speakers:
      • Nicolas Farrugia, IMT Atlantique, France
      • James Pang, Monash University, Australia
      • Julie Coloigner, IRISA, France
      • Selin Aviyente, Michigan State University, USA
      • Hamid Behjat, EPFL, Switzerland
    • Brief Description:

    Modern brain imaging techniques provide us with unique views of brain structure and function, i.e., how the brain is wired, and where and when activity takes place. In particular, brain signals and images of a multi-modal nature are acquired at exquisitely high spatial and/or temporal resolution. Traditional signal processing has to date provided invaluable insights into brain structure and function, but it is limited in that it cannot directly integrate these two classes of data. Thus, given the complex, inhomogeneous nature of the brain, novel algorithms that jointly exploit brain structure and functional activity are needed to improve our understanding of the brain.

    This special session involves novel research in understanding the underpinnings of multi-modal brain signal and imaging data by leveraging principles from the recently emerged field of graph signal processing (GSP). In GSP, signals acquired at the nodes of a given graph are studied atop the underlying graph structure. Generalisations of traditional signal processing notions are then leveraged to study graph signals. GSP is thus of particular interest in applications in which, besides the available signal/image, complementary data is available that can be used to define the domain of the signals at hand. In the past decade, an increasing number of fundamental signal processing operations, such as the Fourier transform, filtering, and convolution, have been generalised to the graph setting, allowing graph signals to be analysed from a novel viewpoint. GSP thus provides a unique set of tools for modeling, analyzing, and interpreting brain functional data, as it enables representing the observed activity in a way that is informed by the underlying neuroanatomical architecture of the brain.
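
    As a concrete illustration of the GSP machinery described above, the short Python sketch below (a toy example written for this description; the 5-node adjacency matrix is an assumed miniature "connectome", not real data) computes the graph Fourier transform from the Laplacian eigendecomposition and applies a smooth low-pass spectral filter to a graph signal:

        # Illustrative sketch only: plain numpy, toy graph and signal.
        import numpy as np

        # Toy undirected, weighted graph (e.g., a 5-node structural connectivity matrix).
        A = np.array([[0, 1, 1, 0, 0],
                      [1, 0, 1, 0, 0],
                      [1, 1, 0, 1, 0],
                      [0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 0]], dtype=float)

        D = np.diag(A.sum(axis=1))        # degree matrix
        L = D - A                         # combinatorial graph Laplacian
        lam, U = np.linalg.eigh(L)        # lam: graph frequencies, U: Fourier basis

        x = np.array([1.0, 0.9, 1.1, -1.0, -1.2])   # graph signal (activity per node)

        x_hat = U.T @ x                   # graph Fourier transform (GFT)
        h = np.exp(-2.0 * lam)            # smooth low-pass spectral filter h(lambda)
        x_smooth = U @ (h * x_hat)        # filter spectrally, then inverse GFT

        print("graph frequencies :", np.round(lam, 3))
        print("low-pass filtered :", np.round(x_smooth, 3))

    The same pipeline applies unchanged when the graph is a subject-specific structural connectome and the signal is, for instance, regional functional activity, which is the kind of setting this session addresses.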

    • Lead Organizers:
      • Lorenza Brusini, University of Verona, Italy
      • Vince Calhoun, Georgia State University & Georgia Tech, USA
      • Paolo Provero, University of Torino, Italy
    • Other Organizers:
      • Ilaria Boscolo Galazzo, University of Verona, Italy
      • Gloria Menegaz, University of Verona, Italy
    • Speakers:
      • Fabrizio Pizzagalli, University of Torino, Italy
      • Marco Lorenzi, INRIA, France
      • Yu-Ping Wang, Tulane University, USA
      • Li Shen, University of Pennsylvania, USA
      • Sergey Plis, Georgia State University, USA
    • Brief Description:

    The integration of imaging- and genetics-derived data primarily aims to assess the genetic determinants underlying complex structural and functional phenotype variations, such as those occurring in the brain with ageing. Recent research in this field has revealed that focusing on -omics data such as transcriptomics provides a more direct link to the phenotypes represented by images than genomics does. Indeed, the advent of technologies such as single-cell RNA sequencing has made it possible to collect datasets and atlases that enable so-called image-based spatial transcriptomics, that is, the identification of the spatial distribution and regulation of gene expression on the image. This leads to the uncovering of the molecular mechanisms driving physiological as well as pathological brain processes. In the last decades, large multidisciplinary collaborations and long-term multimodal studies, e.g., ADNI, ENIGMA, and UK Biobank, have made it possible to access big repositories of different types of data, including images and genetic information. This availability, together with the exquisite heterogeneity of the data itself, makes deep learning particularly attractive for representing the imaging-genetics interplay thanks to its model-free nature. However, despite the undeniable advantages of deep learning models such as deep neural networks, the complexity of their architectures makes it mandatory to obtain explanations that favor interpretability, especially in the fields of medicine, healthcare and neuroscience. For this reason, eXplainable Artificial Intelligence (XAI) is fundamental to explain how a model reached a specific outcome, how the features contributed, and to what extent the model is confident about its decision.
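
    As a minimal illustration of the kind of explanation XAI provides, the sketch below (a hypothetical example written for this description; the logistic model, its weights, and the "imaging-genetics" features are assumptions, not taken from any study mentioned above) computes the simplest gradient-based attribution, gradient-times-input, for one subject:

        # Illustrative sketch only: hypothetical model and features.
        import numpy as np

        rng = np.random.default_rng(1)

        # Toy classifier over five hypothetical features (e.g., regional cortical
        # thickness values and gene-expression scores): a logistic regression.
        w = np.array([1.5, -2.0, 0.3, 0.0, 0.8])   # assumed learned weights
        b = -0.2

        def model(x):
            return 1.0 / (1.0 + np.exp(-(x @ w + b)))   # predicted probability

        x = rng.normal(size=5)            # one subject's feature vector
        p = model(x)

        # Analytic gradient of the sigmoid output with respect to the inputs,
        # then the common gradient-times-input attribution per feature.
        grad = p * (1.0 - p) * w
        attribution = grad * x

        for i, a in enumerate(attribution):
            print(f"feature {i}: contribution {a:+.3f}")
        print(f"prediction: {p:.3f}")

    Deep imaging-genetics models are far more complex and call for richer attribution techniques, but the principle of tracing a prediction back to per-feature contributions is the same.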

    • Lead Organizers:
      • Shandong Wu, University of Pittsburgh, USA
    • Speakers:
      • Shandong Wu, University of Pittsburgh Medical Center, USA
      • Young-Gon Kim, Seoul National University Hospital, South Korea
      • Matthew W. Pease, School of Medicine, Indiana University, USA
      • Riccardo Lattanzi, NYU Langone, USA
    • Brief Description:

    Research in artificial intelligence (AI)/machine learning (ML) is experiencing explosive growth. Numerous computational models, algorithms, prototypes, and systems have been developed in the medical imaging domain, with evidence of early success. Medical imaging is a major data modality in healthcare, yet it is also complicated, with high-resolution and multi-dimensional information that requires sophisticated methods for analysis. A strong focus of today's medical imaging AI research is on the methodological aspects, where researchers devote their efforts to algorithm innovation. Medical AI, however, is more of an applied science, and the ultimate goal of research is to augment clinicians and to benefit patients. AI algorithm research is unlikely to be useful in clinical practice if it lacks appropriate clinical/medical context. There is an imperative need to calibrate valuable research towards clinical significance and medical demands. Clinical and translational AI research focuses more on the application side of computational AI techniques, addressing unmet clinical needs and developing and evaluating practically useful AI solutions. How to conduct clinical and translational medical imaging AI research is a gap to fill for many computationally trained researchers, who are eager to learn the mindsets, approaches, and skills for adapting and translating technique-focused research to the clinical side. In this special session, we will elucidate this exact topic of applied medical imaging AI research, delivering a few talks that bring clinical and translational perspectives to the attendees.