Challenge 1: Fuse My Cells Challenge: From Single View to Fused Multiview Lightsheet Imaging
Challenge Link: https://fusemycells.grand-challenge.org/
Authors:
- Dorian Kauffmann, France BioImaging, French National Centre for Scientific Research (CNRS) & Laboratoire d’Informatique, de Robotique et de Microélectronique de Montpellier (LIRMM), France
- Emmanuel Faure, Researcher, France BioImaging, University of Montpellier, CNRS, France
- Guillaume Gay, Research Engineer, CNRS, University of Montpellier, France BioImaging, France
Abstract:
France-BioImaging’s Fuse My Cells challenge aims to advance new methods for 3D image-to-image fusion using deep learning in the fields of biology and microscopy.
In multi-view microscopy, creating a fused 3D image stack requires capturing several views (typically four) of the sample from different angles, aligning them in a common spatial reference frame, and then fusing them into a single 3D image. The fused image compensates for point spread function anisotropy and for signal degradation in acquisitions deep within the sample, at the expense of increased photon exposure, which damages the sample.
The main objective of the Fuse My Cells challenge is to predict a fused 3D image using only one or two available 3D views, providing a practical solution to the limitations of current microscopy techniques, such as improving image quality, extending the duration of live imaging, saving on the photon budget, and facilitating image analysis.
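As a minimal illustration of the fusion step described above, the sketch below averages co-registered 3D views with NumPy. This assumes the views have already been aligned to a common reference frame; it is a toy baseline, not the challenge's intended method (real pipelines typically use deconvolution-based or learned fusion). The function name `fuse_views` and the optional per-view weights are illustrative assumptions.

```python
import numpy as np

def fuse_views(views, weights=None):
    """Fuse co-registered 3D views by (weighted) averaging.

    views: list of 3D numpy arrays already aligned to a common
    spatial reference frame.
    weights: optional per-view weights (e.g., higher weight for the
    view with better local signal); normalized internally.
    """
    stack = np.stack([v.astype(np.float64) for v in views], axis=0)
    if weights is None:
        return stack.mean(axis=0)
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()
    # Contract the view axis: sum_i w[i] * stack[i]
    return np.tensordot(w, stack, axes=1)

# Toy example: two noisy "views" of the same 4x4x4 volume
rng = np.random.default_rng(0)
truth = rng.random((4, 4, 4))
view_a = truth + rng.normal(0.0, 0.1, truth.shape)
view_b = truth + rng.normal(0.0, 0.1, truth.shape)
fused = fuse_views([view_a, view_b])
```

Averaging already reduces view-dependent noise; the challenge asks participants to go further and predict the fused result from only one or two views.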
Challenge 2: Pap Smear Cell Classification Challenge
Challenge Link: https://www.kaggle.com/competitions/pap-smear-cell-classification-challenge
Authors:
- Dávid Kupás, Department of Data Science and Visualization, University of Debrecen
- Balázs Harangi, Department of Data Science and Visualization, University of Debrecen
- Nicolai Spicher, Department of Medical Informatics, Universitätsmedizin Göttingen
- Péter Kovács, Department of Numerical Analysis, Eötvös Loránd University
- András Hajdu, Department of Data Science and Visualization, University of Debrecen
- Ilona Kovács, Department of Pathology, Kenezy Gyula University Hospital and Clinic
Abstract:
This challenge aims to advance the development of algorithms for the classification of cervical cell images extracted from Pap smears. This classification plays a crucial role in aiding cervical cancer screening by identifying abnormal cells that may indicate pre-cancerous conditions. The challenge seeks to attract innovative methods from the biomedical image analysis community, with a focus on improving both the accuracy and efficiency of automated cervical cell classification. From a technical perspective, the challenge addresses issues of data variability, feature extraction, and machine learning model performance, with the goal of reducing false positives and false negatives in cervical cancer detection. The envisioned impact is to improve screening processes and provide tools that can enhance early diagnosis of cervical cancer, thereby improving patient outcomes.
Challenge 3: Fetal Ultrasound Grand Challenge: Semi-Supervised Cervical Segmentation
Challenge Link: https://www.codabench.org/competitions/4781/
Authors:
- Lead Organizers
Jieyun Bai, the University of Auckland, Auckland, New Zealand and Jinan University, Guangzhou, China
Zihao Zhou, Jinan University, Guangzhou, China
Yitong Tang, Jinan University, Guangzhou, China
- Associate organizing committee
Ziduo Yang, Jinan University, Guangzhou, China
Md. Kamrul Hasan, Imperial College London, London, UK
Jie Gan, University of Sydney, Sydney, Australia
Zhuonan Liang, University of Sydney, Sydney, Australia
Weidong Cai, University of Sydney, Sydney, Australia
Tao Tan, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands and Macao Polytechnic University, Macao SAR, China
Jing Ye, Monash University, Australia and Shanghai AI Laboratory, Shanghai, China
Mohammad Yaqub, Mohamed bin Zayed University, Abu Dhabi, United Arab Emirates
Dong Ni, Shenzhen University, China
Saad Slimani, Ibn Rochd University Hospital, Hassan II University, Casablanca, Morocco
Benard Ohene-Botwe, Department of Radiography, School of Biomedical and Allied Health Sciences, College of Health Sciences, University of Ghana, Accra, Ghana
Victor Manuel Campello, Artificial Intelligence in Medicine Lab (BCN-AIM), Barcelona, Spain
Karim Lekadir, Artificial Intelligence in Medicine Lab (BCN-AIM), Barcelona, Spain
Abstract:
Premature birth remains one of the leading causes of neonatal mortality in the U.S. and other developed nations, posing significant healthcare challenges. Despite numerous efforts, strategies to prevent preterm labor and birth have yielded only limited success. However, recent advancements have improved the ability to predict preterm birth, offering a crucial opportunity for early intervention. Among these advancements, ultrasound imaging of the uterine cervix has emerged as a key clinical tool in predicting spontaneous preterm labor and birth. Early identification of high-risk pregnancies enables healthcare professionals to focus obstetric and neonatal care on those most in need, improving outcomes.
From a biomedical perspective, transvaginal ultrasound is the preferred method for visualizing the cervix in most patients, offering detailed insight into cervical anatomy and structure. Accurate segmentation of ultrasound (US) images of the cervical muscles is essential for analyzing deep muscle structures, assessing their function, and monitoring treatment protocols tailored to individual patients.
From a technical standpoint, the manual annotation of cervical structures in transvaginal ultrasound images is labor-intensive and time-consuming, limiting the availability of large labeled datasets required for robust machine learning models. In response to this challenge, semi-supervised learning approaches have shown potential by leveraging both labeled and unlabeled data, enabling the extraction of useful information from unannotated cases. This method could reduce the need for extensive manual annotation while maintaining accuracy, thus accelerating the development of automated cervical image segmentation systems. The envisioned impact of this challenge is twofold: improving clinical decision-making through more accessible and accurate diagnostic tools and advancing machine learning techniques for medical image analysis, particularly in resource-constrained environments.
We extend the MICCAI PSFHS 2023 Challenge and the MICCAI IUGC 2024 Challenge from fully supervised settings to a semi-supervised setting that focuses on how to use unlabeled data.
Challenge 4: Advancing Automated Detection and Classification of Mitotic Figures in Glioma for Enhanced Prognosis and Treatment (Glioma-MDC 2025)
Challenge Link: https://www.kaggle.com/competitions/glioma-mcd-2025
Authors:
Yinyan Wang, Beijing Tiantan Hospital, Capital Medical University, China
Jia Wu, MD Anderson, USA
Hongbo Bao, Beijing Tiantan Hospital & Harbin Medical University Cancer Hospital, China
Rongjie Wu, Beijing Tiantan Hospital & Capital Medical University, China
Wentao Li, UT MD Anderson Cancer Center, USA
Xiangyu Pan, University of Testing, USA
Dongao Zhang, Sanbo Brain Hospital, Capital Medical University, China
Chao Li, University of Cambridge, UK
Waqas Muhammad, UT MD Anderson, USA
Rukhmini Bandyopadhyay, UT MD Anderson, USA
Abstract:
The Glioma-MDC 2025 challenge aims to advance the field of digital pathology by developing robust algorithms for the detection and classification of mitotic figures in glioma tissue samples. Gliomas, the most common and aggressive primary brain tumors, have a dire prognosis, making accurate and timely diagnosis crucial for effective treatment. A key indicator of glioma aggressiveness is the rate of cellular proliferation, which is often measured by identifying and counting mitotic figures in histopathological images.
From a biomedical perspective, detecting mitotic figures, especially those that are abnormal, provides essential information for tumor grading and prognostication. Traditional methods rely on manual counting by pathologists, which can be time-consuming and prone to variability. Automating this process using advanced image analysis techniques could significantly enhance diagnostic accuracy, consistency, and efficiency, ultimately leading to better patient outcomes.
From a technical perspective, the challenge involves the development and validation of machine learning and computer vision algorithms capable of accurately identifying mitotic figures in high-resolution images of H&E-stained glioma tissue. Participants will work with a dataset of annotated image patches, focusing on the detection and classification of mitotic figures. The challenge will test the ability of these algorithms to generalize across different image patches and maintain high accuracy in detecting mitoses, especially in challenging cases with abnormal mitotic figures.
Challenge 5: Beyond FA
Challenge Link: https://bfa.grand-challenge.org/
Authors:
Elyssa M McMaster, Vanderbilt University, USA
Nancy Rose Newlin, Vanderbilt University & Microsoft Research, USA
Chloe Cho, Vanderbilt University, USA
Gaurav Rudravaram, Vanderbilt University, USA
Karthik Ramadass, Vanderbilt University, USA
Adam Saunders, Vanderbilt University, USA
Jongyeon Yoon, Vanderbilt University, USA
Yehyun Suh, Vanderbilt University & Alphatec Spine, USA
Trent Schwartz, Vanderbilt University, USA
Michael Kim, Vanderbilt University, USA
Yihao Liu, Vanderbilt University, USA
Lianrui Zuo, Vanderbilt University, USA
Kurt G Schilling, Vanderbilt University, USA
Talia M. Nir, University of Southern California, USA
Neda Jahanshad, University of Southern California, USA
Daniel Moyer, Vanderbilt University, USA
Eleftherios Garyfallidis, Indiana University, USA
Bennett Landman, Vanderbilt University, USA
Abstract:
The inclusion of diffusion weighted magnetic resonance imaging (DW-MRI) in major national studies has allowed for further study of white matter structure. Fractional anisotropy (FA) is a metric frequently used to interpret white matter integrity, but it exhibits high sensitivity and low specificity in pathological interpretation. This makes it difficult to use FA as a reliable biomarker, as it obscures distinctions between ages, sexes, cognitive statuses, and pathologies. With the development of other white matter models and metrics, such as complex network measures, tractography bundle analysis, and NODDI metrics, there are thousands of metric combinations available for white matter integrity analysis, especially in the context of lower-quality clinical data, to which FA’s voxel-wise computation can be sensitive. The widespread collection of DW-MRI data necessitates an analysis of these metrics to interpret white matter integrity beyond what FA can capture.
The key innovations are (1) we crowdsource preferred diffusion metrics from teams around the world in the context of biomarker development for the first time, (2) we evaluate and compare the potential to use these biomarkers for empirical study in the same context for the first time, and (3) we evaluate the sensitivity of diffusion MRI and its models to secret sources of variability to be revealed to teams at ISBI 2025.
All challenge papers must be submitted via the following link:
https://edas.info/N32831
- Authors should submit their challenge paper through the EDAS system and select the correct challenge topic.
- Authors are expected to prepare their paper according to the ISBI full paper guidelines: https://biomedicalimaging.org/2025/contributors/4-page-paper-guidelines/. Specifically, a valid challenge paper should include the following sections: Introduction, Data Description, Method Description, Cross-Validation Results on Training Data and Results on Validation Data, Discussion, and Conclusion. (Results on the final testing set will, of course, not be included in challenge papers.)
Schedule
Dec 9 – Initiation of Challenge
– Challenge website, Training data and contact information should be publicly available
– Challenge participants develop their method and can submit for evaluation
– The evaluation platform/approach is handled by the challenge organizers
February 6 – Paper submission deadline
– Challenge participants submit their papers on the EDAS platform (challenge track)
– Review period is open for 2 weeks
– Reviewers are assigned by the organizers of each challenge
– Metareview is carried out by the organizers of each challenge
– Method evaluations are still on-going (i.e., challenge is still running)
February 20 – Challenge paper authors receive reviews through EDAS platform
– We recommend that organizers stop accepting method evaluations
– Organizers of each challenge should notify challenge paper authors that they have 1 week to address comments and to add final methodological changes
– Organizers of each challenge invite every valid* paper submitter to:
– Present their method at the conference (oral or poster will be decided by March 26)
– Be co-authors of the meta-analysis journal manuscript
– Organizers of each challenge involve the Challenge Chairs in the author invitations to facilitate visa invitation letters for challenge participants
February 27 – (FINAL – Camera-ready) Revised paper submissions deadline.
– 1 week for final review/acceptance by the organizers of each challenge
– Organizers of each challenge do their ranking
March 1 – Notification of oral versus poster presenters
– Note that oral slots are reserved for top-performing and particularly interesting methods
April 14-17 – Conference – Announcement of winners