Challenges

ISBI 2024 Challenges

Registration waivers will be offered to underrepresented student participants. Cloud credits and/or monetary awards will be provided to challenge-winning teams.

  • Challenge Link: https://www.synapse.org/brats_goat

    Authors:

    • Lead Organizers
      Gian Marco Conte, MD, PhD, Department of Radiology, Mayo Clinic, Rochester, MN, USA
      Ujjwal Baid, PhD, Indiana University
      Spyridon Bakas, PhD, Indiana University
    • Associate organizing committee (alphabetical)
      Mariam Aboian, Department of Radiology and Biomedical Imaging, Yale University
      Maruf Adewole, Medical Artificial Intelligence (MAI) Lab, Crestview Radiology Ltd., Lagos, Nigeria
      Jake Albrecht, Sage Bionetworks
      Udunna Anazodo, Montreal Neurological Institute, McGill University, Montreal, Canada / Medical Artificial Intelligence (MAI) Lab, Crestview Radiology Ltd., Lagos, Nigeria
      Evan Calabrese, Duke Center for Artificial Intelligence in Radiology (DAIR), Department of Radiology, Division of Neuroradiology, Duke University Medical Center
      Verena Chung, Sage Bionetworks
      Anastasia Janas, Department of Radiology and Biomedical Imaging, Yale University

    Abstract:

    Since its inception in 2012, the International Brain Tumor Segmentation (BraTS) challenge has focused on generating a benchmarking environment and a dataset for the delineation of adult brain gliomas. The BraTS 2023 challenge kept the same focus on a common benchmarking environment, while the dataset expanded to explicitly address 1) the same adult glioma population, as well as 2) the underserved sub-Saharan African brain glioma patient population, 3) brain/intracranial meningioma, 4) brain metastasis, and 5) pediatric brain tumor patients. Although segmentation is the most widely investigated medical image processing task, the various challenges have been organized to focus only on specific clinical tasks; that is, each segmentation method was evaluated exclusively on the patient population it was trained on in each sub-challenge. In this challenge, we aim to organize the Generalizability Assessment of Segmentation Algorithms Across Brain Tumors. The hypothesis is that a method capable of performing well on multiple segmentation tasks will generalize well to unseen tasks. Specifically, in this task we will assess algorithmic generalizability beyond each individual patient population, focusing across all of them. Importantly, although each MR exam will undergo the same preprocessing pipeline, including an intensity normalization step, there are characteristics of each exam that will not be affected (e.g., different numbers of lesions per exam, different locations within the brain), preserving the generalizability aspect of the challenge.
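    The preprocessing pipeline itself is defined by the BraTS organizers; purely as a minimal, illustrative sketch of the kind of intensity normalization mentioned above, the snippet below applies z-score normalization restricted to a brain mask using NumPy. The array shapes and mask are toy assumptions, not the official BraTS pipeline.

      import numpy as np

      def zscore_normalize(volume: np.ndarray, brain_mask: np.ndarray) -> np.ndarray:
          """Z-score normalize intensities inside the brain mask; background stays zero."""
          voxels = volume[brain_mask > 0]
          mean, std = voxels.mean(), voxels.std()
          normalized = np.zeros_like(volume, dtype=np.float32)
          normalized[brain_mask > 0] = (voxels - mean) / (std + 1e-8)
          return normalized

      # Toy example: a random "volume" with a central "brain" region.
      volume = np.random.rand(64, 64, 64).astype(np.float32) * 1000
      mask = np.zeros_like(volume)
      mask[16:48, 16:48, 16:48] = 1
      normed = zscore_normalize(volume, mask)
      print(normed[mask > 0].mean(), normed[mask > 0].std())  # approx. 0 and 1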

  • Challenge Link: https://celltrackingchallenge.net

    Authors:

    • Michal Kozubek, Main Coordinator, Masaryk University, Brno (Czech Republic)
    • Alexandre Cunha, California Institute of Technology, Pasadena, CA (USA)
    • Martin Maška, Masaryk University, Brno (Czech Republic)
    • Erik Meijering, University of New South Wales, Sydney (Australia)
    • Arrate Muñoz-Barrutia, Universidad Carlos III de Madrid, Madrid (Spain)
    • Carlos Ortiz de Solórzano, Center for Applied Medical Research, Pamplona (Spain)
    • Tammy Riklin Raviv, Ben-Gurion University of the Negev, Beer-Sheva (Israel)
    • Johannes Stegmaier, RWTH Aachen University, Aachen (Germany)
    • Virginie Uhlmann, BioVisionCenter, University of Zurich (Switzerland) and European Bioinformatics Institute (EMBL-EBI), Hinxton (United Kingdom)

    For a full list including collaborators, see http://celltrackingchallenge.net/organizers/ 

    Abstract:

    The Cell Tracking Challenge (CTC) was launched in 2012 with the aim of fostering the development of novel, robust cell segmentation and tracking algorithms and of helping developers evaluate their new algorithmic developments. Over its more than decade-long existence, six fixed-deadline ISBI challenge editions have been organized, and since February 2017 the challenge has been open for online submissions that are evaluated, ranked, and posted on the challenge website monthly. So far, two benchmarks have been offered: a segmentation-and-tracking benchmark (evaluating both segmentation and tracking performance) and a segmentation-only benchmark (evaluating segmentation performance alone; no tracking is required). A detailed description of the focus and history of the CTC can be found at http://celltrackingchallenge.net/ and in the recent open-access Nature Methods summary of the 10 years of its existence. The CTC is in constant evolution, and, as we did in the previous six editions attached to ISBI 2013-2015 and ISBI 2019-2021, we plan to introduce some novelties in this new ISBI-sponsored challenge event.
    Specifically, in this new 7th edition, participants will be encouraged to submit further solutions to the recently opened generalizability tasks, either within the segmentation-and-tracking benchmark (Task 1) or the segmentation-only benchmark (Task 2). The generalizability tasks focus on the development of methods that exhibit better generalizability and work across most, if not all, of the existing datasets, instead of being optimized for only one or a few datasets. These tasks were established for the ISBI 2021 edition and their first results were reported in the above-mentioned paper, but no further results have been received since then. Furthermore, a new tracking-only, more precisely linking-only, benchmark (Cell Linking Benchmark) will be introduced to complement the segmentation-only benchmark for those who want to evaluate purely the object-linking methods without having to supply segmentation results. Such a benchmark has been missing from the CTC portfolio and has been requested by CTC participants and the scientific community at large. Participants will be encouraged to supply ideally generalizable solutions (Task 3) working across 13 preselected datasets, but will also be able to submit dataset-specific solutions (Task 4) for datasets of their choice.
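    The Cell Linking Benchmark defines its own data formats and measures; purely as an illustration of what a baseline object-linking step can look like, the hedged sketch below matches detections between two consecutive frames by centroid distance using the Hungarian algorithm. The max_dist threshold and the toy coordinates are assumptions, not challenge specifications.

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      def link_frames(centroids_t, centroids_t1, max_dist=20.0):
          """Optimal one-to-one linking of detections between two consecutive frames.

          Returns (i, j) index pairs; detections whose best match is farther than
          max_dist stay unlinked (appearing/disappearing cells)."""
          cost = np.linalg.norm(centroids_t[:, None, :] - centroids_t1[None, :, :], axis=-1)
          rows, cols = linear_sum_assignment(cost)
          return [(int(i), int(j)) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]

      # Toy example: three cells drifting slightly, one leaving the field of view.
      t0 = np.array([[10.0, 10.0], [40.0, 40.0], [70.0, 15.0]])
      t1 = np.array([[12.0, 11.0], [41.0, 43.0]])
      print(link_frames(t0, t1))  # [(0, 0), (1, 1)]; cell 2 remains unlinked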

  • Challenge Link: https://lightmycells.grand-challenge.org/

    Authors:

    • Dorian Kauffmann – Challenge project engineer at France BioImaging Infrastructure (FBI)
    • Emmanuel Faure – CNRS researcher (CNRS-UM) & FBI.data mission officer
    • Guillaume Gay – Research engineer for the FBI.data project
    • Edouard Bertrand – Research Director and Scientific Director of France BioImaging Infrastructure (FBI)
    • Thomas Walter – Professor at Mines Paris and Director of the Centre for Computational Biology (CBIO)
    • Christophe Zimmer – Research Director at Institut Pasteur

    Abstract:

    The Light My Cells France-Bioimaging challenge aims to contribute to the development of new image-to-image ‘deep-label’ methods in the fields of biology and microscopy. The main task is to predict the best-focus image of multiple fluorescently labelled organelles from label-free transmitted-light images. To make such methods usable in practice, the aim of this challenge is to produce new open-source methods that can handle large acquisition variability: Z-focus, multiple channels, acquisition sites, input modalities (Bright Field, Phase Contrast, and Differential Interference Contrast or DIC), instruments, magnifications, cells, and markers. The high variability of the database is made possible by the structuring role of the France-Bioimaging national infrastructure, which federates 23 imaging acquisition sites distributed all over France.

    Biomedical point of view:

    To obtain fluorescence microscopy images, it is necessary to perform a time-consuming and costly manual biochemical labelling treatment of the cells with specific fluorescent probes and dyes. However, the cells under study may themselves be perturbed by the fluorescence microscopy process, both by exposure to excitation light (phototoxicity) and by the probes themselves. As phototoxicity increases with light exposure, it impairs long-term imaging. Similarly, fluorophore dimming through photobleaching limits the signal-to-noise ratio of the images. Furthermore, adding markers is an invasive method: the fluorophore might hinder its target’s molecular interactions, and protein overexpression increases its concentration in the cytoplasm, disrupting regulation processes. Worse, the fluorophores themselves can be cytotoxic. As fluorescence microscopy induces temporal and functional perturbations, it is thus crucial for live microscopy to limit the number of fluorescent probes used in an experiment. In contrast, label-free transmitted-light microscopy such as bright field, phase contrast, and DIC is non-invasive, phototoxicity is sharply reduced, and the signal quality is conserved throughout the acquisition. The biological aim of this challenge is to recover fluorescence images in silico from bright-field images.

    Technical point of view:

    We want to give a boost to multi-output deep learning methods based on a single input, when the training database is made up of images that do not always include all the required channels and have a high degree of variability (e.g., magnification, depth of focus, numerical aperture). This leads participants to develop, in particular, new architectures and loss functions dedicated to sparse outputs. The purpose is to offer biologists a tool that is robust to any acquisition protocol and effective for the whole community, irrespective of image size, cell line, acquisition site, modality, or instrument. To assess the generalisability of the methods developed, we will exclude one complete acquisition site from the training database and leave it for the final evaluation. For the “Light My Cells” challenge, we want to evaluate the ability of the methods to predict the best Z-focus plane for any organelle, even under poor acquisition conditions. To achieve this goal, participants will be able to perform data augmentation using the acquisitions themselves, which consist of large transmitted-light Z-stacks containing a majority of out-of-focus planes.

    We defined metrics for each of the 4 organelles and for each of the 5 deviations of the focus plane to measure the ability to perform the task. We will evaluate each participant on this 4×5 metrics matrix, and the winners will be those with the best average over all the metrics. Moreover, participants will get an additional bonus for: code quality and accessibility, a lightweight deep learning model, short training and prediction times, and evaluation of the carbon footprint. Among the current state-of-the-art approaches for image-to-image tasks in bio-imaging are “DeepHCS: Bright-field to fluorescence microscopy image conversion using multi-task learning with adversarial losses for label-free high-content screening” (2021) and “Label-free prediction of cell painting from bright field images” (2022), both of which focus their methodologies solely on the bright-field imaging modality, while “In Silico Labelling: Predicting Fluorescent Labels in Unlabeled Images” (2018) uses the same three modalities as our approach. While “DeepHCS” (2021) and “In Silico Labelling” (2018) use a wide range of metrics to assess image quality, “Label-free prediction of cell painting” (2022) uses a more restricted set of metrics. However, these previous works present a very low diversity of applications and do not provide an easily accessible database. In addition, “DeepHCS” (2021) faces limitations due to the fixed sizes and specific dyes of its database, “In Silico Labelling” (2018) uses fixed formats that are not typical of those used in microscopy, and, similarly, the authors of “Label-free prediction of cell painting” (2022) acknowledge limitations in the size and diversity of their database.
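    The exact 4×5 metrics and loss functions are defined by the organizers; the following is only a rough sketch of one way to handle the sparse-output situation described above, namely an L1 training loss (in PyTorch) computed only over the organelle channels actually acquired for each image. The tensor shapes and the channel_present mask are illustrative assumptions, not part of the challenge specification.

      import torch
      import torch.nn.functional as F

      def masked_multi_channel_loss(pred, target, channel_present):
          """L1 loss computed only over the fluorescence channels that were acquired.

          pred, target: (B, C, H, W) tensors, C = number of organelle channels.
          channel_present: (B, C) boolean mask; False entries contribute nothing."""
          per_pixel = F.l1_loss(pred, target, reduction="none")   # (B, C, H, W)
          per_channel = per_pixel.mean(dim=(2, 3))                # (B, C)
          mask = channel_present.float()
          return (per_channel * mask).sum() / mask.sum().clamp(min=1)

      # Toy example: batch of 2 images, 4 organelle channels, some channels missing.
      pred = torch.rand(2, 4, 64, 64)
      target = torch.rand(2, 4, 64, 64)
      present = torch.tensor([[True, True, False, False],
                              [True, False, True, True]])
      print(masked_multi_channel_loss(pred, target, present))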

  • Challenge Link: https://dreaming.ikim.nrw/  

    Authors:

    • Christina Gsaxner (Institute of Computer Graphics and Vision, Graz University of Technology, Austria; Department of Oral and Maxillofacial Surgery, Medical University of Graz, Austria).
    • Shohei Mori (Institute of Computer Graphics and Vision, Graz University of Technology, Austria).
    • Gijs Luijten (Institute of Computer Graphics and Vision, Graz University of Technology, Austria; AI-guided Therapies, Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Germany).
    • Viet Duc Vu (Department of Diagnostic and Interventional Radiology, University Hospital Giessen Justus-Liebig-University Giessen, Klinikstraße 33, 35392 Giessen, Germany).
    • Timo van Meegdenburg (Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Germany).
    • Gabriele A. Krombach (Department of Diagnostic and Interventional Radiology, University Hospital Giessen Justus-Liebig-University Giessen, Klinikstraße 33, 35392 Giessen, Germany).
    • Jens Kleesiek (Medical Machine Learning, Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Essen, Germany).
    • Ulrich Eck (Chair for Computer Aided Medical Procedures & Augmented Reality, Technical University Munich, Munich, Germany).
    • Nassir Navab (Chair for Computer Aided Medical Procedures & Augmented Reality, Technical University Munich (TUM), Munich, Germany).
    • Yan Guo (Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China).
    • Xiaojun Chen (Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China).
    • Frank Hölzle (Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany).
    • Behrus Puladi (Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany; Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany).
    • Jan Egger (AI-guided Therapies, Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany).

    Abstract:

    While Augmented Reality (AR) is extensively studied in medicine, it represents just one possibility for modifying the real environment. Other forms of Mediated Reality (MR) remain largely unexplored in the medical domain. Diminished Reality (DR) is one such modality. DR refers to the removal of real objects from the environment by virtually replacing them with their background [1]. Combined with AR, powerful MR environments can be created. Although of interest within the broader computer vision and graphics community, DR is not yet widely adopted in medicine [2]. However, DR holds great potential for medical applications. For example, where constraints on space and intra-operative visibility exist, and the surgeons’ view of the patient is further obstructed by disruptive medical instruments or personnel [3], DR methods can provide the surgeon with an unobstructed view of the operation site. Recently, advancements in deep learning have paved the way for real-time DR applications, offering impressive imaging quality without the need for prior knowledge about the current scene [4].
    Specifically, deep inpainting methods stand out as the most promising direction for DR [5,6,7]. The DREAM challenge focuses on implementing inpainting-based DR methods in oral and maxillofacial surgery. Algorithms shall fill regions of interest concealed by disruptive objects with a plausible background, such as the patient’s face and its surroundings. The facial region is particularly interesting for medical DR, due to its complex anatomy and its variability across age, gender, and ethnicity. Therefore, we will provide a dataset consisting of synthetic, but photorealistic, surgery scenes focusing on patient faces, with obstructions from medical instruments and the hands holding them. These scenes are generated by rendering highly realistic humans together with 3D-scanned medical instruments in a simulated operating room (OR) setting.
    This challenge represents an initial frontier in the realm of medical DR, offering a simplified setting to pave the way for MR in medicine. In the future, the potential for more sophisticated applications is expected to unfold.
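    As a purely illustrative sketch of the inpainting-based DR workflow described above (and not of any specific method from the challenge or the references), the following PyTorch snippet shows the common mask-conditioned setup: the region covered by a disruptive object is masked out, an inpainting network predicts a plausible background, and the prediction is composited back only inside the mask. TinyInpaintNet is a stand-in placeholder, not a real DR model.

      import torch
      import torch.nn as nn

      class TinyInpaintNet(nn.Module):
          """Stand-in encoder/decoder; a real DR system would use a trained deep inpainting model."""
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
              )

          def forward(self, image, mask):
              # Concatenate the mask as a fourth channel, a common convention
              # in mask-conditioned inpainting networks.
              x = torch.cat([image * (1 - mask), mask], dim=1)
              return self.net(x)

      def diminish(image, mask, model):
          """Fill the masked (disruptive-object) region with the model prediction,
          keeping the original pixels everywhere else."""
          with torch.no_grad():
              filled = model(image, mask)
          return mask * filled + (1 - mask) * image

      # Toy example: one RGB frame with a rectangular occluder mask.
      frame = torch.rand(1, 3, 128, 128)
      mask = torch.zeros(1, 1, 128, 128)
      mask[..., 40:80, 50:90] = 1.0
      print(diminish(frame, mask, TinyInpaintNet()).shape)  # torch.Size([1, 3, 128, 128])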

    References:

    [1] Mori, S., Ikeda, S., & Saito, H. (2017). A survey of diminished reality: Techniques for visually concealing, eliminating, and seeing through real objects. IPSJ Transactions on Computer Vision and Applications, 9(1), 1-14.
    [2] Ienaga, N., Bork, F., Meerits, S., Mori, S., Fallavollita, P., Navab, N., & Saito, H. (2016, September). First deployment of diminished reality for anatomy education. In ISMAR-Adjunct (pp. 294-296). IEEE.
    [3] Egger, J., & Chen, X. (Eds.). (2021). Computer-Aided Oral and Maxillofacial Surgery: Developments, Applications, and Future Perspectives. Academic Press.
    [4] Gsaxner, C., Mori, S., Schmalstieg, D., Egger, J., Paar, G., Bailer, W., & Kalkofen, D. (2023). DeepDR: Deep Structure-Aware RGB-D Inpainting for Diminished Reality. arXiv preprint arXiv:2312.00532.
    [5] Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., & Efros, A. A. (2016). Context encoders: Feature learning by inpainting. In CVPR (pp. 2536-2544).
    [6] Yu, J., Lin, Z., Yang, J., Shen, X., Lu, X., & Huang, T. S. (2019). Free-form image inpainting with gated convolution. In ICCV (pp. 4471-4480).
    [7] Kim, D., Woo, S., Lee, J. Y., & Kweon, I. S. (2019). Deep video inpainting. In CVPR (pp. 5792-5801).

  • Challenge Link: https://justraigs.grand-challenge.org/

    Organizers:

    • Koenraad A. Vermeer, Rotterdam Ophthalmic Institute, Rotterdam Eye Hospital, Rotterdam, The Netherlands
    • Hans G. Lemij, Rotterdam Ophthalmic Institute, Rotterdam Eye Hospital, Rotterdam, The Netherlands
    • Siamak Yousefi, Department of Ophthalmology, Department of Genetics, Genomics, and Informatics, Director of the Data Mining and Machine Learning (DM2L) Laboratory, University of Tennessee Health Science Center, Memphis, USA
    • Yeganeh Madadi, Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA
    • Hina Raja, Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA

    Abstract:

    Glaucoma is a leading cause of irreversible blindness and impaired vision. In its early stages, the disease is typically asymptomatic. With more advanced glaucoma, the visual field is affected; as a result, patients stumble more often, bump into objects and other people, and may be more often involved in traffic accidents and falls. Only in the late stages of the disease are patients more aware of their visual impairment. They may experience trouble reading, suffer from night-blindness, or suffer from other symptoms of impaired vision. Once detected, glaucoma can be treated so that disease progression is effectively stopped or slowed down, but the damage cannot be repaired. Early detection and timely treatment of this disease can therefore avoid visual impairment; early detection could be facilitated through population-based glaucoma screening.
    Glaucoma affects the optic nerve, i.e., the connection between the eye and the brain; the disease is also known as glaucomatous optic neuropathy (GON). It is typically identified based on the appearance of the optic nerve head and its surroundings, for instance on color fundus photographs (CFPs) or other imaging modalities. In clinical practice, another imaging technique is optical coherence tomography (OCT), which plays an ever-growing role in the diagnosis and follow-up of GON. For screening purposes, however, CFPs are relatively inexpensive. These photographs provide crucial information for assessing various features of glaucomatous damage, including neuroretinal rim thinning and notching, increased cupping, and optic disc hemorrhages. In addition, glaucomatous thinning of the retinal nerve fiber layer (RNFL) may be readily visible on CFPs. Furthermore, CFPs have an additional benefit in that they provide a record of the eye’s baseline condition, serving as a reference for future follow-up.
    Manual identification of these features can provide high accuracy when performed by experienced specialists. However, manual segmentation is subjective and can vary among different observers. Automated detection algorithms, on the other hand, can provide consistent and reproducible results and thereby reduce inter-observer and intra-observer variability. Manual segmentation can also be time-consuming and labor-intensive, especially for large datasets or complex cases, whereas automated algorithms can process images more rapidly and may thus provide efficient solutions for large-scale screening. Artificial intelligence (AI) approaches for detecting glaucoma based on CFPs have been extensively investigated and have provided promising results. In the context of screening for low-prevalence diseases such as glaucoma, specificity is of primary importance and should be very high in order to prevent referring many false-positive cases to the health care system. The model should therefore be highly dependable and provide clinically relevant outcomes. However, current AI methods merely indicate whether or not an individual needs to be referred to an ophthalmologist, without providing any justification based on the underlying pathology. Understanding the typically glaucomatous features on which the algorithm bases a referral improves trust and enables human experts to identify errors in the decision process due to physiological or pathological deviations.
    To initiate the development of such AI algorithms for glaucoma screening and to evaluate their performance, we propose the Justified Referral in AI Glaucoma Screening (JustRAIGS) challenge, for which we have provided a unique large dataset with over 110,000 carefully annotated fundus photographs collected from about 60,000 screenees. We have generated a training subset with 101,442 gradable fundus images (from ‘referable glaucoma’ eyes and ‘no referable glaucoma’ eyes) and a test subset with 9,741 fundus images. Each fundus photograph has been labeled as either ‘referable glaucoma’ or ‘no referable glaucoma’. In addition, all fundus images of referable glaucoma eyes have been further annotated with up to ten additional labels associated with different glaucomatous features.
    In this challenge, participants will be tasked with analyzing the fundus images and assigning each image to one of two classes: ‘referable glaucoma’ or ‘no referable glaucoma’. ‘Referable glaucoma’ refers to eyes where the fundus image exhibits signs or features indicative of glaucoma that require further examination or referral to a specialist; in this case, visual field damage is expected. ‘No referable glaucoma’, on the other hand, refers to cases where the fundus image does not show significant indications of glaucoma and does not require immediate referral. Very early disease, in which visual field damage is not yet expected, would also be classified as ‘no referable glaucoma’. In addition to the referable glaucoma classification, participants will be further instructed to perform multi-label classification for ten additional features related to glaucoma. These features are specific characteristics or abnormalities that may be present in the fundus images of glaucoma patients, and the multi-label classification task involves assigning the relevant labels to each fundus image based on the presence or absence of these features, thereby providing more detailed information about the characteristics observed in ‘referable glaucoma’ cases. By combining the binary classification task (referable vs. no referable glaucoma) and the multi-label classification task (for the ten additional features), we aim to evaluate the participants’ ability to accurately identify and classify fundus images associated with referable glaucoma. The results of this classification task can provide insights into the development of automated systems or algorithms for glaucoma detection, ultimately assisting in the early identification and treatment of glaucoma patients and thereby reducing avoidable visual impairment and blindness from glaucoma.
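    The official challenge metrics are specified on the challenge website; as a hedged illustration of the screening consideration discussed above (very high specificity so that few false positives are referred), the snippet below computes sensitivity at a fixed-specificity operating point for the binary ‘referable glaucoma’ task using scikit-learn. The 95% target and the toy scores are assumptions for illustration only.

      import numpy as np
      from sklearn.metrics import roc_curve

      def sensitivity_at_specificity(y_true, y_score, target_specificity=0.95):
          """Sensitivity at the operating point where specificity reaches the target."""
          fpr, tpr, _ = roc_curve(y_true, y_score)
          ok = (1 - fpr) >= target_specificity
          return float(tpr[ok].max()) if ok.any() else 0.0

      # Toy example: scores for 'referable glaucoma' (1) vs 'no referable glaucoma' (0).
      rng = np.random.default_rng(0)
      y_true = rng.integers(0, 2, size=1000)
      y_score = y_true * 0.6 + rng.normal(0.0, 0.3, size=1000)
      print(sensitivity_at_specificity(y_true, y_score))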

  • Challenge Link: https://codalab.lisn.upsaclay.fr/competitions/16919 

    Authors:

    • Organizers:
      Wenxuan Li (Johns Hopkins University)
      Yu-Cheng Chou (Johns Hopkins University)
      Jieneng Chen (Johns Hopkins University)
      Qi Chen (University of Science and Technology of China)
      Chongyu Qu (Johns Hopkins University)
      Alan Yuille (Johns Hopkins University)
      Zongwei Zhou (Johns Hopkins University)
    • Technical Support:
      Yaoyao Liu (Johns Hopkins University)
      Angtian Wang (Johns Hopkins University)
      Junfei Xiao (Johns Hopkins University)
      Yucheng Tang (NVIDIA)
    • Annotation Team:
    • Experts:
      Xiaoxi Chen (Shanghai Jiao Tong University)
      Jincheng Wang (The First Affiliated Hospital, Zhejiang University School of Medicine)
    • Trainees:
      Huimin Xue (The First Hospital of China Medical University)
      Yixiong Chen (Johns Hopkins University)
      Yujiu Ma (Shengjing Hospital of China Medical University)
      Yuxiang Lai (Southeast University)
      Hualin Qiao (Rutgers University)
      Yining Cao (China Medical University)
      Haoqi Han (China Medical University)
      Meihua Li (China Medical University)
      Xiaorui Lin (China Medical University)
      Yutong Tang (China Medical University)
      Jinghui Xu (China Medical University)

    Abstract:

    Variations in organ sizes and shapes can indicate a range of medical conditions, from benign anomalies to life-threatening diseases. Precise organ volume measurement is fundamental for effective patient care, but manual organ contouring is extremely time-consuming and exhibits considerable variability among expert radiologists. Artificial Intelligence (AI) holds the promise of improving volume measurement accuracy and reducing manual contouring effort. We formulate our challenge as a semantic segmentation task, which automatically identifies and delineates the boundaries of various anatomical structures essential for numerous downstream applications such as disease diagnosis, prognosis, and surgical planning. Our primary goal is to promote the development of AI algorithms and to benchmark the state of the art in this field. The BodyMaps challenge particularly focuses on assessing and improving the generalizability and efficiency of AI algorithms for medical segmentation across diverse clinical settings and patient demographics. In light of this, the innovations of our BodyMaps challenge include (1) large-scale, diverse datasets for both training and evaluating AI algorithms, (2) novel evaluation metrics that emphasize the accuracy of hard-to-segment anatomical structures, and (3) penalties for algorithms with extended inference times. Specifically, this challenge involves two unique datasets. First, AbdomenAtlas, the largest annotated dataset [Qu et al., 2023; Li et al., 2023], contains a total of 10,142 three-dimensional computed tomography (CT) volumes. In each CT volume, 25 anatomical structures are annotated voxel-wise. AbdomenAtlas is a multi-domain dataset of pre-, portal-, arterial-, and delayed-phase CT volumes collected from 88 hospitals in 9 countries, diversified in age, pathological conditions, body parts, and racial background. The AbdomenAtlas dataset will be made available to the public progressively during the challenge period, and participants are encouraged to use any other public or private datasets for training their AI algorithms. Second, W-1K is a proprietary collection of 1,000 CT volumes, in which 15 anatomical structures are annotated voxel-wise. The CT volumes and annotations of W-1K will be reserved for external validation of the AI algorithms. The final score will be calculated on the W-1K dataset, measuring both the segmentation performance and the inference speed of the AI algorithms. Note that the segmentation score will not be limited to average segmentation performance but will also prioritize the performance on hard-to-segment structures. We hope our BodyMaps challenge can set the stage for larger-scale clinical trials and offer exceptional opportunities to practitioners in the medical imaging community.
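    The official BodyMaps score combines segmentation quality with an inference-time penalty and is defined by the organizers; the sketch below only illustrates the per-structure part of such an evaluation, computing a Dice score for each labeled anatomical structure in a toy volume so that small, hard-to-segment structures can be inspected (or weighted) separately. The labels and the toy volumes are assumptions, not challenge data.

      import numpy as np

      def dice(pred, gt, label):
          """Dice overlap for a single integer label."""
          p, g = (pred == label), (gt == label)
          denom = p.sum() + g.sum()
          return 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0

      def per_structure_dice(pred, gt, labels):
          """Dice score for each annotated structure in a labeled volume."""
          return {lab: dice(pred, gt, lab) for lab in labels}

      # Toy example: a large "easy" organ (label 1) and a small "hard" structure (label 2);
      # the prediction is the ground truth shifted by one voxel.
      gt = np.zeros((32, 32, 32), dtype=np.int32)
      gt[4:12, 4:12, 4:12] = 1
      gt[20:24, 20:24, 20:24] = 2
      pred = np.roll(gt, shift=1, axis=0)
      print(per_structure_dice(pred, gt, labels=[1, 2]))  # the small structure's Dice drops more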

All challenge papers must be submitted via the following link:
https://cmt3.research.microsoft.com/ISBI2024/Submission/Index

  • Authors should choose their Challenge as the primary subject area on the CMT system when submitting their paper
  • Authors are expected to prepare their paper according to the ISBI full-paper guidelines: https://biomedicalimaging.org/2024/authors-instructions/. Specifically, a valid challenge paper should include the following sections: Introduction, Data description, Method description, Cross-validation results on training data and results on validation data, Discussion, and Conclusion. (Results on the final testing set will not be included in the challenge papers.)

Schedule

Jan 8 – Initiation of Challenge

  • Challenge website and training data are publicly available
  • Challenge participants develop their methods and submit them for evaluation
  • Evaluation is carried out by challenge organizers

Apr 6 – Paper submission deadline

  • Challenge participants submit papers on the CMT platform under the challenge track
  • Review period is open for 2 weeks
  • Reviewers are assigned by the organizers of each challenge
  • Metareview is carried out by the organizers of each challenge
  • Method evaluations are ongoing (i.e., the challenge is still running)

Apr 20 – Challenge paper authors receive reviews through CMT platform

  • Method evaluations are no longer accepted
  • Organizers of each challenge notify challenge paper authors that they have 1 week to address comments and to add final methodological changes
  • Organizers of each challenge invite submitters of accepted papers to:
    • Present their method at the conference (oral or poster)
    • Be co-authors of the meta-analysis journal manuscript
  • Organizers of each challenge involve the challenge chairs in the author invitations to facilitate a “Letter for VISA” for challenge participants

Apr 27 – (FINAL – Camera-ready) Revised paper submissions deadline

  • 1 week for final review/acceptance by the organizers of each challenge
  • Organizers of each challenge perform ranking of submissions

May 6 – Notification of oral versus poster presenters

  • Note that oral presentations are reserved for top-performing and particularly interesting methods

May 27 – Conference – Announcement of winners