IEEE ISBI 2022

Challenges


We are very excited to announce that this year seven challenges were selected to be presented at ISBI 2022. The challenges were selected based on the diversity of problems, their clinical relevance, the size of the data, and the rigor of the evaluation plan. You can read about each challenge and find links to their detailed pages below.

  1. Endoscopic computer vision challenges 2.0
    • Abstract

      Computer-aided systems can help guide both expert and trainee endoscopists to obtain consistently high-quality surveillance and to detect, localize and segment the widely known cancer precursor lesions, “polyps”. While deep learning has been successfully applied in medical imaging, generalization is still an open problem. The generalizability issues of deep learning models need to be clearly defined and tackled to build more reliable technology for clinical translation. Inspired by the enthusiasm of participants in our previous challenges, this year we put forward 2.0 versions of two sub-challenges: Endoscopy Artefact Detection (EAD 2.0) and Polyp Generalization (PolypGen 2.0).

      Both sub-challenges consist of multi-center, diverse-population datasets with tasks for both detection and segmentation, but focus on assessing the generalizability of algorithms. In this challenge, we aim to add more sequence/video data and multimodal data from different centers. Participants will be evaluated on both the standard and generalization metrics presented in our previous challenges (one possible generalization measure is sketched after this challenge's details). However, unlike the previous challenges, in 2.0 we will benchmark methods on a larger test set comprising mostly video sequences, as in a real-world clinical scenario.

    Publication Policy: All peer-reviewed and accepted papers will appear in the online CEUR proceedings (only invited participants will contribute to the joint publication, selected based on method novelty and leaderboard scores).

    Organizers:

    • Noha Ghatwary (Computer Engineering Department, Arab Academy for Science and Technology, Egypt) nohaghatwary@gmail.com

    Keywords: Endoscopy, detection, segmentation, artefact, polyps
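
    The abstract above mentions both standard and generalization metrics. As a minimal, purely illustrative sketch, the snippet below shows one way a generalization gap could be quantified from per-center Dice scores; the center names, example scores, and the gap definition are assumptions, not the official EndoCV 2.0 evaluation protocol.

```python
# Illustrative sketch only: "center_*" names, the example scores, and the gap
# definition are assumptions, not the official EndoCV 2.0 evaluation protocol.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return (2.0 * np.logical_and(pred, gt).sum() + eps) / (pred.sum() + gt.sum() + eps)

def generalization_gap(per_center_dice: dict, seen_centers: set) -> float:
    """Mean Dice on centers seen during training minus mean Dice on unseen centers."""
    seen = [d for c, d in per_center_dice.items() if c in seen_centers]
    unseen = [d for c, d in per_center_dice.items() if c not in seen_centers]
    return float(np.mean(seen) - np.mean(unseen))

# Toy usage with hypothetical per-center segmentation scores.
scores = {"center_1": 0.81, "center_2": 0.78, "center_3": 0.66}
print(generalization_gap(scores, seen_centers={"center_1", "center_2"}))  # ~0.135
```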

  2. BRIGHT Challenge: BReast tumor Image classification on Gigapixel HisTopathological images
    • Abstract

      The aim of the BRIGHT challenge is to provide an opportunity for the development, testing and evaluation of Artificial Intelligence (AI) models for automatic breast tumor subtyping of frequent lesions along with rare pathologies, using clinical Hematoxylin & Eosin (H&E)-stained gigapixel Whole-Slide Images (WSIs). To this end, a large annotated cohort of WSIs, which includes Noncancerous (Pathological Benign, Usual Ductal Hyperplasia), Precancerous (Flat Epithelial Atypia, Atypical Ductal Hyperplasia) and Cancerous (Ductal Carcinoma in Situ, Invasive Carcinoma) categories, will be available. BRIGHT is the first breast tumor subtyping challenge that includes atypical lesions and consists of more than 550 annotated WSIs across a wide spectrum of tumor subtypes. The challenge includes two tasks: (1) WSI classification into three classes as per cancer risk, and (2) WSI classification into six fine-grained lesion subtypes (the relation between the two tasks is sketched after this challenge's details).

    Publication Policy: We plan two publications. The first will be the post-workshop proceedings, where short papers describing the methods and results of ALL participants will be published. The second publication will describe the work of the top performers in detail, as well as the aim, results and lessons learned from the BRIGHT challenge, and will be submitted to a top journal. The journal will be chosen based on the challenge’s outcomes and feedback from domain experts.

    Organizers:

    Keywords: Computational Pathology, WSI-Classification, Atypias, Breast tumor Subtyping
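
    The two BRIGHT tasks are tied by the subtype grouping stated in the abstract. The minimal sketch below illustrates how a six-class (Task 2) prediction collapses onto the three cancer-risk classes (Task 1); the abbreviations and label encoding are illustrative assumptions, not the official challenge format.

```python
# Illustrative sketch only: abbreviations stand for the six lesion categories named
# in the abstract; the official BRIGHT label encoding may differ.
SUBTYPE_TO_RISK = {
    "PB":   "Noncancerous",  # Pathological Benign
    "UDH":  "Noncancerous",  # Usual Ductal Hyperplasia
    "FEA":  "Precancerous",  # Flat Epithelial Atypia
    "ADH":  "Precancerous",  # Atypical Ductal Hyperplasia
    "DCIS": "Cancerous",     # Ductal Carcinoma in Situ
    "IC":   "Cancerous",     # Invasive Carcinoma
}

def to_risk_class(subtype_prediction: str) -> str:
    """Collapse a six-class (Task 2) WSI prediction into the three-class (Task 1) label."""
    return SUBTYPE_TO_RISK[subtype_prediction]

print(to_risk_class("ADH"))  # -> Precancerous
```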

  3. AIROGS: Artificial Intelligence for RObust Glaucoma Screening Challenge
    • Abstract

      Glaucoma is a leading cause of irreversible blindness and impaired vision. Early detection of this disease, which could be facilitated through glaucoma screening, can prevent visual impairment. Glaucomatous patients can be identified with the use of color fundus photography (CFP). The analysis of CFP images by human experts, however, is a highly costly procedure. Artificial intelligence (AI) could increase the cost-effectiveness of glaucoma screening by reducing the need for this manual labor. AI approaches for glaucoma detection from CFP have been proposed and promising at-the-lab performance has been reported. However, large performance drops often occur when AI solutions are applied in real-world settings. Unexpected out-of-distribution data and poor-quality images are major causes of this performance drop. To initiate the development of solutions that are robust to real-world scenarios, we propose the Artificial Intelligence for RObust Glaucoma Screening (AIROGS) challenge, for which we provide a large screening dataset with around 114,000 images from about 60,000 patients. We split the data into a training set with about 102,000 gradable images (from referable and non-referable glaucomatous eyes) and a closed test set with approximately 12,000 (both gradable and ungradable) images. To encourage the development of methodologies with inherent robustness mechanisms, we do not include ungradable data in the training data. To test robustness, we will evaluate the ability of solutions to distinguish gradable from ungradable images. Furthermore, glaucoma screening performance will be assessed by considering the detection performance for referable glaucoma in gradable data (both parts of this evaluation are sketched after this challenge's details).

    Publication Policy: There will be a publication about the results of the challenge.

    Organizers:

    • Coen de Vente (Quantitative Healthcare Analysis (QurAI) Group, Informatics Institute, Universiteit van Amsterdam, Amsterdam, Noord-Holland, Netherlands; Department of Biomedical Engineering and Physics, Amsterdam UMC Locatie AMC, Amsterdam, Noord-Holland, Netherlands; Diagnostic Image Analysis Group (DIAG), Department of Radiology and Nuclear Medicine, Radboudumc, Nijmegen, Gelderland, Netherlands) c.w.devente@uva.nl
    • Koenraad A. Vermeer (Rotterdam Ophthalmic Institute, Rotterdam Eye Hospital, Rotterdam, Netherlands) koen@vermeer.tv
    • Bram van Ginneken (Diagnostic Image Analysis Group (DIAG), Department of Radiology and Nuclear Medicine, Radboudumc, Nijmegen, Gelderland, Netherlands) bram.vanginneken@radboudumc.nl
    • Clara I. Sánchez (Quantitative Healthcare Analysis (QurAI) Group, Informatics Institute, Universiteit van Amsterdam, Amsterdam, Noord-Holland, Netherlands; Department of Biomedical Engineering and Physics, Amsterdam UMC Locatie AMC, Amsterdam, Noord-Holland, Netherlands) c.i.sanchezgutierrez@uva.nl

    Keywords: Artificial intelligence, color fundus photography, glaucoma, screening, robustness
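
    The AIROGS evaluation described above has two parts: separating gradable from ungradable images, and detecting referable glaucoma within the gradable ones. The minimal sketch below illustrates that split, using ROC AUC purely as a stand-in metric; the official challenge metrics may differ.

```python
# Illustrative sketch only: ROC AUC is a stand-in here; the official AIROGS metrics
# for robustness and screening performance may differ.
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(y_referable, p_referable, y_ungradable, p_ungradable):
    """y_* are binary per-image ground-truth labels, p_* are model scores in [0, 1]."""
    y_referable, p_referable = np.asarray(y_referable), np.asarray(p_referable)
    y_ungradable, p_ungradable = np.asarray(y_ungradable), np.asarray(p_ungradable)

    # Robustness: can the model separate gradable from ungradable images?
    robustness = roc_auc_score(y_ungradable, p_ungradable)

    # Screening: referable-glaucoma detection, restricted to gradable images.
    gradable = y_ungradable == 0
    screening = roc_auc_score(y_referable[gradable], p_referable[gradable])
    return {"robustness_auc": float(robustness), "screening_auc": float(screening)}
```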

  4. KNIGHT Challenge: Kidney clinical Notes and Imaging to Guide and Help personalize Treatment and biomarkers discovery
    • Abstract

      The aim of the KNIGHT challenge is to facilitate the development of Artificial Intelligence (AI) models for automatic preoperative prediction of the risk class of patients with renal masses identified in clinical Computed Tomography (CT) imaging of the kidneys. The dataset, which we name the Kidney Classification (KiC) dataset, is based on the 2021 Kidney and Kidney Tumor Segmentation challenge (KiTS) and extended to include additional CT phases and clinical information, as well as risk classification labels derived from postoperative pathology results. Some of the clinical information will also be available for inference. The patients are classified into five risk groups in accordance with American Urological Association (AUA) guidelines. These groups can be divided into two classes based on the follow-up treatment (the relation between the two groupings is sketched after this challenge's details). The challenge consists of three tasks: (1) binary patient classification as per the follow-up treatment, (2) fine-grained classification into five risk groups, and (3) discovery of prognostic biomarkers.

    Publication Policy: Authors of compelling discoveries will be invited to further develop their results and potentially also share their insights in a joint publication.

    Organizers:

    • Nicholas Heller (University of Minnesota, Minneapolis, United States) helle246@umn.edu
    • Resha Tejpaul (University of Minnesota, Minneapolis, United States) teipa005@umn.edu
    • Michal Rosen-Zvi (IBM Research – Haifa, Israel; The Hebrew University, Jerusalem, Israel) rosen@il.ibm.com

    Keywords: Radiology, KiTS, CT, Renal Cancer, Accelerated Discovery, KiC
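
    The abstract above notes that the five AUA risk groups can be divided into two classes by follow-up treatment. The minimal sketch below shows how a five-way (Task 2) prediction could be collapsed into the binary (Task 1) label; the group names and the split are hypothetical placeholders, not the official KiC definitions.

```python
# Illustrative sketch only: group names and the binary split are hypothetical
# placeholders, not the official KNIGHT/KiC label definitions.
FOLLOW_UP_CLASS = {
    "benign": 0,              # class 0: routine follow-up (hypothetical split)
    "low_risk": 0,
    "intermediate_risk": 0,
    "high_risk": 1,           # class 1: more intensive follow-up treatment
    "very_high_risk": 1,
}

def to_binary_label(risk_group: str) -> int:
    """Collapse a five-way risk prediction (Task 2) into the binary Task 1 label."""
    return FOLLOW_UP_CLASS[risk_group]

print(to_binary_label("intermediate_risk"))  # -> 0
```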

  5. BRAin Tumor Sequence REGistration Challenge (BraTS-Reg): Establishing Correspondence between Pre-Operative and Follow-up MRI
    • Abstract

      Registration of Magnetic Resonance Imaging (MRI) scans containing pathologies is challenging due to tissue appearance changes and remains an unsolved problem. We organize the first Brain Tumor Sequence Registration (BraTS-Reg) challenge, focusing on estimating correspondences between baseline pre-operative and follow-up scans of the same patient diagnosed with a brain glioma. The BraTS-Reg challenge intends to establish a benchmark environment for deformable registration algorithms. The dataset associated with this challenge comprises de-identified multi-institutional multi-parametric MRI (mpMRI) data, curated for each scan’s size and resolution according to a common anatomical template. The clinical experts of our team have generated extensive annotations of landmark points within the scans. The “training data”, along with these ground-truth annotations, will be released to participants to design their registration methods, whereas annotations of the “validation” and “test” data will be withheld by the organizers and used to evaluate the containerized algorithms of the participants. We will conduct a quantitative evaluation of the submitted algorithms using several metrics, such as Median Absolute Error and Robustness (sketched after this challenge's details).

    Publication Policy: Each participating team should submit a short manuscript describing their algorithm in detail. The organizers will review the paper for the details required to understand and reproduce the algorithm. We intend to coordinate a meta-analysis manuscript in one of the reputed journals in the domain, describing the challenge design, data and clinical relevance, and summarizing the results and insights of the challenge.

    Organizers:

    Keywords: Registration, Glioma, MRI, Longitudinal
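
    The metrics named in the abstract are landmark-based. The minimal sketch below assumes landmarks are given as N x 3 coordinate arrays in a common space and shows one plausible reading of Median Absolute Error and Robustness; the exact BraTS-Reg definitions may differ.

```python
# Illustrative sketch only: landmarks are assumed to be N x 3 coordinate arrays in a
# common space; the exact BraTS-Reg metric definitions (especially Robustness) may differ.
import numpy as np

def median_absolute_error(warped: np.ndarray, target: np.ndarray) -> float:
    """Median Euclidean distance between registered and ground-truth landmark points."""
    return float(np.median(np.linalg.norm(warped - target, axis=1)))

def robustness(initial: np.ndarray, warped: np.ndarray, target: np.ndarray) -> float:
    """Fraction of landmarks whose error decreased relative to no registration
    (one plausible reading of 'Robustness')."""
    before = np.linalg.norm(initial - target, axis=1)
    after = np.linalg.norm(warped - target, axis=1)
    return float(np.mean(after < before))
```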

  6. CoNIC: Colon Nuclei Identification and Counting Challenge
    • Abstract

      Nuclear segmentation, classification and quantification within Haematoxylin & Eosin-stained histology images enables the extraction of interpretable cell-based features that can be used in downstream explainable models in computational pathology (CPath). To help drive forward research and innovation for automatic nuclei recognition in CPath, we organise the Colon Nuclei Identification and Counting (CoNIC) Challenge. The challenge requires researchers to develop algorithms that perform segmentation, classification and counting of 6 different types of nuclei within the largest currently known publicly available nuclei-level dataset in CPath, containing around half a million labelled nuclei (the counting task is sketched after this challenge's details).

    Publication Policy: There will be a joint publication submitted to a top-tier medical image analysis journal, describing the challenge and results.

    Organizers:

    Keywords: Computational Pathology, segmentation, classification, regression, histology
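
    The CoNIC counting task scores per-image, per-class nucleus counts derived from the segmentation and classification output. The minimal sketch below shows that derivation; the class names are placeholders and not necessarily the official CoNIC categories.

```python
# Illustrative sketch only: class names are placeholders for the six nucleus types;
# the official CoNIC classes and counting metric may differ.
from collections import Counter

NUCLEUS_TYPES = ["neutrophil", "epithelial", "lymphocyte",
                 "plasma", "eosinophil", "connective"]

def count_nuclei(predicted_types: list) -> dict:
    """Turn per-nucleus type predictions for one image into per-class counts,
    i.e. the quantity the counting (regression) task is scored on."""
    counts = Counter(predicted_types)
    return {t: counts.get(t, 0) for t in NUCLEUS_TYPES}

print(count_nuclei(["epithelial", "epithelial", "lymphocyte"]))
```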

Challenge Chairs