COVID-19 Lung CT Lesion Segmentation Challenge - 2020 

(COVID-19-20)

The post-challenge phase is now open.

The data and requirements are the same as in Phase I. The test data will be kept for future challenges.


The challenge paper is online.

The manuscript giving an overview and the major outcomes of the COVID-19 Lung CT Lesion Segmentation Challenge – 2020, including the data, winning approaches, and statistical analysis of results, is now published in Medical Image Analysis.

All publications and outcomes using resources from the COVID-19-20 challenge should cite this paper:

Rapid artificial intelligence solutions in a pandemic—The COVID-19-20 Lung CT Lesion Segmentation Challenge

Holger Roth, Ziyue Xu, Carlos Tor Diez, Ramon Sanchez Jacob, Jonathan Zember, Jose Molto, Wenqi Li, Sheng Xu, Baris Turkbey, Evrim Turkbey, Dong Yang, Ahmed Harouni, Nicola Rieke, Shishuai Hu, Fabian Isensee, Claire Tang, Qinji Yu, Jan Sölter, Tong Zheng, Vitali Liauchuk, Ziqi Zhou, Jan Moltz, Bruno Oliveira, Yong Xia, Klaus Maier-Hein, Qikai Li, Andreas Husch, Luyang Zhang, Vassili Kovalev, Li Kang, Alessa Hering, João Vilaça, Mona Flores, Daguang Xu, Bradford Wood, Marius Linguraru

Medical Image Analysis 82 (2022): 102605.

BibTeX:

@article{roth2022rapid,
  title={Rapid artificial intelligence solutions in a pandemic—The COVID-19-20 Lung CT Lesion Segmentation Challenge},
  author={Roth, Holger R and Xu, Ziyue and Tor-D{\'\i}ez, Carlos and Jacob, Ramon Sanchez and Zember, Jonathan and Molto, Jose and Li, Wenqi and Xu, Sheng and Turkbey, Baris and Turkbey, Evrim and others},
  journal={Medical image analysis},
  volume={82},
  pages={102605},
  year={2022},
  publisher={Elsevier}
}

The final ranking and the winners of the challenge were unveiled during the Mini-Symposium held on January 11th, 2021. More information is available under Mini-Symposium and Challenge Final Ranking. The leaderboards for the Validation and Test Phases are also available on this website.


Important: the final ranking for both phases will be based on the LAST SUBMISSION (time-wise), so please choose your final submission carefully. The leaderboard is for reference only, as it is based solely on Dice scores, which do not reflect the multi-metric scores used for the final ranking.


PLEASE CHECK THE FAQ PAGE FOR NEWS AND ANSWERS TO COMMON QUESTIONS.

Background

The COVID-19 pandemic has had a devastating impact on the health of individuals worldwide. The manifestation of the viral infection in the lung has been one of the earliest indicators of disease and may play an important role in the clinical management of patients. Ground-glass opacities are the most common finding in COVID-19 lung computed tomography (CT) images; they are usually multifocal, bilateral, and peripheral. However, the type, size, and distribution of lung lesions may vary with patient age and with the severity or stage of the disease.


The COVID-19-20 challenge will provide a platform to evaluate emerging methods for the segmentation and quantification of lung lesions caused by SARS-CoV-2 infection from CT images. The images are multi-institutional and multi-national, and originate from patients of different ages and genders with variable disease severity.

Timeline

November 2nd, 2020 (11:59PM GMT): Launch of challenge and release of training and validation data.
December 8th, 2020 (11:59PM GMT): Release of test data.
December 11th, 2020 (11:59PM GMT): Deadline for submission of test results and abstract.
December 18th, 2020: Test results (THIS IS NOT THE FINAL RANKING) and finalist teams will be available online.
January 11th, 2021 (tentative): Final ranking will be announced and top ten ranking teams will present their methods in a live virtual meeting. Details will follow soon.

Process

Data access: Access will be provided through online registration. Datasets will be accessible via APIs after the data access agreement is signed and archived. We are hosting the challenge on a widely used competition website (grand-challenge.org), which provides easy and secure data access control.

Submission of results: Participants will upload their results on the validation data to the hosting website for evaluation. Only (semi-)automated methods are allowed; submission of manual annotations is prohibited. Submissions are limited to one per user per day so that participants can fine-tune their algorithms on the validation data. Results on the test data will be collected without a public leaderboard, and the last submission will be used for the final ranking. The test phase is open only to participants who have already submitted results on the validation set. The final ranking will include only participants who also submitted the abstract and other required information. The leaderboard will be public and hosted on the challenge website.

Notes for submission:
- Make sure the headers and size of each mask align with the corresponding image; otherwise the results will be automatically rejected by the system (a local sanity check is sketched below).
- Make sure there are no blank masks of the same size in the submission; otherwise the system will recognize them as duplicated files and automatically reject the submission.
- Please add either "Fully-automated" or "Semi-automated" as the "comment" during submission; please do not write anything else free-style, such as "test" or "try".
A sample submission file link has been added on the download page.
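
As a sanity check before uploading, it may help to verify locally that each predicted mask matches its source CT in shape and header. Below is a minimal sketch using nibabel; the file names are hypothetical, and this is not the challenge's own validation code:

import nibabel as nib
import numpy as np

def check_mask(image_path, mask_path):
    # Verify that a predicted mask matches its source CT in shape and header.
    image, mask = nib.load(image_path), nib.load(mask_path)

    # Shape mismatches are a common cause of automatic rejection.
    assert image.shape == mask.shape, f"shape mismatch: {image.shape} vs {mask.shape}"

    # The affine (orientation, spacing, origin) should also agree.
    assert np.allclose(image.affine, mask.affine, atol=1e-3), "affine mismatch"

    # An all-zero mask may be treated as a duplicate of other blank masks.
    if not np.asanyarray(mask.dataobj).any():
        print(f"warning: {mask_path} is entirely blank")

# Hypothetical file names; substitute your own case IDs.
check_mask("case_ct.nii.gz", "case_seg.nii.gz")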

Abstract: To be considered for evaluation on the test set and inclusion in the final leaderboard, all teams must submit an abstract including the Title, Authors and Affiliations, and Evaluation. It must be submitted before the release of the test data (submission instructions to be released soon). The body of the abstract (Evaluation) may be at most 250 words, in which participants present methodological details and the results of evaluating their method on the training and validation datasets.

Evaluation: The Normalized Surface Distance, Dice Similarity Coefficient, and Absolute Volumetric Lesion Error between the (semi-)automated results and the ground truth will be used to evaluate results on the test data for registered participants who also submitted an abstract. For the final ranking, the different metrics will be combined into a single performance score following the statistical analysis methodology used for the Medical Segmentation Decathlon (MSD), described at [medicaldecathlon.com].
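
For orientation, the Dice Similarity Coefficient and the absolute volumetric lesion error can be computed from binary masks as in the plain-NumPy sketch below. This is an illustration, not the official evaluation code; the Normalized Surface Distance additionally requires surface extraction and a distance tolerance, for which libraries such as MONAI provide implementations:

import numpy as np

def dice_coefficient(pred, gt):
    # Dice = 2 * |P & G| / (|P| + |G|) on binary masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

def abs_volume_error_ml(pred, gt, spacing_mm):
    # Absolute lesion volume difference in millilitres, given voxel spacing in mm.
    voxel_ml = float(np.prod(spacing_mm)) / 1000.0  # 1 mL = 1000 mm^3
    return abs(int(pred.sum()) - int(gt.sum())) * voxel_ml

# Toy example on random volumes; real masks come from the challenge data.
rng = np.random.default_rng(0)
pred = rng.random((64, 64, 64)) > 0.5
gt = rng.random((64, 64, 64)) > 0.5
print(dice_coefficient(pred, gt), abs_volume_error_ml(pred, gt, (0.8, 0.8, 5.0)))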

Baseline: To allow a quick start with the challenge data and tasks, the MONAI team has released a baseline segmentation model built with MONAI, containing simple data preprocessing, model training, validation, and inference pipelines.
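
For illustration only, the fragment below sketches the kind of MONAI training step such a baseline builds on: a 3D UNet trained with a Dice loss on image patches. The architecture, patch size, and hyperparameters here are assumptions made for the sketch, not the baseline's actual configuration:

import torch
from monai.networks.nets import UNet
from monai.losses import DiceLoss

# A small 3D UNet; the baseline's actual channels/strides may differ.
model = UNet(
    spatial_dims=3,
    in_channels=1,     # single CT channel
    out_channels=2,    # background / lesion
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a random patch (a real pipeline would
# load CT volumes, resample, and crop patches via MONAI transforms).
image = torch.rand(1, 1, 96, 96, 96)               # (batch, channel, D, H, W)
label = torch.randint(0, 2, (1, 1, 96, 96, 96))    # binary lesion mask

optimizer.zero_grad()
loss = loss_fn(model(image), label)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")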

Notes:
  • Teams may not register multiple times to circumvent submission limits in any phase of the challenge. No violation will be tolerated, and offending teams' participation in the challenge will be terminated.
  • Use of external data is allowed and must be disclosed in the abstract that will accompany the submission of the test results.
  • You may use a general-purpose pre-trained network for initialization (e.g., a network pre-trained for another lung disease); this will be considered use of external data, as above. If the pre-trained network is public (e.g., ImageNet pre-trained weights), it will not qualify as external data, but it should still be disclosed. The exception is the use of public networks pre-trained for the segmentation of COVID-19 lesions, which is not allowed (e.g., Clara_train_covid19_ct_lesion_seg).
  • To participate in the evaluation, teams must join before the release of the test data. Only teams that submit results on the validation set will be able to evaluate on the test set.