COVID-19 Lung CT Lesion Segmentation Challenge - 2020
The final ranking and the winners of the challenge were unveiled during the Mini-symposium organized on January 11th, 2021. Find more information under Mini-Symposium and Challenge Final Ranking. The Leaderboards for the Validation and Test Phases are also available on this website.
Important: the final ranking for both phases will be based on the LAST SUBMISSION (time-wise), so please choose your final submission carefully. The Leaderboard is for reference only, as it is based solely on Dice scores, which do not reflect the multi-metric scores that will be used for the final ranking.
The challenge is currently closed to new participants; please wait for the post-challenge phase to open.
PLEASE CHECK THE FAQ PAGE FOR NEWS AND ANSWERS TO COMMON QUESTIONS.
The COVID-19 pandemic has had a devastating impact on the health of individuals worldwide. The manifestation of the viral infection in the lung has been one of the earliest indicators of disease and may play an important role in the clinical management of patients. Ground glass opacities are the most common finding in COVID-19 lung computed tomography (CT) images, usually multifocal, bilateral and peripheral. However, the type, the size and distribution of the lung lesions may vary with the age of the patients and the severity or stage of the disease.
The COVID-19-20 challenge will create the platform to evaluate emerging methods for the segmentation and quantification of lung lesions caused by SARS-CoV-2 infection from CT images. The images are multi-institutional and multi-national, and originate from patients of different ages and genders, with variable disease severity.
November 2nd, 2020 (11:59PM GMT): Launch of challenge and release of training and validation data.
December 8th, 2020 (11:59PM GMT): Release of test data.
December 11th, 2020 (11:59PM GMT): Deadline for submission of test results and abstract.
December 18th, 2020: Test results (THIS IS NOT THE FINAL RANKING) and finalist teams will be available online.
January 11th, 2021 (tentative): Final ranking will be announced and top ten ranking teams will present their methods in a live virtual meeting. Details will follow soon.
Data access: Access will be provided through online registration. Datasets will be accessible via APIs after the data access agreement is signed and archived. We are hosting the challenge on a widely used competition website (grand-challenge.org), which provides easy and secure data access control.
Submission of results: Participants will upload their results on the validation data to the hosting website for evaluation. Only (semi-)automated methods are allowed; submission of manual annotations is prohibited. Each user is limited to one submission per day for fine-tuning their algorithm on the validation data. Submissions of results on the test data will be collected without showing the leaderboard, and the last submission will be used for the final ranking. The test phase is open only to participants who have already submitted their results on the validation set. The final ranking will only include participants who also submitted the abstract and other required information. The leaderboard will be public and hosted on the challenge website.
Notes for submission:
- Make sure the headers and size align with the image; otherwise, the results will be automatically rejected by the system.
- Make sure there are no blank masks of the same size in the submission; otherwise, the system will recognize them as duplicate files and automatically reject the submission.
- Please indicate either "Fully-automated" or "Semi-automated" as the "comment" during submission; do not write free-form text such as "test" or "try".
A link to a sample submission file has been added to the download page.
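Before uploading, it can help to sanity-check each mask against the rules above. The sketch below is a minimal, hypothetical pre-submission check operating on plain NumPy arrays (it assumes binary 0/1 labels; for real NIfTI submissions you would load images with a library such as nibabel and also compare affines/headers, which is not shown here):

```python
import numpy as np

def validate_mask(image: np.ndarray, mask: np.ndarray) -> list:
    """Return a list of problems found with a submission mask (empty list = OK)."""
    problems = []
    # Rule 1: mask size must align with the image, or the system rejects it.
    if mask.shape != image.shape:
        problems.append(f"shape mismatch: mask {mask.shape} vs image {image.shape}")
    # Rule 2: blank masks of the same size are flagged as duplicates.
    if not np.any(mask):
        problems.append("mask is blank (all zeros); may be rejected as a duplicate")
    # Extra sanity check (assumption): lesion masks are expected to be binary.
    if not np.isin(mask, [0, 1]).all():
        problems.append("mask is not binary (expected labels 0 and 1)")
    return problems

# Toy example: a matching non-empty mask passes, a mismatched one does not.
image = np.zeros((4, 4, 4))
good = np.zeros((4, 4, 4)); good[1, 1, 1] = 1
bad = np.zeros((3, 3, 3))
print(validate_mask(image, good))  # []
print(validate_mask(image, bad))
```

Running such a check on every case before zipping the submission avoids an automatic rejection after using up the day's single upload.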
Abstract: To be considered for evaluation on the test set and inclusion in the final leaderboard, all teams must submit an abstract including the Title, Authors and Affiliations, and Evaluation. It must be submitted before the release of the test data (submission instructions to be released soon). The body of the abstract (Evaluation) is limited to 250 words, in which participants present methodological details and the results of evaluating their method on the training and validation datasets.
Evaluation: The Normalized Surface Distance (NSD), Dice Similarity Coefficient and Absolute Volumetric Lesion Error between the (semi-)automated results and the ground truth will be used to evaluate the results on the test data for registered participants who also submitted an abstract. For the final ranking, the different metrics will be combined into a single performance metric following the statistical analysis methodology used in the Medical Segmentation Decathlon (MSD) and described at [medicaldecathlon.com].
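To make the metrics concrete, here is a minimal sketch of two of the three, under common conventions (the challenge's exact definitions, including the NSD tolerance, are set by the organizers; NSD itself requires surface extraction and is omitted here):

```python
import numpy as np

def dice(gt: np.ndarray, pred: np.ndarray) -> float:
    """Dice Similarity Coefficient: 2*|A ∩ B| / (|A| + |B|), in [0, 1]."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    denom = gt.sum() + pred.sum()
    # Convention (assumption): two empty masks count as a perfect match.
    return 1.0 if denom == 0 else float(2.0 * np.logical_and(gt, pred).sum() / denom)

def abs_volume_error_ml(gt: np.ndarray, pred: np.ndarray,
                        voxel_volume_mm3: float) -> float:
    """Absolute lesion-volume difference in millilitres (1 mL = 1000 mm^3)."""
    diff_voxels = abs(int(gt.astype(bool).sum()) - int(pred.astype(bool).sum()))
    return diff_voxels * voxel_volume_mm3 / 1000.0

# Toy example: identical masks give a perfect Dice score.
g = np.ones((2, 2, 2))
print(dice(g, g))  # 1.0
```

Evaluating these on the validation leaderboard cases gives a rough sense of standing, but remember the final ranking combines all metrics, not Dice alone.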
- Teams may not register multiple times to circumvent submission limits in any phase of the challenge. No violation will be tolerated; violations will result in termination of participation in the challenge.
- Use of external data is allowed and must be disclosed in the abstract that will accompany the submission of the test results.
- You can use a general-purpose pre-trained network for initialization (e.g., a network pre-trained for another lung disease) and this will be considered as use of external data, as above. If the pre-trained network is public (e.g., ImageNet pre-trained weights) this will not qualify as external data, but it should be disclosed. The exception is the use of public networks pre-trained for the segmentation of COVID-19 lesions, which is not allowed (e.g., Clara_train_covid19_ct_lesion_seg ).
- To participate in evaluation, teams must join before the release of the test data. Only teams that submit results on the validation set will be able to evaluate on the test set.