SemiCOL Challenge Rules

The SemiCOL Challenge has three Arms. The Organizers will evaluate submissions to Arms 1-3 separately. Prizes will be awarded only in Arms 1 and 2 (semi-supervised learning-based algorithms). Arm 3 is a benchmark in which any algorithm, including commercial algorithms, can be tested.


Figure 1. Three Arms of the SemiCOL challenge. Comment: * Additional annotations (Arm 2), should these be created by participating teams, may be made only on the publicly available data and must be submitted with the algorithm (they will be made publicly available upon the Challenge end).
# Must be made available to the other participants (at least as a link to a dataset on the Challenge Forum by March 1st, 2023).

Registration, Teams

Anyone can participate in the Challenge. Participants should form teams; the minimum team size is one. Each participant may be a member of only one team. Registration is mandatory for all teams and should be carried out by a single team member, who will be responsible for communication with the Organizers. An Agreement form must be filled out and submitted during registration. Anonymous participation is not allowed: real names and affiliations must be provided, and these will be used for ranking and presented in the Leaderboards.

Participation of organizing centers

Allowed; however, teams from organizing centers directly involved in the challenge organization are not eligible for prizes.

Challenge Data access

The Challenge training data is stored on the Challenge Google Drive. Private access will be provided after team registration; ideally, a Google email account should be used for registration. The volume of the training data is approximately 750 GB.

Open development phase (January 2023 - April 1st, 2023)

The open development phase includes training of algorithms using either only Challenge data (Arm 1), or Challenge data together with any volume of additional annotations and any publicly available datasets (Arm 2; e.g., image data from The Cancer Genome Atlas / TCGA). Additional annotations are not allowed in Arm 1. Should publicly available data or datasets be used by participating teams (Arm 2), these must be disclosed by March 1st, 2023 on the Challenge Forum by providing a link to the data/dataset. Algorithms based on publicly available data/datasets that were not announced on the Challenge Forum by March 1st, 2023 will be excluded from the ranking of the best-performing algorithms in Arms 1 and 2.

Arm 2: Additional annotations

The organizers encourage participants to include pathologists in their teams. Creating additional annotations before training an algorithm is allowed for the Challenge data or any public data included. No private datasets, with or without annotations, are allowed. All additional annotations must be submitted together with the algorithm for the final evaluation and will be made publicly available upon the Challenge end.

Arms 1 and 2: Using pretrained models/weights

Using pretrained models/weights is prohibited in Arm 1. Using pretrained models/weights is allowed in Arm 2 under the following rules:
1) The data used to produce the pretrained models/weights must be public.
2) The pretrained weights must be public or made public via the Challenge Forum, analogous to using any public data (see above).

Arm 3 (Benchmark)

Arm 3 is a Benchmark to which any algorithm based on any principle, any data (including private datasets), and any amount of annotations may be submitted. The submitted algorithms will be evaluated on the Challenge private test data in the same way as in Arms 1 and 2, and the evaluation results will be published via the Leaderboard. Algorithms submitted in Arm 3 are not eligible for prizes. The organizers also encourage commercial companies to submit their algorithms to this benchmark.

Validation phase (March 1st – March 31st 2023)

Additional data will be used for validation purposes; these data will be released on March 1st, 2023. The participating teams will process the validation data using their trained algorithms and submit their predictions, which will be used for regular Leaderboard updates.


The deadline for algorithm submission is April 1st, 2023. Participants may submit no more than 5 versions of an algorithm, and no more than 5 different algorithms. For a detailed description of the submission process, see Submission.

Test phase (April 1st – May 1st 2023)

Submitted algorithms/versions will be evaluated using private test datasets. The precise methodology of evaluation is outlined in the Submission section.

Best algorithm evaluation

The organizers retain the right to completely retrain/reproduce the best-performing algorithms in Arms 1 and 2 before the winners are announced.

Prize nomination

In both Arms 1 and 2 of the Challenge, the three best-performing algorithms/teams will be awarded a prize:
1st place: 2000 Euro
2nd place: 1000 Euro
3rd place: 500 Euro

Congress presentation

The winning teams in Arms 1 and 2 will be invited to present their algorithms during a dedicated session of the European Congress of Digital Pathology 2023 (ECDP 2023).


The winning teams in Arms 1 and 2 will be listed as authors on an upcoming journal paper summarizing the results of the Challenge.

Data Usage Policy, Embargo

Use of the Challenge Data for participants' own research projects and publications is allowed starting from January 1st, 2024. Challenge Data may be used only for academic/research purposes.
