Rules

To participate, you must agree to the following rules:

  • Submission
    • Submissions take the form of Docker containers, as we cannot allow access to the test images during the competition. You will be given a template Docker container to use for this. Docker containers may not access the internet (a minimal sketch of a self-contained, offline entry point follows the Submission rules below).
    • Submissions must be self-contained, fully automated, and reproducible.
    • Only one submission per team is allowed on the final test set.
    • On the preliminary evaluation set, one submission per team per day is allowed during the 14-day period preceding the final submission phase. Please note that iteratively tuning your algorithm on the preliminary evaluation data is likely to overfit to that data set and will hurt your algorithm in the final ranking. We generally advise performing a proper train/test split on the given public data (a short example also follows the Submission rules below).
    • Use of excessive computational resources is prohibited and may lead to your containers being terminated and to your exclusion from the challenge.
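The following sketch illustrates what a self-contained, fully automated entry point inside such a container might look like. It is only an illustration under stated assumptions: the /input and /output paths, the *.mha file format, and the SimpleITK dependency are hypothetical, and the actual conventions are defined by the provided template container.

```python
# Illustrative sketch of an offline, fully automated inference entry point.
# Assumptions (not defined by the challenge rules): images are mounted at
# /input, predictions are written to /output, and inputs are *.mha files.
# Nothing here touches the network; any model weights must be baked into
# the container image at build time.
from pathlib import Path

import SimpleITK as sitk  # assumed image I/O library

INPUT_DIR = Path("/input")    # hypothetical mount point from the template
OUTPUT_DIR = Path("/output")  # hypothetical mount point from the template


def predict(image: sitk.Image) -> sitk.Image:
    """Stand-in for the actual model; replace with your offline inference."""
    return image  # identity placeholder so the sketch runs end to end


def main() -> None:
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    for path in sorted(INPUT_DIR.glob("*.mha")):
        result = predict(sitk.ReadImage(str(path)))
        sitk.WriteImage(result, str(OUTPUT_DIR / path.name))


if __name__ == "__main__":
    main()
```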
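And a minimal sketch of the advised train/test split on the public data, assuming a hypothetical public_data/images layout and an illustrative 80/20 ratio:

```python
# Minimal sketch of a held-out split on the public training data, so the
# preliminary leaderboard is not your only feedback signal. The directory
# layout, file extension, and 80/20 ratio are illustrative assumptions.
from pathlib import Path

from sklearn.model_selection import train_test_split

cases = sorted(Path("public_data/images").glob("*.mha"))  # hypothetical layout
train_cases, heldout_cases = train_test_split(
    cases, test_size=0.2, random_state=42  # fixed seed for reproducibility
)
# If several images belong to the same patient, split by patient ID instead
# (e.g., sklearn's GroupShuffleSplit) to avoid leakage between the sets.
print(f"{len(train_cases)} training cases, {len(heldout_cases)} held-out cases")
```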
  • Documentation
    • Participants are asked to publish a short description (approximately 2 pages) of their method and results on a preprint server (e.g., arXiv.org, medRxiv.org) or on a general-purpose open-access repository (e.g., Zenodo.org). A link to this publication must be submitted along with the final challenge submission. Please note that we cannot accept links to repositories that allow modifications (e.g., GitHub) or to private cloud storage platforms (e.g., Google Drive).
    • We provide a template (double-column, IEEE style) for this. While there is no strict page limit, the document must include a clear and complete description of the team’s approach. The submission will undergo a single-blind review by at least two experts. The outcome of this review will influence invitations to both the challenge proceedings and the associated workshop.
  • Tracks
    • This challenge features two tracks/tasks; participants may take part in either one or in both.
  • Additional data
    • The use of additional data sets is permitted, provided they are publicly available to all participants without conditions. Data that is fully automatically derived from existing data is considered part of the method and is thus permitted as well. The use of private data (images, or labels for existing or new images) is not allowed. If you want to make use of additional manual labels that you or your organization has access to, or of labels generated by models trained on private data, this is only possible under the following conditions:
      • The additional labels must be made available to the general public no later than July 19th (one month prior to the start of the final test phase of the challenge). Should you wish to do this, we ask that you publish the data on Zenodo with an accompanying GitHub repository that explains how to use those labels.
      • You must briefly notify the challenge organizers, so we can make this information available to everyone.
  • Conflicts of interest
    • To avoid potential conflicts of interest, researchers affiliated with the organizers’ institutes are not allowed to participate.
  • Publication policy
    • Participants may publish papers including their official performance on the challenge data set, provided the challenge is properly referenced. Please cite the official challenge design (10.5281/zenodo.15077360). There is no embargo period in that regard.
    • We aim to publish a summary of the challenge in a peer-reviewed journal. Participating teams are free to publish their own results in a separate publication.