Some recommendations for the preprint

The MIDOG 2025 submission on the final test set requires a link to a preprint by the participants, detailing their submitted method in a scientific paper format. For this, we have some recommendations:

  • The minimum size of this paper is 2 pages in IEEE double-column format. We provide a template here: https://github.com/DeepPathology/MIDOG_LaTeX_Template
  • It is required to provide a link to your preprint on a public repository, such as arXiv, bioRxiv, or Zenodo.
  • As an example of what such a paper could look like, we provide the preprint of the 2021 MIDOG baseline method by Frauke Wilm et al.; please have a look here: https://arxiv.org/pdf/2108.11269v1
  • Please be concise, but do include the important details of your machine learning pipeline, such as:
    • Implementation details: Describe the architecture used (e.g., backbone network, detection/segmentation head, ensemble strategies) and mention any relevant modifications compared to standard implementations.
    • Training protocol: Report the number of epochs, learning rate schedules, optimizers, batch sizes, and any early stopping or regularization strategies applied.
    • Evaluation protocol: Clearly state the metrics used for validation and model selection (e.g., F1-score, mAP, Dice coefficient) and describe how these metrics were computed on your validation data.
    • External data usage: If external data was used (in addition to the MIDOG training set), please state this explicitly, including the source, size, and purpose of that data. Please note that rules apply to the use of external data.
    • Reproducibility: Provide sufficient details (hyperparameters, seed fixing, preprocessing steps) to allow others to reproduce your results. You might want to check our recent paper in Veterinary Pathology for a comprehensive list of things to consider.
  • For citing the MIDOG 2025 challenge, please use the official (peer-reviewed) structured challenge design description:
    Ammeling, J., Aubreville, M., Banerjee, S., Bertram, C. A., Breininger, K., Hirling, D., Horvath, P., Stathonikos, N., & Veta, M. (2025, March). Mitosis Domain Generalization Challenge 2025. Zenodo. https://doi.org/10.5281/zenodo.15077361
  • If you used or took inspiration from our other works, here is a list of the proper citations:
    • AMi-Br Dataset:
      Bertram, C.A. et al. (2025). Histologic Dataset of Normal and Atypical Mitotic Figures on Human Breast Cancer (AMi-Br). In: Palm, C., et al. Bildverarbeitung für die Medizin 2025. BVM 2025. Informatik aktuell. Springer Vieweg, Wiesbaden.
    • MIDOG25 Atypical Dataset:
      Weiss, V., Banerjee, S., Donovan, T., Conrad, T., Klopfleisch, R., Ammeling, J., Kaltenecker, C., Hirling, D., Veta, M., Stathonikos, N., Horvath, P., Breininger, K., Aubreville, M., & Bertram, C. (2025). A dataset of atypical vs normal mitoses classification for MIDOG – 2025 [Data set]. Zenodo. https://doi.org/10.5281/zenodo.15188326
    • MIDOG++ Dataset:
      Aubreville, M., Wilm, F., Stathonikos, N., Breininger, K., Donovan, T. A., Jabari, S., … & Bertram, C. A. (2023). A comprehensive multi-domain dataset for mitotic figure detection. Scientific Data, 10(1), 484. https://doi.org/10.1038/s41597-023-02327-4
    • MITOS_WSI_CMC Dataset:
      Aubreville, M., Bertram, C. A., Donovan, T. A., Marzahl, C., Maier, A., & Klopfleisch, R. (2020). A completely annotated whole slide image dataset of canine breast cancer to aid human breast cancer research. Scientific Data, 7(1), 417. https://doi.org/10.1038/s41597-020-00756-z
    • MITOS_WSI_CCMCT Dataset:
      Bertram, C.A., Aubreville, M., Marzahl, C. et al. (2019). A large-scale dataset for mitotic figure assessment on whole slide images of canine cutaneous mast cell tumor. Scientific Data, 6, 274. https://doi.org/10.1038/s41597-019-0290-4
    • Atypical Classifier by Sweta Banerjee:
      Banerjee, S., Weiss, V., Conrad, T., Donovan, T. A., Ammeling, J., Fick, R. H. J., Utz, J., Klopfleisch, R., Kaltenecker, C., Bertram, C., Breininger, K., & Aubreville, M. (2025). Chromosome Mask-Conditioned Generative Inpainting for Atypical Mitosis Classification. MICCAI Workshop on Computational Pathology with Multimodal Data (COMPAYL). https://openreview.net/forum?id=cbQ4fL2Wap
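To make the evaluation protocol point above concrete, here is a minimal sketch of a detection F1 computed by greedily matching predicted to annotated mitosis centers within a fixed pixel radius. The radius and the greedy matching rule below are illustrative assumptions, not the official evaluation code:

```python
import math

def detection_f1(preds, gts, radius=30.0):
    """F1 for point-based mitosis detection.

    preds, gts: lists of (x, y) center coordinates in pixels.
    A prediction counts as a true positive if it can be matched
    one-to-one to a ground-truth center within `radius` pixels.
    """
    unmatched_gts = list(gts)
    tp = 0
    for px, py in preds:
        best, best_d = None, radius
        for gx, gy in unmatched_gts:
            d = math.hypot(px - gx, py - gy)
            if d <= best_d:
                best, best_d = (gx, gy), d
        if best is not None:
            unmatched_gts.remove(best)  # each ground truth matched once
            tp += 1
    fp = len(preds) - tp
    fn = len(unmatched_gts)
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
```

For example, one matched pair plus one false positive and one missed mitosis yields F1 = 2·1 / (2·1 + 1 + 1) = 0.5.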

Template for Technical Report/Abstract

For the final submission to MIDOG, participants are required to provide, together with their submission, a link to a preprint with a brief (approx. 2 pages) description of their method and results on a preprint site (such as arxiv.org or medrxiv.org). As outlined in the rules, we will not accept publications hosted on private file repositories, such as private or institutional websites or cloud file storage (e.g., Google Drive, Baidu Wangpan, etc.).

We provide a template (double column, IEEE style) for this. There is no explicit page limit for the description, but it has to include a complete description of the participating team's approach.

As in previous MIDOG editions, we plan to publish challenge proceedings. Preprints will undergo review, and participants will be invited to present at the MICCAI MIDOG workshop and contribute to the proceedings based on the review outcome.

For the proceedings, the highest-rated challenge submissions will be eligible for full-paper contributions (up to 8 pages), while all accepted workshop papers will be eligible for short-paper contributions (4 pages).

A clarification on the track 1 output format

We’ve had questions regarding the output format for track 1, which has caused some confusion amongst our participants. We want to clarify this in the following post.

Each point in the expected output format, which we also detail in our template here, looks like this:
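As a sketch of one such detection entry, built in Python for illustration: the field names follow the grand-challenge multiple-points convention, but please check the template repository for the authoritative format.

```python
import json

# Illustrative single detection entry; the template repository
# is the authoritative reference for the exact schema.
detection = {
    "name": "mitotic figure",       # or "non-mitotic figure" (below threshold)
    "point": [1024.0, 512.0, 0.0],  # x, y, z coordinates in the image
    "probability": 0.92,            # model confidence for this detection
}
print(json.dumps(detection, indent=2))
```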

A cause of confusion seems to be that here, the detections either carry the “name” of “mitotic figure” or of “non-mitotic figure”.

The MIDOG challenge is a one-class object detection problem: in terms of detection, we are only interested in mitotic figures, not in non-mitotic figures, which are also provided in parts of our datasets as “hard examples”. It is also worth knowing that the “non-mitotic figure” annotations of our datasets are not complete, i.e., they do not cover every possible non-mitotic figure, and are thus not even suitable for training a two-class object detector.

Why, then, do we need this field if we are only looking for “mitotic figure”s? We use it to distinguish above-threshold from below-threshold detections. In all major object detection frameworks, each detection comes with a confidence value, and the final step in optimization is always to determine a detection threshold. Effectively, this threshold trades false positives for false negatives, and thus sets the operating point of the model.

Some metrics, like average precision, however, require the complete (i.e., unthresholded) set of predictions.
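As a sketch of why the full list matters: average precision sweeps over all confidence thresholds at once. The simplified version below assumes each prediction is already marked as matched or unmatched, rather than performing the challenge's distance-based matching:

```python
def average_precision(confidences, is_tp, n_gt):
    """AP over the full, unthresholded prediction list.

    confidences: model scores, one per prediction.
    is_tp: whether each prediction matches a ground-truth mitosis.
    n_gt: total number of ground-truth mitoses.
    """
    order = sorted(range(len(confidences)), key=lambda i: -confidences[i])
    tp = fp = 0
    ap = 0.0
    for i in order:
        if is_tp[i]:
            tp += 1
            precision = tp / (tp + fp)
            ap += precision / n_gt  # area gained at each recall step
        else:
            fp += 1
    return ap
```

Thresholding the list before submission would truncate this sweep and artificially lower the metric, which is why all detections should be reported.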

So, in a nutshell:

  • Provide all your detections in the list
  • Indicate which ones are true detections (i.e., above threshold) via the field “name”: above-threshold detections need to carry the name “mitotic figure”, while below-threshold detections carry the name “non-mitotic figure”.
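The relabeling step above can be sketched as follows; the threshold value is a hypothetical choice that each team should tune on their own validation data:

```python
def label_detections(detections, threshold=0.5):
    """Assign the "name" field based on confidence.

    Above-threshold detections become "mitotic figure", the rest
    "non-mitotic figure" -- but all detections stay in the list, so
    threshold-free metrics like average precision remain computable.
    """
    for det in detections:
        det["name"] = ("mitotic figure"
                       if det["probability"] >= threshold
                       else "non-mitotic figure")
    return detections
```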

Preliminary evaluation phase coming up

The preliminary evaluation phase for MIDOG 2025, this year’s MICCAI challenge, is coming up in less than four days. Time to talk about what it is for, and what it is not for:

– Docker submission is not easy, and we know it. This is why we prepared templates to fill out so everything runs smoothly. Still, there are a thousand reasons why it might not. And this is why we have the preliminary evaluation phase: participants can see whether their solution works.

– What this phase is explicitly not meant for is optimizing algorithms on it. We know it’s tempting to try to one-up your own score on the public leaderboard of the preliminary evaluation set, but, as previous challenges have shown, this typically won’t help you in the final evaluation, as it leads to overfitting on that dataset. The challenge test set has different statistical properties: it uses different tumor types and, in the case of track 1, even different selection criteria: the preliminary test set consists entirely of hotspot ROIs, while the final test set also contains random ROIs and challenging ROIs.

TL;DR: Smart participants don’t optimize on the preliminary evaluation set, but find a better proxy.

Docker templates available

With the MIDOG2025 challenge going into its hot phase, it is time for us to release the templates for docker submission. We do this for each of the two tracks of our challenge.

For each of the tracks, we also provide a reference baseline method in the same repository. A huge thanks to both Sweta Banerjee and Jonas Ammeling who worked so hard during the past days to make this possible today. You can find the performance of their respective methods in the leaderboards for track 1 and track 2.

How to submit: https://midog2025.deepmicroscopy.org/submitting/

Track 1 docker template (with baseline): https://github.com/DeepMicroscopy/MIDOG25_T1_reference_docker

Track 2 docker template (with baseline): https://github.com/DeepMicroscopy/MIDOG25_T2_reference_docker

MIDOG2025@grand-challenge

We are happy to share that we, again, partnered with grand-challenge.org to run our challenge. Already in previous years, this worked remarkably well, resulting in a great experience for the participants and organizers alike.

Please sign up for our challenge here: https://midog2025.grand-challenge.org

Welcome to MIDOG 2025

We are excited to welcome you to the 3rd MICCAI Digital Pathology Challenge on Mitotic Figure Detection (Mitosis Domain Generalization Challenge, MIDOG 2025)! This year, we continue our mission to advance the field of computational pathology by tackling one of the most challenging problems in deep learning for histopathology: the robust and generalizable detection of mitotic figures.

Building on the success of previous editions, MIDOG 2025 introduces new complexities and real-world challenges, pushing the boundaries of domain adaptation, generalization, and AI-assisted diagnostics. Whether you’re a researcher, a data scientist, or a student, this challenge provides a unique opportunity to develop and test state-of-the-art models on a diverse, carefully curated dataset.

Join us in shaping the future of AI in pathology! Explore the challenge details, access the dataset, and put your skills to the test. We look forward to your innovative solutions!

🚀 Are you ready to take on MIDOG 2025?

If you want to get in touch with us, you can join our Discord server.