It is now only hours until our MIDOG 2025 MICCAI workshop starts. We are very much looking forward to the onsite workshop event held in room IBS-3F-R1-R3 of the IBS building, starting on September 23rd at 08:00 AM. Note that the IBS building is not located on the main conference site (DCC1, DCC2).
The workshop is hybrid, and we welcome all interested participants to the online session. We are using Zoom for the online session. Please click here to enter the meeting. Meeting ID: 639 6157 4583, Password: 471884
Thanks to generous sponsorship by the MICCAI SIG-CompPath, we will be awarding not only a certificate but also a monetary prize to the winners of the MIDOG 2025 challenge.
The MICCAI special interest group (SIG) on computational pathology is a newly founded group centered around the idea of bringing clinical research into practice. Read more about them and their goals here: https://miccai.org/index.php/special-interest-groups/sig-comppath/
With the MIDOG 2025 submission phase now complete, we want to inform all participants about the current progress and the next steps. Over the last days, we conducted a peer review, which we are currently evaluating. Participants can expect a decision within the next few days.
Based on the peer review by at least two experts, the decision on each submission will be one of the following:
Invite for a long oral presentation (12+3 minutes)
Invite for a short oral presentation (3 minutes pitch talk)
Reject the approach
Since we had a huge number of submissions this year (n=31 papers) and the workshop is strictly limited in time, we will only invite four long oral presentations for the papers that our reviewers found most interesting.
All participants accepted to the workshop will be permitted to contribute their paper to the proceedings. We will invite the best-rated papers from the peer review to contribute long papers (8 pages), while all participants can contribute the reviewed preprint as submitted (pending minor corrections based on the review).
We want to highlight that, as we are a scientific challenge, both the invitation to the workshop and the invitation to contribute to the proceedings are based solely on the scientific quality assessed by the peer reviewers, not on the scores achieved in the challenge. This also means that the decision on who will be invited for a long oral presentation does not allow any conclusions to be drawn about a team's rank in the challenge.
The MIDOG 2025 submission on the final test set requires a link to a preprint by the participants, detailing the submitted method in a scientific paper format. Expect this to be like a short paper; we encourage the classical paper structure (Abstract, Introduction, Methods, Results, Discussion, References).
As an example of how such a paper could look, we provide the preprint of the 2021 MIDOG baseline method by Frauke Wilm et al.; please have a look here: https://arxiv.org/pdf/2108.11269v1
If you participate in both tracks, it might make sense to write individual preprints due to the limited space. However, this is not mandatory, and you can also choose to describe both approaches in a single preprint.
It is required to provide a link to your preprint on a public repository, such as arXiv, bioRxiv, or Zenodo, and not on a private repository like cloud storage (Google Drive, etc.).
We encourage you to use a real preprint server (arXiv, bioRxiv, …) rather than Zenodo, as these are indexed much better and your preprint will be much more visible there.
It is absolutely acceptable to have updated versions of the paper on arXiv if you want to submit an initial version early and provide an update later. We also recommend updating the preprint with the final metrics once these are known.
Please be concise, but include important machine learning pipeline details, such as:
Implementation details: Describe the architecture used (e.g., backbone network, detection/segmentation head, ensemble strategies) and mention any relevant modifications compared to standard implementations.
Training protocol: Report the number of epochs, learning rate schedules, optimizers, batch sizes, and any early stopping or regularization strategies applied.
Evaluation protocol: Clearly state the metrics used for validation and model selection (e.g., F1-score, mAP, Dice coefficient) and describe how these metrics were computed on your validation data.
External data usage: If external data was used (in addition to the MIDOG training set), please state this explicitly, including the source, size, and purpose of that data. Please note that rules apply to the use of external data.
Reproducibility: Provide sufficient details (hyperparameters, seed fixing, preprocessing steps) to allow others to reproduce your results. You might want to check our recent paper in Veterinary Pathology for a comprehensive list of things to consider.
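As a minimal illustration of the reproducibility point above, here is a sketch of seed fixing for a PyTorch/NumPy pipeline. This is our own example, not part of any MIDOG template; the helper name fix_seeds is hypothetical, and you should adapt it to your framework.

```python
import random

import numpy as np
import torch


def fix_seeds(seed: int = 42) -> None:
    """Fix the common sources of randomness for reproducible training."""
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy RNG
    torch.manual_seed(seed)           # CPU RNG (also seeds CUDA on recent PyTorch)
    torch.cuda.manual_seed_all(seed)  # all GPU devices
    # Deterministic cuDNN kernels trade some speed for reproducibility.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

Reporting the seed(s) you used alongside the hyperparameters makes it much easier for others to reproduce your numbers.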
For citing the MIDOG 2025 challenge, please use the official (peer reviewed) structured challenge design description: Ammeling, J., Aubreville, M., Banerjee, S., Bertram, C. A., Breininger, K., Hirling, D., Horvath, P., Stathonikos, N., & Veta, M. (2025, March). Mitosis Domain Generalization Challenge 2025. Zenodo. https://doi.org/10.5281/zenodo.15077361 [bibtex]
If you used or took inspiration from our other works, here is a list of the proper citations:
AMi-Br Dataset: Bertram, C.A. et al. (2025). Histologic Dataset of Normal and Atypical Mitotic Figures on Human Breast Cancer (AMi-Br). In: Palm, C., et al. Bildverarbeitung für die Medizin 2025. BVM 2025. Informatik aktuell. Springer Vieweg, Wiesbaden. [bibtex]
MIDOG25 Atypical Dataset: Weiss, V., Banerjee, S., Donovan, T., Conrad, T., Klopfleisch, R., Ammeling, J., Kaltenecker, C., Hirling, D., Veta, M., Stathonikos, N., Horvath, P., Breininger, K., Aubreville, M., & Bertram, C. (2025). A dataset of atypical vs normal mitoses classification for MIDOG – 2025 [Data set]. Zenodo. https://doi.org/10.5281/zenodo.15188326 [bibtex]
MIDOG++ Dataset: Aubreville, M., Wilm, F., Stathonikos, N., Breininger, K., Donovan, T. A., Jabari, S., … & Bertram, C. A. (2023). A comprehensive multi-domain dataset for mitotic figure detection. Scientific Data, 10(1), 484. https://doi.org/10.1038/s41597-023-02327-4 [bibtex]
MITOS_WSI_CMC Dataset: Aubreville, M., Bertram, C. A., Donovan, T. A., Marzahl, C., Maier, A., & Klopfleisch, R. (2020). A completely annotated whole slide image dataset of canine breast cancer to aid human breast cancer research. Scientific Data, 7(1), 417. https://doi.org/10.1038/s41597-020-00756-z [bibtex]
MITOS_WSI_CCMCT Dataset: Bertram, C.A., Aubreville, M., Marzahl, C. et al. A large-scale dataset for mitotic figure assessment on whole slide images of canine cutaneous mast cell tumor. Sci Data 6, 274 (2019). https://doi.org/10.1038/s41597-019-0290-4 [bibtex]
AtNorM-Br Dataset: Banerjee, S., Weiss, V., Donovan, T. A., Fick, R. H., Conrad, T., Ammeling, J., … & Bertram, C. A. (2025). Benchmarking Deep Learning and Vision Foundation Models for Atypical vs. Normal Mitosis Classification with Cross-Dataset Evaluation. arXiv preprint arXiv:2506.21444. [bibtex]
Atypical Classifier by Sweta Banerjee: Banerjee, S., Weiss, V., Conrad, T., Donovan, T. A., Ammeling, J., Fick, R. H. J., Utz, J., Klopfleisch, R., Kaltenecker, C., Bertram, C., Breininger, K., & Aubreville, M. (2025). Chromosome Mask-Conditioned Generative Inpainting for Atypical Mitosis Classification. MICCAI Workshop on Computational Pathology with Multimodal Data (COMPAYL). https://openreview.net/forum?id=cbQ4fL2Wap [bibtex]
For the final submission to MIDOG, participants are required to provide, together with their submission, a link to a preprint with a brief (approx. 2 pages) description of their method and results, hosted on a preprint site (such as arxiv.org or medrxiv.org). As outlined in the rules, we will not accept publications hosted on private file repositories, such as private or institutional websites or cloud file storage (e.g., Google Drive / Baidu Wangpan / etc.).
We do provide a template (double column, IEEE style) for this. There is no explicit page limit for the description, but it has to include a complete, self-contained description of the participating team's approach.
As in previous MIDOG editions, we plan to publish challenge proceedings. Preprints will undergo review, and participants will be invited to present at the MICCAI MIDOG workshop and contribute to the proceedings based on the review outcome.
For the proceedings, the highest-rated challenge submissions will be eligible for full-paper contributions (up to 8 pages), while all accepted workshop papers will be eligible for short-paper contributions (4 pages).
We’ve had questions regarding the output format for track 1, which has caused some confusion amongst our participants. We want to clarify this in the following post.
Each point in the expected output format, which we also detail in our template here, looks like this:
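(a sketch; the field names follow the grand-challenge.org multiple-points format used by our template, and the coordinate and probability values below are placeholders)

```json
{
    "name": "mitotic figure",
    "point": [1553.32, 1420.69, 0],
    "probability": 0.92
}
```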
A cause of confusion seems to be that, here, the detections either carry the “name” of “mitotic figure” or of “non-mitotic figure”.
The MIDOG challenge is a one-class object detection problem: in terms of detection, we are only interested in mitotic figures, not in non-mitotic figures, which are given in parts of our datasets as “hard examples”. It is also worth knowing that the “non-mitotic figure” annotations of our datasets are not complete, i.e., they do not cover every possible non-mitotic figure, and are thus not even suitable for training a two-class object detector.
Now why do we need this field, if we are only looking for “mitotic figure”s? We use it to distinguish above-threshold from below-threshold detections. In all major object detection frameworks, each detection carries a confidence value, and the final step in optimization is always to determine a detection threshold. Effectively, this threshold trades false positives against false negatives and thus sets the operating point of the model.
However, some metrics, like average precision, require the complete (i.e., unthresholded) set of predictions.
So, in a nutshell:
Provide all your detections in the list
Indicate which ones are above-threshold detections via the field “name”: above-threshold detections must carry the name “mitotic figure”, while below-threshold detections carry the name “non-mitotic figure”.
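To make this concrete, here is a minimal Python sketch (our own illustration; the function name detections_to_points and the input tuple format are hypothetical, not taken from the official template) that converts raw detections into entries of this format, keeping the full unthresholded list while using the “name” field to encode the operating point:

```python
def detections_to_points(detections, threshold=0.5):
    """Convert raw detections [(x, y, confidence), ...] into output
    entries. All detections are kept so that unthresholded metrics such
    as average precision can still be computed; the "name" field encodes
    the chosen operating point."""
    points = []
    for x, y, confidence in detections:
        points.append({
            "name": "mitotic figure" if confidence >= threshold
                    else "non-mitotic figure",
            "point": [float(x), float(y), 0],
            "probability": float(confidence),
        })
    return points


# Example: two detections above a 0.5 threshold, one below.
print(detections_to_points([(120.5, 340.2, 0.91),
                            (88.0, 410.7, 0.62),
                            (300.1, 55.9, 0.18)]))
```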
The preliminary evaluation phase for MIDOG 2025, this year’s MICCAI challenge, is coming up in less than four days. Time to talk about what it is there for, and what it is not:
– Docker submission is not easy, and we know it. This is why we prepared templates to fill out so that everything runs smoothly. Still, there are a thousand reasons why it won’t. And this is why we have the evaluation phase: participants can see whether their solution works or not.
– What this phase is explicitly not meant for is optimizing algorithms on it. We know it’s tempting to try to one-up your own score on the public leaderboard of the preliminary evaluation set, but as previous challenges have shown, this typically won’t help you in the final evaluation, as it leads to overfitting on that dataset. The challenge test set has different statistical properties: it uses different tumor types and, in the case of track 1, even different selection criteria. The preliminary test set consists entirely of hotspot ROIs, while the final test set also contains random ROIs and challenging ROIs.
TL;DR: Smart participants don’t optimize on the preliminary evaluation set, but find a better proxy.
With the MIDOG 2025 challenge entering its hot phase, it is time for us to release the templates for Docker submission. We do this for each of the two tracks of our challenge.
For each of the tracks, we also provide a reference baseline method in the same repository. A huge thanks to both Sweta Banerjee and Jonas Ammeling who worked so hard during the past days to make this possible today. You can find the performance of their respective methods in the leaderboards for track 1 and track 2.
We are happy to share that we, again, partnered with grand-challenge.org to run our challenge. Already in previous years, this worked remarkably well, resulting in a great experience for the participants and organizers alike.