Are Pathologist-Defined Labels Reproducible? Comparison of the TUPAC16 Mitotic Figure Dataset with an Alternative Set of Labels

Bertram CA, Veta M, Marzahl C, Stathonikos N, Maier A, Klopfleisch R, Aubreville M (2020)


Publication Type: Conference contribution

Publication year: 2020


Publisher: Springer Science and Business Media Deutschland GmbH

Book Volume: 12446 LNCS

Pages Range: 204-213

Conference Proceedings Title: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Event location: Lima, PE

ISBN: 9783030611651

DOI: 10.1007/978-3-030-61166-8_22

Abstract

Pathologist-defined labels are the gold standard for histopathological datasets, despite well-known limitations in consistency for some tasks. To date, several datasets of mitotic figures are available and have been used to develop promising deep learning-based algorithms. In order to assess the robustness of these algorithms and the reproducibility of their methods, it is necessary to test them on several independent datasets. The influence of the different labeling methods used for these available datasets is currently unknown. To tackle this, we present an alternative set of labels for the images of the auxiliary mitosis dataset of the TUPAC16 challenge. In addition to manual mitotic figure screening, we used a novel, algorithm-aided labeling process that allowed us to minimize the risk of missing rare mitotic figures in the images. All potential mitotic figures were independently assessed by two pathologists. The novel, publicly available set of labels contains 1,999 mitotic figures (+28.80%) and additionally includes 10,483 labels of cells with high similarity to mitotic figures (hard examples). Using a standard deep learning object detection architecture, we found a significant difference in F1 scores between the original label set (0.549) and the new alternative label set (0.735). The models trained on the alternative set showed higher overall confidence values, suggesting a higher overall label consistency. The findings of the present study show that pathologist-defined labels may vary significantly, resulting in notable differences in model performance. Comparisons of deep learning-based algorithms between independent datasets with different labeling methods should therefore be made with caution.
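The F1 comparison above depends on how predicted mitotic figures are matched to ground-truth labels. As a minimal illustration (not the paper's evaluation protocol), the Python sketch below computes detection precision, recall, and F1 under an assumed greedy, distance-based matching with a hypothetical 25-pixel threshold; it shows how identical detections can score differently against two label sets.

```python
# Minimal sketch of distance-based detection F1.
# The greedy matching and the 25 px threshold are illustrative
# assumptions, not the evaluation criterion used in the paper.
from math import hypot

def detection_f1(preds, gts, max_dist=25.0):
    """preds, gts: lists of (x, y) centroids; returns (precision, recall, f1)."""
    unmatched_gts = list(gts)
    tp = 0
    for px, py in preds:
        # Find the closest still-unmatched ground-truth annotation.
        best = min(unmatched_gts,
                   key=lambda g: hypot(g[0] - px, g[1] - py),
                   default=None)
        if best is not None and hypot(best[0] - px, best[1] - py) <= max_dist:
            tp += 1
            unmatched_gts.remove(best)
    fp = len(preds) - tp          # detections with no matching label
    fn = len(unmatched_gts)       # labels with no matching detection
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Example: the same detections scored against two hypothetical label sets.
detections = [(10, 10), (50, 52), (200, 200)]
labels_a = [(11, 9), (300, 300)]                 # sparser labeling
labels_b = [(11, 9), (51, 50), (201, 199)]       # more complete labeling
print(detection_f1(detections, labels_a))        # lower F1 (0.40)
print(detection_f1(detections, labels_b))        # higher F1 (1.00)
```

Because false positives and false negatives are both defined relative to the label set, a more complete set of annotations can raise the measured F1 of the very same model, which is the effect the study quantifies.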


How to cite

APA:

Bertram, C.A., Veta, M., Marzahl, C., Stathonikos, N., Maier, A., Klopfleisch, R., & Aubreville, M. (2020). Are Pathologist-Defined Labels Reproducible? Comparison of the TUPAC16 Mitotic Figure Dataset with an Alternative Set of Labels. In Jaime Cardoso, Wilson Silva, Ricardo Cruz, Hien Van Nguyen, Badri Roysam, Nicholas Heller, Pedro Henriques Abreu, Jose Pereira Amorim, Ivana Isgum, Vishal Patel, Kevin Zhou, Steve Jiang, Ngan Le, Khoa Luu, Raphael Sznitman, Veronika Cheplygina, Samaneh Abbasi, Diana Mateus, Emanuele Trucco (Eds.), Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (pp. 204-213). Lima, PE: Springer Science and Business Media Deutschland GmbH.

MLA:

Bertram, Christof A., et al. "Are Pathologist-Defined Labels Reproducible? Comparison of the TUPAC16 Mitotic Figure Dataset with an Alternative Set of Labels." Proceedings of the IMIMIC 2020, MIL3ID 2020, LABELS 2020, Lima. Ed. Jaime Cardoso, Wilson Silva, Ricardo Cruz, Hien Van Nguyen, Badri Roysam, Nicholas Heller, Pedro Henriques Abreu, Jose Pereira Amorim, Ivana Isgum, Vishal Patel, Kevin Zhou, Steve Jiang, Ngan Le, Khoa Luu, Raphael Sznitman, Veronika Cheplygina, Samaneh Abbasi, Diana Mateus, Emanuele Trucco. Springer Science and Business Media Deutschland GmbH, 2020. 204-213.
