Style-Extracting Diffusion Models for Semi-supervised Histopathology Segmentation

Öttl M, Wilm F, Steenpaß J, Qiu J, Rübner M, Hartmann A, Beckmann M, Fasching P, Maier A, Erber R, Kainz B, Breininger K (2025)


Publication Type: Conference contribution

Publication year: 2025


Publisher: Springer Science and Business Media Deutschland GmbH

Book Volume: 15133 LNCS

Pages Range: 236-252

Conference Proceedings Title: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Event location: Milan, Italy

ISBN: 9783031732256

DOI: 10.1007/978-3-031-73226-3_14

Abstract

Deep learning-based image generation has seen significant advancements with diffusion models, notably improving the quality of generated images. Despite these developments, generating images with unseen characteristics that benefit downstream tasks has received limited attention. To bridge this gap, we propose Style-Extracting Diffusion Models, which feature two conditioning mechanisms: 1) a style conditioning mechanism that injects style information from previously unseen images during image generation, and 2) a content conditioning mechanism that can be targeted to a downstream task, e.g., layout for segmentation. We introduce a trainable style encoder that extracts style information from images, and an aggregation block that merges style information from multiple style inputs. This architecture enables zero-shot generation of images with unseen styles by leveraging styles from unseen images, resulting in more diverse generations. In this work, we use the image layout as the target condition and first demonstrate the capability of our method on a natural image dataset as a proof of concept. We further demonstrate its versatility in histopathology, where we combine prior knowledge about tissue composition with unannotated data to create diverse synthetic images with known layouts. This allows us to generate additional synthetic data to train a segmentation network in a semi-supervised fashion. We verify the added value of the generated images by showing improved segmentation results and lower performance variability between patients when synthetic images are included during segmentation training. The code of the method is publicly available at https://github.com/OettlM/STEDM.
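The two conditioning mechanisms described above can be illustrated with a minimal sketch: a toy style encoder maps each unseen style image to an embedding, an aggregation block merges embeddings from multiple style inputs, and the result is combined with a layout (content) condition. All names, shapes, and operations here are illustrative assumptions for exposition, not the authors' implementation (see the linked repository for the actual code).

```python
# Hypothetical sketch of the style/content conditioning described in the
# abstract. Shapes and operations are assumptions, not the STEDM code.
import numpy as np

rng = np.random.default_rng(0)

def style_encoder(image, W):
    """Toy style encoder: global-average-pool the image, then a linear map."""
    pooled = image.mean(axis=(0, 1))   # (C,) per-channel statistics
    return W @ pooled                  # (D,) style embedding

def aggregate_styles(embeddings):
    """Aggregation block: merge style info from multiple style inputs."""
    return np.mean(embeddings, axis=0)  # simple mean pooling as a stand-in

def make_condition(layout, style):
    """Combine the layout (content) condition with the style embedding."""
    return np.concatenate([layout.ravel().astype(float), style])

C, D = 3, 8                            # image channels, style embedding dim
W = rng.standard_normal((D, C))        # toy encoder weights

# Style information is drawn from several previously unseen images
# (the zero-shot setting described in the abstract).
style_images = [rng.random((16, 16, C)) for _ in range(4)]
styles = np.stack([style_encoder(im, W) for im in style_images])  # (4, D)
style = aggregate_styles(styles)                                  # (D,)

layout = rng.integers(0, 2, size=(4, 4))   # coarse segmentation layout
cond = make_condition(layout, style)       # conditioning vector for the model
print(cond.shape)                          # (16 + 8,) = (24,)
```

In the paper, the aggregated style embedding and the layout would condition the diffusion model's denoising network; here they are simply concatenated to show how the two signals remain separable (layout fixed, styles varied) when generating diverse synthetic images with known layouts.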


How to cite

APA:

Öttl, M., Wilm, F., Steenpaß, J., Qiu, J., Rübner, M., Hartmann, A.,... Breininger, K. (2025). Style-Extracting Diffusion Models for Semi-supervised Histopathology Segmentation. In Aleš Leonardis, Elisa Ricci, Stefan Roth, Olga Russakovsky, Torsten Sattler, Gül Varol (Eds.), Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (pp. 236-252). Milan, IT: Springer Science and Business Media Deutschland GmbH.

MLA:

Öttl, Mathias, et al. "Style-Extracting Diffusion Models for Semi-supervised Histopathology Segmentation." Proceedings of the 18th European Conference on Computer Vision, ECCV 2024, Milan. Ed. Aleš Leonardis, Elisa Ricci, Stefan Roth, Olga Russakovsky, Torsten Sattler, Gül Varol. Springer Science and Business Media Deutschland GmbH, 2025. 236-252.
