Mei S, Fan F, Wagner F, Thies M, Gu M, Sun Y, Maier A (2024)
Segmentation-Guided Knee Radiograph Generation using Conditional Diffusion Models
Publication Language: English
Publication Type: Conference contribution
Publication year: 2024
Pages Range: 82-85
Event location: Bamberg, Germany
URI: https://arxiv.org/pdf/2404.03541
DOI: 10.48550/arXiv.2404.03541
Open Access Link: https://arxiv.org/pdf/2404.03541
Deep learning-based medical image processing algorithms require representative data during development. Surgical data in particular can be difficult to obtain, and high-quality public datasets are limited. To overcome this limitation and augment datasets, a widely adopted solution is the generation of synthetic images. In this work, we employ conditional diffusion models to generate knee radiographs from contour and bone segmentations. Two distinct strategies are presented for incorporating the segmentation as a condition: into the sampling process (conditional sampling) or into the training process (conditional training). The results demonstrate that both methods can generate realistic images while adhering to the conditioning segmentation. The conditional training method outperforms both the conditional sampling method and a conventional U-Net.
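The two conditioning strategies named in the abstract can be sketched conceptually. The code below is an illustrative NumPy-only sketch, not the paper's implementation: the function names, array shapes, and the guidance rule in `conditional_sampling_step` are assumptions for exposition. Conditional training typically feeds the segmentation to the denoiser as an extra input channel; conditional sampling instead steers a model toward the segmentation at inference time.

```python
import numpy as np

def conditional_training_input(noisy_image, segmentation):
    """Conditional training (sketch): stack the segmentation with the
    noisy image as an extra channel, so the denoiser sees (image + mask)
    and learns the conditioning during training."""
    return np.concatenate([noisy_image, segmentation], axis=0)

def conditional_sampling_step(sample, segmentation, noise_level):
    """Conditional sampling (hypothetical guidance rule): at inference,
    re-noise the segmentation to the current noise level and blend it
    into the sample inside the mask region, nudging the reverse process
    toward the conditioning segmentation."""
    guided = segmentation + noise_level * np.random.randn(*segmentation.shape)
    mask = segmentation > 0
    return np.where(mask, guided, sample)

# Toy shapes: a 1-channel 64x64 "radiograph" and a binary bone mask.
img = np.random.randn(1, 64, 64)
seg = (np.random.rand(1, 64, 64) > 0.5).astype(float)

x = conditional_training_input(img, seg)
print(x.shape)  # (2, 64, 64) -- image and mask stacked as channels

y = conditional_sampling_step(img.copy(), seg, noise_level=0.1)
print(y.shape)  # (1, 64, 64) -- sample shape is unchanged
```

The key design difference the abstract compares: conditional training bakes the condition into the learned network, while conditional sampling leaves the network unchanged and applies the condition only during the reverse (sampling) process.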
APA:
Mei, S., Fan, F., Wagner, F., Thies, M., Gu, M., Sun, Y., & Maier, A. (2024). Segmentation-Guided Knee Radiograph Generation using Conditional Diffusion Models. In Proceedings of the 8th International Conference on Image Formation in X-Ray Computed Tomography (pp. 82-85). Bamberg, Germany.
MLA:
Mei, Siyuan, et al. "Segmentation-Guided Knee Radiograph Generation using Conditional Diffusion Models." Proceedings of the 8th International Conference on Image Formation in X-Ray Computed Tomography, 2024, Bamberg, Germany, pp. 82-85.