Lightweight Visual Transformers Outperform Convolutional Neural Networks for Gram-Stained Image Classification: An Empirical Study

Kim HE, Maros ME, Miethke T, Kittel M, Siegel F, Ganslandt T (2023)


Publication Type: Journal article

Publication year: 2023

Journal: Biomedicines

Volume: 11

Article Number: 1333

Journal Issue: 5

DOI: 10.3390/biomedicines11051333

Abstract

We aimed to automate Gram-stain analysis to speed up the detection of bacterial strains in patients suffering from infections. We performed comparative analyses of visual transformers (VT) using various configurations including model size (small vs. large), training epochs (1 vs. 100), and quantization schemes (tensor- or channel-wise) using float32 or int8 on publicly available (DIBaS, n = 660) and locally compiled (n = 8500) datasets. Six VT models (BEiT, DeiT, MobileViT, PoolFormer, Swin and ViT) were evaluated and compared to two convolutional neural networks (CNN), ResNet and ConvNeXT. Overall performance, including accuracy, inference time, and model size, was also visualized. Frames per second (FPS) of small models consistently surpassed their large counterparts by a factor of 1-2×. DeiT small was the fastest VT in int8 configuration (6.0 FPS). In conclusion, VTs consistently outperformed CNNs for Gram-stain classification in most settings, even on smaller datasets.
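The tensor- and channel-wise int8 quantization schemes mentioned in the abstract can be sketched in a minimal, framework-agnostic way. The NumPy helpers below are illustrative stand-ins (not the pipeline used in the study): tensor-wise quantization uses one scale for the whole weight tensor, while channel-wise quantization assigns each output channel its own scale.

```python
import numpy as np

def quantize_per_tensor(w):
    # Tensor-wise scheme: one scale for the entire weight tensor.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def quantize_per_channel(w):
    # Channel-wise scheme: one scale per output channel (rows = channels).
    scales = np.abs(w).max(axis=1) / 127.0
    q = np.clip(np.round(w / scales[:, None]), -127, 127).astype(np.int8)
    return q, scales

def dequantize(q, scale):
    # Map int8 values back to float32 for comparison with the original.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8)).astype(np.float32)  # toy weight matrix

q_t, s_t = quantize_per_tensor(w)
q_c, s_c = quantize_per_channel(w)

# Per-channel scales adapt to each channel's range, which typically
# (though not always) lowers the reconstruction error.
err_t = np.abs(dequantize(q_t, s_t) - w).mean()
err_c = np.abs(dequantize(q_c, s_c[:, None]) - w).mean()
print(f"tensor-wise error: {err_t:.5f}, channel-wise error: {err_c:.5f}")
```

In either scheme the rounding error per element is bounded by half the applicable scale, which is why channel-wise quantization, with its smaller per-channel scales, is usually the more accurate of the two.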


How to cite

APA:

Kim, H.E., Maros, M.E., Miethke, T., Kittel, M., Siegel, F., & Ganslandt, T. (2023). Lightweight Visual Transformers Outperform Convolutional Neural Networks for Gram-Stained Image Classification: An Empirical Study. Biomedicines, 11(5), 1333. https://doi.org/10.3390/biomedicines11051333

MLA:

Kim, Hee E., et al. "Lightweight Visual Transformers Outperform Convolutional Neural Networks for Gram-Stained Image Classification: An Empirical Study." Biomedicines 11.5 (2023).