Heidorn C, Sabih M, Meyerhöfer N, Schinabeck C, Teich J, Hannig F (2024): Hardware-Aware Evolutionary Explainable Filter Pruning for Convolutional Neural Networks
Publication Language: English
Publication Type: Journal article
Publication year: 2024
Volume: 52
Pages: 40–58
DOI: 10.1007/s10766-024-00760-5
Open Access Link: https://link.springer.com/article/10.1007/s10766-024-00760-5
Filter pruning of convolutional neural networks (CNNs) is a common technique to effectively reduce the memory footprint, the number of arithmetic operations, and, consequently, inference time. Recent pruning approaches also consider the targeted device (e.g., graphics processing units) for CNN deployment to reduce the actual inference time. However, simple metrics, such as the L1-norm, are used for deciding which filters to prune. In this work, we propose a hardware-aware technique to explore the vast multi-objective design space of possible filter pruning configurations. Our approach incorporates not only the targeted device but also techniques from explainable artificial intelligence for ranking and deciding which filters to prune. For each layer, the number of filters to be pruned is optimized with the objective of minimizing the inference time and the error rate of the CNN. Experimental results show that our approach can speed up inference time by 1.40× and 1.30× for VGG-16 on the CIFAR-10 dataset and ResNet-18 on the ILSVRC-2012 dataset, respectively, compared to the state-of-the-art ABCPruner.
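To illustrate the baseline criterion the abstract contrasts against (not the authors' explainability-based method), the following minimal NumPy sketch ranks convolutional filters by their L1-norm and prunes a given fraction of the lowest-ranked ones; the function names and the per-layer pruning ratio are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def l1_filter_ranking(weights: np.ndarray) -> np.ndarray:
    """Rank conv filters by L1-norm (smallest first).

    weights: layer weight tensor of shape (out_channels, in_channels, kH, kW).
    Returns filter indices ordered from least to most important.
    """
    # One L1-norm per output filter: sum of absolute weights over all
    # input channels and kernel positions.
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    return np.argsort(norms)

def prune_filters(weights: np.ndarray, ratio: float) -> np.ndarray:
    """Remove the fraction `ratio` of filters with the smallest L1-norm."""
    n_prune = int(weights.shape[0] * ratio)
    order = l1_filter_ranking(weights)
    keep = np.sort(order[n_prune:])  # keep the rest, in original filter order
    return weights[keep]
```

In the paper's setting, such a per-filter score would be replaced by an explainability-derived importance measure, and the per-layer ratio would be chosen by the evolutionary, hardware-aware search rather than fixed by hand.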
APA:
Heidorn, C., Sabih, M., Meyerhöfer, N., Schinabeck, C., Teich, J., & Hannig, F. (2024). Hardware-Aware Evolutionary Explainable Filter Pruning for Convolutional Neural Networks. International Journal of Parallel Programming, 52, 40–58. https://doi.org/10.1007/s10766-024-00760-5
MLA:
Heidorn, Christian, et al. "Hardware-Aware Evolutionary Explainable Filter Pruning for Convolutional Neural Networks." International Journal of Parallel Programming 52 (2024): 40–58.