Sommer J, Özkan MA, Keszöcze O, Teich J (2022): Efficient Hardware Acceleration of Sparsely Active Convolutional Spiking Neural Networks
Publication Language: English
Publication Type: Journal article, Original article
Publication year: 2022
Book Volume: 41
Pages Range: 3767–3778
Journal Issue: 11
DOI: 10.1109/TCAD.2022.3197512
Abstract: Spiking neural networks (SNNs) compute in an event-based manner to achieve a more efficient computation than standard neural networks. In SNNs, neuronal outputs are not encoded as real-valued activations but as sequences of binary spikes. The motivation for using SNNs over conventional neural networks is rooted in the special computational aspects of spike-based processing, especially the high degree of sparsity of spikes. Well-established implementations of convolutional neural networks (CNNs) feature large spatial arrays of processing elements (PEs) that remain highly underutilized in the face of activation sparsity. We propose a novel architecture optimized for the processing of convolutional SNNs (CSNNs) featuring a high degree of sparsity. The proposed architecture consists of an array of PEs of the size of the kernel of a convolution and an intelligent spike queue that provides a high PE utilization. A constant flow of spikes is ensured by compressing the feature maps into queues that can then be processed spike-by-spike. This compression is performed at run-time, leading to a self-timed schedule. This allows the processing time to scale with the number of spikes. Also, a novel memory organization scheme is introduced to efficiently store and retrieve the membrane potentials of the individual neurons using multiple small parallel on-chip RAMs. Each RAM is hardwired to its PE, reducing switching circuitry. We implemented the proposed architecture on an FPGA and achieved a significant speedup compared to previously proposed SNN implementations (~10 times) while requiring fewer hardware resources and maintaining a higher energy efficiency (~15 times).
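The event-driven processing described in the abstract can be illustrated in software. The sketch below is a minimal single-channel simplification, not the paper's hardware design: the function name, the `'same'`-padding choice, and the hard-reset threshold behavior are assumptions for illustration. It shows the two key ideas: active spike coordinates are compressed into a queue at run-time, and each spike event updates only the K×K neighbourhood of membrane potentials it touches, so work scales with the number of spikes rather than the feature-map size.

```python
import numpy as np

def event_driven_conv_step(spike_map, kernel, v_mem, threshold=1.0):
    """One timestep of event-driven convolution (hypothetical sketch).

    spike_map : (H, W) binary input spikes for this timestep
    kernel    : (K, K) synaptic weights (single channel for simplicity)
    v_mem     : (H, W) membrane potentials, updated in place ('same' padding)
    Returns the (H, W) binary output spike map for this timestep.
    """
    H, W = spike_map.shape
    K = kernel.shape[0]
    pad = K // 2

    # Run-time compression: only the coordinates of active spikes enter
    # the queue, so processing time scales with the number of spikes.
    queue = list(zip(*np.nonzero(spike_map)))

    # Process spike-by-spike: each event scatters its weight contributions
    # to the K x K output neighbourhood it influences.
    for (y, x) in queue:
        for ky in range(K):
            for kx in range(K):
                oy, ox = y - ky + pad, x - kx + pad
                if 0 <= oy < H and 0 <= ox < W:
                    # In the hardware, each of the K x K PEs holds a disjoint
                    # slice of v_mem in its own hardwired RAM bank; here a
                    # single array stands in for all banks.
                    v_mem[oy, ox] += kernel[ky, kx]

    # Threshold comparison: neurons that cross the threshold emit an
    # output spike and are reset (hard reset assumed here).
    out_spikes = (v_mem >= threshold).astype(np.uint8)
    v_mem[out_spikes == 1] = 0.0
    return out_spikes
```

Because the dense spike map never drives the inner loops directly, a mostly-empty map costs almost nothing, which mirrors how the proposed PE array stays utilized only by actual spike traffic.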
APA:
Sommer, J., Özkan, M.A., Keszöcze, O., & Teich, J. (2022). Efficient Hardware Acceleration of Sparsely Active Convolutional Spiking Neural Networks. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 41(11), 3767–3778. https://doi.org/10.1109/TCAD.2022.3197512
MLA:
Sommer, Jan, et al. "Efficient Hardware Acceleration of Sparsely Active Convolutional Spiking Neural Networks." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 41.11 (2022): 3767–3778.
BibTeX:
@article{Sommer2022TCAD,
  author  = {Sommer, J. and {\"O}zkan, M. A. and Kesz{\"o}cze, O. and Teich, J.},
  title   = {Efficient Hardware Acceleration of Sparsely Active Convolutional Spiking Neural Networks},
  journal = {IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems},
  year    = {2022},
  volume  = {41},
  number  = {11},
  pages   = {3767--3778},
  doi     = {10.1109/TCAD.2022.3197512}
}