Efficient Hardware Acceleration of Sparsely Active Convolutional Spiking Neural Networks

Sommer J, Özkan MA, Keszöcze O, Teich J (2022)


Publication Language: English

Publication Type: Conference contribution, Original article

Publication year: 2022

Series: Proceedings of the International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS)

Event location: Shanghai, CN

DOI: 10.1109/TCAD.2022.3197512

Abstract

Spiking neural networks (SNNs) compute in an event-based manner to achieve a more efficient computation than standard neural networks. In SNNs, neuronal outputs are not encoded as real-valued activations but as sequences of binary spikes. The motivation for using SNNs over conventional neural networks is rooted in the special computational aspects of spike-based processing, especially the high degree of sparsity of spikes. Well-established implementations of convolutional neural networks (CNNs) feature large spatial arrays of processing elements (PEs) that remain highly underutilized in the face of activation sparsity. We propose a novel architecture optimized for the processing of convolutional SNNs (CSNNs) featuring a high degree of sparsity. The proposed architecture consists of an array of PEs of the size of the kernel of a convolution and an intelligent spike queue that provides a high PE utilization. A constant flow of spikes is ensured by compressing the feature maps into queues that can then be processed spike by spike. This compression is performed at run time, leading to a self-timed schedule. This allows the processing time to scale with the number of spikes. In addition, a novel memory organization scheme is introduced to efficiently store and retrieve the membrane potentials of the individual neurons using multiple small parallel on-chip RAMs. Each RAM is hardwired to its PE, reducing switching circuitry. We implemented the proposed architecture on an FPGA and achieved a significant speedup compared to previously proposed SNN implementations (~10 times) while requiring fewer hardware resources and achieving higher energy efficiency (~15 times).
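The queue-based, self-timed processing described in the abstract can be illustrated with a small software sketch. This is not the paper's hardware design; it is a minimal Python model assuming a single-channel layer, "same" padding, and a single time step, with hypothetical function names. It shows the two ideas the abstract names: compressing a sparse binary feature map into a spike queue at run time, and scattering each queued spike through the kernel so that work scales with the number of spikes rather than the feature-map size.

```python
import numpy as np

# Illustrative sketch (not the authors' RTL implementation): run-time
# compression of a binary spike map into a queue, followed by event-driven
# accumulation of membrane potentials, one queued spike at a time.

def compress_to_spike_queue(spike_map):
    """Compress a binary feature map into a queue of spike coordinates."""
    ys, xs = np.nonzero(spike_map)
    return list(zip(ys.tolist(), xs.tolist()))

def process_spikes(spike_map, kernel, threshold=1.0):
    """Scatter each queued spike through a k x k kernel ('same' padding),
    then fire every output neuron whose potential reaches the threshold."""
    H, W = spike_map.shape
    k = kernel.shape[0]          # square kernel assumed
    pad = k // 2
    v = np.zeros((H, W))         # membrane potentials of the output neurons
    for (y, x) in compress_to_spike_queue(spike_map):
        # Each input spike updates only the k*k output neurons it covers,
        # so total work is proportional to the number of spikes.
        for ky in range(k):
            for kx in range(k):
                oy, ox = y + pad - ky, x + pad - kx
                if 0 <= oy < H and 0 <= ox < W:
                    v[oy, ox] += kernel[ky, kx]
    fired = list(zip(*np.nonzero(v >= threshold)))
    return v, fired
```

In the hardware architecture, the inner k x k scatter would map onto the kernel-sized PE array, with each PE's membrane potentials held in its own hardwired on-chip RAM; the sketch above only mirrors the dataflow, not that memory organization.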

How to cite

APA:

Sommer, J., Özkan, M.A., Keszöcze, O., & Teich, J. (2022). Efficient Hardware Acceleration of Sparsely Active Convolutional Spiking Neural Networks. In Proceedings of the International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS). Shanghai, CN.

MLA:

Sommer, Jan, et al. "Efficient Hardware Acceleration of Sparsely Active Convolutional Spiking Neural Networks." Proceedings of the International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), Shanghai 2022.
