A TTFS-based energy and utilization efficient neuromorphic CNN accelerator.

Keywords: artificial neural networks (ANNs); brain-inspired networks; neuromorphic hardware; spiking neural networks (SNNs); time-to-first-spike

Journal

Frontiers in Neuroscience
ISSN: 1662-4548
Abbreviated title: Front Neurosci
Country: Switzerland
NLM ID: 101478481

Publication information

Publication date:
2023
History:
received: 2022-12-12
accepted: 2023-04-10
medline: 2023-05-22
pubmed: 2023-05-22
entrez: 2023-05-22
Status: epublish

Abstract

Spiking neural networks (SNNs), a form of neuromorphic, brain-inspired AI, have the potential to be a power-efficient alternative to artificial neural networks (ANNs). Spikes, the activations in SNN systems, tend to be extremely sparse and low in number, which minimizes the number of data accesses typically needed for processing. In addition, SNN systems are typically designed to use addition operations, which consume much less energy than the multiply-and-accumulate operations used in DNN systems. The vast majority of neuromorphic hardware designs support rate-based SNNs, where information is encoded by spike rates. Rate-based SNNs can be inefficient, as a large number of spikes must be transmitted and processed during inference. One coding scheme with the potential to improve efficiency is time-to-first-spike (TTFS) coding, where information is represented not by the frequency of spikes but by the relative spike arrival times. In TTFS-based SNNs, each neuron can spike only once during the entire inference process, which results in high sparsity. The activation sparsity of TTFS-based SNNs is higher than that of rate-based SNNs, but TTFS-based SNNs have yet to achieve the same accuracy as rate-based SNNs. In this work, we propose two key improvements for TTFS-based SNN systems: (1) a novel optimization algorithm to improve the accuracy of TTFS-based SNNs and (2) a novel hardware accelerator for TTFS-based SNNs that uses a scalable and low-power design. Our work on TTFS coding and training improves the accuracy of TTFS-based SNNs, achieving state-of-the-art results on the MNIST and Fashion-MNIST datasets, while reducing power consumption by at least 2.4×, 25.9×, and 38.4× over state-of-the-art neuromorphic hardware on MNIST, Fashion-MNIST, and CIFAR10, respectively.
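To make the coding scheme concrete, the sketch below illustrates the generic idea of TTFS encoding described in the abstract: each input value is mapped to at most one spike, with stronger inputs firing earlier. The function name, the linear value-to-latency mapping, and the `t_max` window are illustrative assumptions, not the encoding or training scheme used by the authors.

```python
def ttfs_encode(values, t_max=100.0, eps=1e-6):
    """Map normalized input values in [0, 1] to first-spike times.

    A value of 1.0 fires immediately (t = 0); weaker values fire
    later, approaching t_max; values at or near zero never fire
    (returned as None). Because every neuron emits at most one spike,
    the resulting activity is highly sparse, which is the property
    TTFS coding exploits for efficiency.
    """
    times = []
    for v in values:
        if v <= eps:
            times.append(None)  # silent neuron: no spike at all
        else:
            times.append(t_max * (1.0 - v))  # linear intensity-to-latency map
    return times

# Example: three pixel intensities mapped to spike latencies
print(ttfs_encode([1.0, 0.5, 0.0]))  # [0.0, 50.0, None]
```

In contrast, a rate-based code would represent the value 0.5 with a train of many spikes over the same window; here it costs exactly one spike, which is the sparsity advantage the abstract refers to.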

Identifiers

pubmed: 37214405
doi: 10.3389/fnins.2023.1121592
pmc: PMC10198466

Publication types

Journal Article

Languages

eng

Pagination

1121592

Copyright information

Copyright © 2023 Yu, Xiang, P., Chu, Amornpaisannon, Tavva, Miriyala and Carlson.

Conflict of interest statement

VM was employed by the National University of Singapore at the time of the paper. VM is currently employed by Advanced Micro Devices (Singapore) Pte. Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.


Authors

Miao Yu (M)

School of Computing, Department of Computer Science, National University of Singapore, Singapore, Singapore.

Tingting Xiang (T)

School of Computing, Department of Computer Science, National University of Singapore, Singapore, Singapore.

Srivatsa P (S)

School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA, United States.

Kyle Timothy Ng Chu (KTN)

Centre for Quantum Technologies, National University of Singapore, Singapore, Singapore.

Burin Amornpaisannon (B)

School of Computing, Department of Computer Science, National University of Singapore, Singapore, Singapore.

Yaswanth Tavva (Y)

School of Computing, Department of Computer Science, National University of Singapore, Singapore, Singapore.

Venkata Pavan Kumar Miriyala (VPK)

School of Computing, Department of Computer Science, National University of Singapore, Singapore, Singapore.

Trevor E Carlson (TE)

School of Computing, Department of Computer Science, National University of Singapore, Singapore, Singapore.

MeSH classifications