Surrogate gradients for analog neuromorphic computing.
Keywords
neuromorphic hardware
recurrent neural networks
self-calibration
spiking neural networks
surrogate gradients
Journal
Proceedings of the National Academy of Sciences of the United States of America
ISSN: 1091-6490
Abbreviated title: Proc Natl Acad Sci U S A
Country: United States
NLM ID: 7505876
Publication information
Publication date: 2022 Jan 25
History:
accepted: 2021 Nov 25
entrez: 2022 Jan 19
pubmed: 2022 Jan 20
medline: 2022 Mar 1
Status: ppublish
Abstract
To rapidly process temporal information at a low metabolic cost, biological neurons integrate inputs as an analog sum, but communicate with spikes, binary events in time. Analog neuromorphic hardware uses the same principles to emulate spiking neural networks with exceptional energy efficiency. However, instantiating high-performing spiking networks on such hardware remains a significant challenge due to device mismatch and the lack of efficient training algorithms. Surrogate gradient learning has emerged as a promising training strategy for spiking networks, but its applicability for analog neuromorphic systems has not been demonstrated. Here, we demonstrate surrogate gradient learning on the BrainScaleS-2 analog neuromorphic system using an in-the-loop approach. We show that learning self-corrects for device mismatch, resulting in competitive spiking network performance on both vision and speech benchmarks. Our networks display sparse spiking activity with, on average, less than one spike per hidden neuron and input, perform inference at rates of up to 85,000 frames per second, and consume less than 200 mW. In summary, our work sets several benchmarks for low-energy spiking network processing on analog neuromorphic hardware and paves the way for future on-chip learning algorithms.
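The abstract names surrogate gradient learning as the training strategy. The following minimal sketch illustrates the core idea in PyTorch, independent of the paper's actual BrainScaleS-2 implementation: the forward pass emits binary spikes through a hard threshold, while the backward pass substitutes a smooth surrogate derivative so that gradients can flow. The class name SurrogateSpike and the steepness constant beta = 10.0 are illustrative assumptions, not taken from the paper.

import torch

class SurrogateSpike(torch.autograd.Function):
    # Heaviside spike nonlinearity with a SuperSpike-style
    # fast-sigmoid surrogate derivative in the backward pass.
    beta = 10.0  # surrogate steepness; illustrative value, not from the paper

    @staticmethod
    def forward(ctx, v):
        # v: membrane potential measured relative to the firing threshold
        ctx.save_for_backward(v)
        return (v > 0.0).float()  # binary spike: a non-differentiable step

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Replace the step's zero/undefined derivative with the smooth
        # surrogate 1 / (beta * |v| + 1)^2, which is nonzero everywhere.
        surrogate = 1.0 / (SurrogateSpike.beta * v.abs() + 1.0) ** 2
        return grad_output * surrogate

# Gradients now flow through the spiking nonlinearity:
v = torch.randn(4, requires_grad=True)
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()
print(v.grad)  # nonzero, despite the binary forward pass

In the in-the-loop approach described in the abstract, the forward pass would instead be executed on the analog chip and the recorded spikes fed back to the host, where a surrogate of this kind is applied during backpropagation; the sketch above runs entirely in software.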
Identifiers
pubmed: 35042792
pii: 2109194119
doi: 10.1073/pnas.2109194119
pmc: PMC8794842
Publication types
Journal Article
Research Support, Non-U.S. Gov't
Languages
eng
Citation subset
IM
Copyright information
Copyright © 2022 the Author(s). Published by PNAS.
Conflict of interest statement
The authors declare no competing interest.