Engineering a Less Artificial Intelligence.
Keywords: artificial intelligence; generalization; inductive bias; machine learning; neuroscience; robustness; sensory systems
Journal
Neuron
ISSN: 1097-4199
Abbreviated title: Neuron
Country: United States
NLM ID: 8809320
Publication information
Publication date: 2019-09-25
History:
received: 2019-05-09
revised: 2019-08-09
accepted: 2019-08-21
entrez: 2019-09-27
pubmed: 2019-09-27
medline: 2020-03-25
Status: ppublish
Abstract
Despite enormous progress in machine learning, artificial neural networks still lag behind brains in their ability to generalize to new situations. Given identical training data, differences in generalization are caused by many defining features of a learning algorithm, such as network architecture and learning rule. Their joint effect, called "inductive bias," determines how well any learning algorithm, or brain, generalizes: robust generalization needs good inductive biases. Artificial networks use rather nonspecific biases and often latch onto patterns that are informative only about the statistics of the training data but may not generalize to different scenarios. Brains, on the other hand, generalize across comparatively drastic changes in the sensory input all the time. We highlight some shortcomings of state-of-the-art learning algorithms compared to biological brains and discuss several ideas about how neuroscience can guide the quest for better inductive biases by providing useful constraints on representations and network architecture.
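The abstract's central claim can be made concrete with a toy sketch (not from the paper; the data and both learners are hypothetical): two learners fit identical training data perfectly, yet their inductive biases lead to very different predictions outside the training range.

```python
# Hypothetical training data following y = 2x + 1 on x in [0, 4].
train = [(x, 2 * x + 1) for x in range(5)]  # (0, 1) ... (4, 9)

# Learner A: linear model (strong bias: "outputs vary linearly with inputs").
# Fit via closed-form ordinary least squares.
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def linear_predict(x):
    return slope * x + intercept

# Learner B: 1-nearest neighbor (weak bias: "nearby inputs give similar outputs").
def nn_predict(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Both agree on the training data, but only the linear bias extrapolates:
print(linear_predict(10))  # 21.0 (matches the true rule y = 2x + 1)
print(nn_predict(10))      # 9    (stuck at the nearest training point)
```

With identical training data, the difference at x = 10 is driven entirely by the learners' inductive biases; neither bias is "better" in general, only better matched (or not) to the scenario the learner is deployed in.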
Identifiers
pubmed: 31557461
pii: S0896-6273(19)30740-8
doi: 10.1016/j.neuron.2019.08.034
Publication types
Journal Article
Research Support, N.I.H., Extramural
Research Support, Non-U.S. Gov't
Research Support, U.S. Gov't, Non-P.H.S.
Review
Languages
eng
Citation subset
IM
Pagination
967-979
Copyright information
Copyright © 2019 Elsevier Inc. All rights reserved.