Non-smooth Bayesian learning for artificial neural networks.
Keywords: Artificial neural networks; Hamiltonian dynamics; Machine learning; Optimization
Journal
Journal of Ambient Intelligence and Humanized Computing
ISSN: 1868-5137
Abbreviated title: J Ambient Intell Humaniz Comput
Country: Germany
NLM ID: 101538212
Publication information
Publication date: 25 Jun 2022
History:
Received: 11 Nov 2021
Accepted: 30 May 2022
Entrez: 5 Jul 2022
PubMed: 6 Jul 2022
MEDLINE: 6 Jul 2022
Status: ahead of print
Abstract
Artificial neural networks (ANNs) are widely used in supervised machine learning to analyze signals or images in many applications. Given an annotated learning database, one of the main challenges is to optimize the network weights. Much work has been devoted to solving or improving optimization problems in machine learning, including gradient-based methods, Newton-type methods, and meta-heuristic methods. For the sake of efficiency, regularization is generally used. When non-smooth regularizers are used, especially to promote sparse networks, such as the
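The abstract (truncated above) refers to non-smooth regularizers used to promote sparse networks. For illustration only, the sketch below shows one common way such a non-smooth penalty is handled: an ℓ1 regularizer minimized with a proximal-gradient (soft-thresholding) step on a toy linear model. This is not the Bayesian/Hamiltonian method proposed in the article; all names and values are hypothetical.

```python
import numpy as np

def soft_threshold(w, t):
    # Proximal operator of the l1 norm: shrinks weights toward zero,
    # which is what makes a non-smooth l1 penalty promote sparsity.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

# Toy data: sparse linear regression as a stand-in for one network layer.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 0.5]          # only 3 truly active weights
y = X @ w_true + 0.01 * rng.normal(size=100)

w = np.zeros(20)
lr, lam = 0.01, 0.1                     # step size and l1 strength (illustrative)
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)   # gradient of the smooth data-fit term
    w = soft_threshold(w - lr * grad, lr * lam)  # ISTA-style proximal update

print("non-zero weight indices:", np.flatnonzero(np.abs(w) > 1e-6))
```

Running this recovers a sparse weight vector, illustrating why non-smooth penalties require proximal or sampling-based treatments rather than plain gradient descent.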
Identifiers
pubmed: 35789599
doi: 10.1007/s12652-022-04073-8
pii: 4073
pmc: PMC9244188
Publication types
Journal Article
Languages
eng
Pagination
1-19
Copyright information
© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2022.
References
IEEE Trans Cybern. 2020 Aug;50(8):3668-3681
pubmed: 31751262
J Ambient Intell Humaniz Comput. 2021 Sep 18;:1-21
pubmed: 34567277
Sensors (Basel). 2021 Mar 03;21(5):
pubmed: 33802357
Nat Commun. 2018 Jun 19;9(1):2383
pubmed: 29921910
Comput Intell Neurosci. 2016;2016:1537325
pubmed: 27375738
J Chem Inf Comput Sci. 2004 Jan-Feb;44(1):1-12
pubmed: 14741005