Conditional Deep Gaussian Processes: Empirical Bayes Hyperdata Learning.
Keywords
Bayesian learning; approximate inference; deep Gaussian process; deep kernel learning; inducing points; moment matching; neural network
Journal
Entropy (Basel, Switzerland)
ISSN: 1099-4300
Abbreviated title: Entropy (Basel)
Country: Switzerland
NLM ID: 101243874
Publication information
Publication date: 23 Oct 2021
History:
received: 01 Oct 2021
revised: 18 Oct 2021
accepted: 20 Oct 2021
entrez: 27 Nov 2021
pubmed: 28 Nov 2021
medline: 28 Nov 2021
Status: epublish
Abstract
It is desirable to combine the expressive power of deep learning with the principled uncertainty quantification of Gaussian processes (GPs) in a single expressive Bayesian learning model. Deep kernel learning achieved this by using a deep network for feature extraction and a GP as the function model. Recently, however, it was suggested that, although trained with the marginal likelihood, the deterministic nature of the feature extractor can lead to overfitting, and that replacing it with a Bayesian network appears to cure the problem. Here, we propose the conditional deep Gaussian process (DGP), in which the intermediate GPs in the hierarchical composition are supported by hyperdata while the exposed GP retains a zero mean. Motivated by the inducing points in sparse GPs, the hyperdata also play the role of function supports, but they are hyperparameters rather than random variables. Following our previous moment matching approach, we approximate the marginal prior of the conditional DGP with a GP carrying an effective kernel. Thus, as in empirical Bayes, the hyperdata are learned by optimizing the approximate marginal likelihood, which depends on the hyperdata implicitly through the kernel. We show equivalence with deep kernel learning in the limit of dense hyperdata in the latent space; however, the conditional DGP and the corresponding approximate inference enjoy the benefit of being more Bayesian than deep kernel learning. Preliminary extrapolation results demonstrate the expressive power gained from the depth of the hierarchy, by exploiting the exact covariance and hyperdata learning, in comparison with GP kernel composition, DGP variational inference, and deep kernel learning. We also address the non-Gaussian aspects of our model and a way of upgrading to full Bayesian inference.
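The learning rule described in the abstract is standard empirical Bayes (type-II maximum likelihood): hyperparameters, including the hyperdata, are chosen by maximizing the approximate marginal likelihood, into which the hyperdata enter only through the moment-matched effective kernel. The sketch below illustrates this mechanism for a plain zero-mean GP with an RBF kernel; it is not the authors' code, and all names and the toy data are illustrative assumptions. In the paper's setting, the parameter vector would additionally contain the hyperdata, with the RBF kernel replaced by the effective kernel of the conditional DGP.

```python
# Minimal sketch of empirical Bayes (type-II maximum likelihood) for a GP.
# Illustrative only: in the conditional DGP, `theta` would also include the
# hyperdata, entering through a moment-matched effective kernel.
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X1, X2, log_lengthscale, log_variance):
    """Squared-exponential kernel k(x, x') = s^2 exp(-|x - x'|^2 / (2 l^2))."""
    l = np.exp(log_lengthscale)
    s2 = np.exp(2.0 * log_variance)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return s2 * np.exp(-0.5 * d2 / l**2)

def neg_log_marginal_likelihood(theta, X, y, noise=1e-2):
    """-log p(y | X, theta) for a zero-mean GP; theta packs the hyperparameters."""
    K = rbf_kernel(X, X, theta[0], theta[1]) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(y) * np.log(2 * np.pi)

# Toy data: learn the hyperparameters by maximizing the marginal likelihood,
# i.e. minimizing its negative log with a generic optimizer.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
res = minimize(neg_log_marginal_likelihood, x0=np.zeros(2), args=(X, y))
print("learned lengthscale:", np.exp(res.x[0]), "signal std:", np.exp(res.x[1]))
```

The design point the abstract makes is that the hyperdata are optimized exactly like the kernel parameters above, rather than being integrated out as random variables; this is what distinguishes the empirical Bayes treatment from a full Bayesian one.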
Identifiers
pubmed: 34828085
pii: e23111387
doi: 10.3390/e23111387
pmc: PMC8618322
Publication types
Journal Article
Languages
eng
Grants
Agency: Defense Advanced Research Projects Agency
ID: FA8750-17-2-0146