Reinforcement Learning-Based Nearly Optimal Control for Constrained-Input Partially Unknown Systems Using Differentiator.
Journal
IEEE Transactions on Neural Networks and Learning Systems
ISSN: 2162-2388
Abbreviated title: IEEE Trans Neural Netw Learn Syst
Country: United States
NLM ID: 101616214
Publication information
Publication date: Nov 2020
History:
pubmed: 28 Dec 2019
medline: 28 Dec 2019
entrez: 28 Dec 2019
Status: ppublish
Abstract
In this article, a synchronous reinforcement-learning-based algorithm is developed for input-constrained, partially unknown systems. The proposed controller also removes the need for an initial stabilizing control. A first-order robust exact differentiator is employed to approximate the unknown drift dynamics. Critic, actor, and disturbance neural networks (NNs) are established to approximate the value function, the control policy, and the disturbance policy, respectively. The Hamilton-Jacobi-Isaacs equation is solved by applying the value function approximation technique. The stability of the closed-loop system is ensured, and the state and weight errors of the three NNs are all uniformly ultimately bounded. Finally, simulation results are provided to verify the effectiveness of the proposed method.
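The core idea summarized in the abstract — a critic NN approximating the value function whose Bellman (Hamiltonian) residual is driven to zero, with a tanh-saturated policy enforcing the input constraint — can be illustrated with a minimal sketch. The 1-D dynamics, polynomial critic basis, quadratic stage cost, and tuning gains below are illustrative assumptions, not the paper's actual setup (which uses an exact differentiator for unknown drift dynamics and a three-NN critic/actor/disturbance structure):

```python
import numpy as np

# Illustrative 1-D sketch: critic weights w approximate V(x) ~= w^T phi(x);
# the constrained policy u = -lam * tanh(.) keeps |u| <= lam.
lam = 1.0                                  # assumed input saturation bound
phi = lambda x: np.array([x**2, x**4])     # assumed polynomial critic basis
dphi = lambda x: np.array([2*x, 4*x**3])   # gradient of the basis
f = lambda x: -x                           # assumed drift dynamics (known here)
g = 1.0                                    # assumed input gain

def policy(w, x):
    # Policy derived from the value gradient; tanh enforces the constraint
    return -lam * np.tanh(g * (dphi(x) @ w) / (2 * lam))

def bellman_residual(w, x):
    # Residual of the approximate Hamilton-Jacobi equation at state x
    u = policy(w, x)
    xdot = f(x) + g * u
    cost = x**2 + u**2          # simplified quadratic stage cost
    return dphi(x) @ w * xdot + cost

# Normalized gradient descent on the squared residual over sampled states
w = np.zeros(2)
xs = np.linspace(-1.0, 1.0, 21)
for _ in range(2000):
    for x in xs:
        eps = 1e-6
        grad = np.array([(bellman_residual(w + eps * e, x)
                          - bellman_residual(w - eps * e, x)) / (2 * eps)
                         for e in np.eye(2)])
        w -= 0.01 * bellman_residual(w, x) * grad / (1 + grad @ grad)

print(w)  # learned critic weights; residual should now be near zero
```

The normalized update step mirrors the normalized-gradient tuning laws common in this literature, which keep the weight update bounded when the regressor grows large.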
Identifiers
pubmed: 31880567
doi: 10.1109/TNNLS.2019.2957287
Publication types
Journal Article
Languages
eng
Citation subsets
IM