Output-Feedback Global Consensus of Discrete-Time Multiagent Systems Subject to Input Saturation via Q-Learning Method.


Journal

IEEE Transactions on Cybernetics
ISSN: 2168-2275
Abbreviated title: IEEE Trans Cybern
Country: United States
NLM ID: 101609393

Publication information

Publication date:
Mar 2022
History:
pubmed: 13 May 2020
medline: 13 May 2020
entrez: 13 May 2020
Status: ppublish

Abstract

This article proposes a Q-learning (QL)-based algorithm for the global consensus of saturated discrete-time multiagent systems (DTMASs) via output feedback. According to low-gain feedback (LGF) theory, the control inputs of saturated DTMASs can avoid saturation by using control policies with LGF matrices, which in most previous works were computed from a modified algebraic Riccati equation (MARE) that requires knowledge of the system dynamics. In this article, we first find a lower bound on the real parts of the nonzero eigenvalues of the Laplacian matrices of directed network topologies. We then define a test control input and propose a Q-function to derive a QL Bellman equation, which plays an essential part in the QL algorithm. Subsequently, unlike in previous works, the output-feedback gain (OFG) matrix is obtained through a finite number of iterations of the QL algorithm, without requiring knowledge of the agent dynamics or the network topologies of the saturated DTMASs. Furthermore, the saturated DTMASs achieve global consensus rather than the semiglobal consensus of previous results. Finally, the effectiveness of the QL algorithm is confirmed via two simulations.
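The model-free idea described in the abstract can be illustrated on a toy single-agent LQR problem: a quadratic Q-function is fitted to one-step data via the QL Bellman equation, and the feedback gain is read off from the fitted matrix without ever using the dynamics matrices in the learning update. This is a minimal sketch, not the paper's algorithm; the system matrices `A`, `B`, the weights `Q`, `R`, and the sample counts are all hypothetical, and simulation of the plant merely stands in for measured data.

```python
import numpy as np

# Hypothetical 2-state, 1-input plant. The learner never uses A or B directly;
# they only generate the (x, u, x_next, cost) data that a real system would provide.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state cost weight (assumed)
R = np.array([[1.0]])  # input cost weight (assumed)
n, m = 2, 1
rng = np.random.default_rng(0)

def features(z):
    # Quadratic basis for z = [x; u]: monomials z_i * z_j with i <= j.
    return np.outer(z, z)[np.triu_indices(n + m)]

def unpack(theta):
    # Rebuild the symmetric Q-function matrix H from the parameter vector.
    # Off-diagonal parameters equal 2*H_ij, which the symmetrization halves.
    H = np.zeros((n + m, n + m))
    H[np.triu_indices(n + m)] = theta
    return (H + H.T) / 2

K = np.zeros((m, n))  # initial policy u = K x; A is stable, so K = 0 stabilizes
for _ in range(10):
    Phi, r = [], []
    for _ in range(60):
        x = rng.standard_normal(n)
        u = K @ x + rng.standard_normal(m)  # exploratory test input
        x1 = A @ x + B @ u                  # one-step data from the "plant"
        z = np.concatenate([x, u])
        z1 = np.concatenate([x1, K @ x1])
        # QL Bellman equation: Q(z) - Q(z1) = stage cost, linear in theta.
        Phi.append(features(z) - features(z1))
        r.append(x @ Q @ x + u @ R @ u)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(r), rcond=None)
    H = unpack(theta)
    # Greedy policy improvement from the fitted Q-function partition.
    K = -np.linalg.solve(H[n:, n:], H[n:, :n])

# Model-based check only: the optimal gain from Riccati iteration.
P = np.eye(n)
for _ in range(500):
    G = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ G)
K_opt = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
```

Because the dynamics here are deterministic and the true Q-function is exactly quadratic, the least-squares fit is exact and the learned gain `K` matches the Riccati gain `K_opt` after a few iterations; the paper's contribution is extending this model-free principle to output feedback, input saturation, and directed multiagent topologies.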

Identifiers

pubmed: 32396125
doi: 10.1109/TCYB.2020.2987385

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Pagination

1661-1670

Authors

MeSH classifications