An iterative Q-learning based global consensus of discrete-time saturated multi-agent systems.
Abstract
This paper addresses the consensus problem of discrete-time multi-agent systems (DTMASs) that are subject to input saturation and whose agent dynamics are unknown. In previous works, DTMASs with input saturation could achieve semiglobal consensus via the low gain feedback (LGF) method, but computing the LGF matrices by solving the modified algebraic Riccati equation requires knowledge of the agent dynamics. In this paper, motivated by reinforcement learning, we propose a model-free Q-learning algorithm that obtains the LGF matrices with which the DTMASs achieve global consensus. First, we define a Q-learning function and derive a Q-learning Bellman equation whose solution yields the LGF matrix. We then develop an iterative Q-learning algorithm that obtains the LGF matrix without any knowledge of the agent dynamics, and we show that the DTMASs achieve global consensus. Finally, simulation results validate the effectiveness of the Q-learning algorithm and illustrate how the agents' initial states and the input saturation limit affect the convergence rate.
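The abstract describes an iterative scheme that evaluates a quadratic Q-function from data and improves the feedback gain without access to the system matrices. The following is a minimal sketch of that style of Q-learning policy iteration for a single discrete-time linear system x_{k+1} = A x_k + B u_k. The matrices A and B, the cost weights, and the initial stabilizing gain are all illustrative assumptions; the paper's multi-agent consensus protocol, input saturation, and low-gain (modified Riccati) machinery are omitted here.

```python
import numpy as np

# Model-free Q-learning (policy iteration) for x_{k+1} = A x_k + B u_k.
# A and B are used ONLY to generate data; the learner never reads them.
# All numeric values below are hypothetical, for illustration.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # hidden plant matrix (assumption)
B = np.array([[0.0], [0.1]])             # hidden input matrix (assumption)
Qc = np.eye(2)                           # state cost weight
Rc = np.eye(1)                           # input cost weight
n, m = 2, 1

def phi(z):
    """Quadratic features of z = [x; u]: upper-triangular entries of z z^T."""
    return np.outer(z, z)[np.triu_indices(n + m)]

def theta_to_H(theta):
    """Rebuild the symmetric Q-function kernel H so that z^T H z = phi(z)@theta."""
    U = np.zeros((n + m, n + m))
    U[np.triu_indices(n + m)] = theta
    return (U + U.T) / 2.0

# Exploratory data: random (x, u) pairs and the resulting next states.
rng = np.random.default_rng(0)
N = 400
X = rng.uniform(-1, 1, (N, n))
U = rng.uniform(-1, 1, (N, m))
Xn = X @ A.T + U @ B.T
r = np.einsum('ki,ij,kj->k', X, Qc, X) + np.einsum('ki,ij,kj->k', U, Rc, U)

K = np.array([[-1.0, -2.0]])  # initial stabilizing gain (assumption)
Phi = np.array([phi(np.concatenate([x, u])) for x, u in zip(X, U)])
for _ in range(15):
    # Policy evaluation: Q(x_k,u_k) = r_k + Q(x_{k+1}, K x_{k+1})
    # becomes a linear least-squares problem in the feature weights theta.
    Un = Xn @ K.T
    Phin = np.array([phi(np.concatenate([x, u])) for x, u in zip(Xn, Un)])
    theta, *_ = np.linalg.lstsq(Phi - Phin, r, rcond=None)
    H = theta_to_H(theta)
    # Policy improvement: minimize z^T H z over u given x.
    K_new = -np.linalg.solve(H[n:, n:], H[n:, :n])
    if np.max(np.abs(K_new - K)) < 1e-9:
        K = K_new
        break
    K = K_new

print("learned gain K =", K)
```

In the paper's setting, the learned gain would play the role of an LGF matrix kept small enough that the inputs remain within the saturation bounds; this sketch only shows the model-free gain computation that replaces solving a Riccati equation with known dynamics.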
| Reference Key | long2019anchaos (use this key to autocite in SciMatic Manuscript Manager or Thesis Manager) |
|---|---|
| Authors | Long, Mingkang; Su, Housheng; Wang, Xiaoling; Jiang, Guo-Ping; Wang, Xiaofan |
| Journal | Chaos (Woodbury, N.Y.) |
| Year | 2019 |
| DOI | 10.1063/1.5120106 |
| URL | |
| Keywords | |