Reinforcement-Learning Based Dynamic Transmission Range Adjustment in Medium Access Control for Underwater Wireless Sensor Networks
2020
Abstract
In this paper, we propose a reinforcement learning (RL)-based Medium Access Control (MAC) protocol with dynamic transmission range control (TRC). The protocol provides an adaptive, multi-hop, energy-efficient communication solution for underwater sensor networks. It features a contention-based TRC scheme with reactive multi-hop transmission, and it adapts to changing network conditions through an RL-based learning algorithm. The combination of TRC and RL strikes a balance between energy consumption and network performance. Moreover, the proposed adaptive relay-selection mechanism provides better network utilization and energy efficiency over time compared to existing solutions. Using straightforward ALOHA-based channel access together with "helper relays" (intermediate nodes), the protocol obtains substantial energy savings, achieving up to 90% of the theoretical best-possible energy efficiency. In addition, the protocol shows significant advantages in MAC-layer performance metrics such as network throughput and end-to-end delay.
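The abstract does not specify the learning rule or the set of transmission ranges; the following is a minimal illustrative sketch of how an RL agent could pick among discrete transmission ranges with a tabular, epsilon-greedy Q-update. All names, ranges, rewards, and hyperparameters below are assumptions for illustration, not the paper's implementation.

```python
import random

# Hypothetical candidate transmission ranges in meters (assumed, not from the paper).
RANGES = [100, 200, 300, 400]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # assumed learning hyperparameters

# One tabular Q-value per candidate range (stateless, bandit-style simplification).
q = {r: 0.0 for r in RANGES}

def select_range():
    """Epsilon-greedy: usually exploit the best-known range, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(RANGES)
    return max(q, key=q.get)

def update(chosen, reward):
    """One-step Q-learning update toward reward plus discounted best value."""
    best_next = max(q.values())
    q[chosen] += ALPHA * (reward + GAMMA * best_next - q[chosen])

def reward_model(r, delivered):
    """Toy reward: delivery bonus minus an energy cost growing with range squared."""
    energy_cost = (r / 100.0) ** 2
    return (10.0 if delivered else 0.0) - energy_cost

# Simulated episodes: longer ranges deliver more reliably but cost more energy,
# so the agent should learn an intermediate trade-off.
random.seed(42)
for _ in range(2000):
    r = select_range()
    delivered = random.random() < min(1.0, r / 350.0)
    update(r, reward_model(r, delivered))

best = max(q, key=q.get)
print(best)
```

The energy/delivery trade-off here stands in for the balance between energy consumption and network performance that the abstract attributes to combining TRC with RL; the paper's actual state, action, and reward definitions may differ substantially.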
| Field | Value |
|---|---|
| Reference Key | dugaev2020electronicsreinforcement-learning |
| Authors | Dmitrii Dugaev; Zheng Peng; Yu Luo; Lina Pu |
| Journal | Electronics |
| Year | 2020 |
| DOI | 10.3390/electronics9101727 |