Two-Stage Learning for Uplink Channel Estimation in One-Bit Massive MIMO
Abstract
We develop a two-stage deep learning pipeline to estimate the uplink massive
MIMO channel with one-bit ADCs. The pipeline is composed of two separate
generative deep learning models: the first is a supervised model designed to
compensate for the quantization loss, and the second is an unsupervised model
optimized for denoising. Our results show that the proposed deep learning-based
channel estimator significantly outperforms state-of-the-art channel estimators
for one-bit quantized massive MIMO systems, providing a 5-10 dB gain in channel
estimation error. Furthermore, it requires a reasonable number of pilots, on
the order of 20 per coherence time interval.
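The two-stage idea from the abstract can be illustrated with a minimal numpy sketch. Everything below is an assumption for illustration: the antenna count `M`, pilot length `T` (20, matching the abstract's pilot budget), noise level, and the simple least-squares and shrinkage stand-ins used in place of the paper's supervised and unsupervised generative models.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_bit_adc(y):
    # One-bit quantization: keep only the sign of the real and
    # imaginary components, as a one-bit ADC pair would.
    return np.sign(y.real) + 1j * np.sign(y.imag)

# Hypothetical dimensions: M base-station antennas, T pilot symbols
# per coherence interval (the abstract cites ~20 pilots).
M, T = 64, 20
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
pilots = (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))

y = np.outer(h, pilots) + noise  # unquantized receive signal
r = one_bit_adc(y)               # what the receiver actually observes

# Stage 1 (stand-in for the supervised dequantization model):
# least-squares correlation of the quantized signal with the pilots.
h_stage1 = (r @ pilots.conj()) / (pilots @ pilots.conj())

# Stage 2 (stand-in for the unsupervised denoising model):
# a fixed shrinkage toward zero; the paper instead trains a
# generative denoiser on the stage-1 output.
h_stage2 = 0.8 * h_stage1
```

The sketch only mirrors the pipeline's structure (quantize, dequantize, denoise); the actual gains reported in the abstract come from the trained generative models, not from these linear stand-ins.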
| Field | Value |
|---|---|
| Reference Key | andrews2019twostage |
| Authors | Eren Balevi; Jeffrey G. Andrews |
| Journal | arXiv |
| Year | 2019 |
| DOI | Not found |
| URL | |
| Keywords | |