Distributed and Parallel ADMM for Structured Nonconvex Optimization Problem

Abstract
Nonconvex optimization problems have recently attracted significant attention, yet both efficient algorithms and solid theory for them remain limited. The difficulty is even more pronounced for structured large-scale problems arising in many real-world applications. This article proposes an application-driven algorithmic framework for structured nonconvex optimization that uses distributed and parallel techniques to jointly handle high-dimensional model parameters and distributed training data. Theoretical convergence of the algorithm is established under moderate assumptions. The proposed method is applied to popular multitask applications, including a multitask reinforcement learning problem, and its promising performance demonstrates that the framework is both effective and efficient.
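The abstract describes a distributed framework built on ADMM with a global consensus structure: each worker holds a local copy of the parameters and a shard of the data, and only an averaged global variable is communicated. The paper's algorithm targets nonconvex objectives; as a minimal illustrative sketch of the underlying consensus ADMM template (shown here on a convex least-squares instance for clarity, with all function and variable names being assumptions, not the authors' notation):

```python
import numpy as np

def consensus_admm(A_blocks, b_blocks, rho=1.0, iters=1000):
    """Global-consensus ADMM for min_x sum_i 0.5*||A_i x - b_i||^2.
    Worker i holds (A_i, b_i) plus local copies x_i, u_i; only the
    average z is shared each round. Illustrative sketch only."""
    n = A_blocks[0].shape[1]
    N = len(A_blocks)
    x = [np.zeros(n) for _ in range(N)]   # local primal variables
    u = [np.zeros(n) for _ in range(N)]   # scaled dual variables
    z = np.zeros(n)                       # global consensus variable
    # Pre-factor each worker's local system (A_i^T A_i + rho I) once.
    facs = [np.linalg.cholesky(A.T @ A + rho * np.eye(n)) for A in A_blocks]
    for _ in range(iters):
        # Local x-updates: embarrassingly parallel across workers.
        for i, (A, b) in enumerate(zip(A_blocks, b_blocks)):
            rhs = A.T @ b + rho * (z - u[i])
            y = np.linalg.solve(facs[i], rhs)      # forward substitution
            x[i] = np.linalg.solve(facs[i].T, y)   # backward substitution
        # Global averaging: the only communication step per round.
        z = np.mean([x[i] + u[i] for i in range(N)], axis=0)
        # Dual ascent on the consensus constraint x_i = z.
        for i in range(N):
            u[i] += x[i] - z
    return z
```

On the convex instance above the iterates converge to the stacked least-squares solution; the paper's contribution is convergence guarantees when the local objectives are nonconvex, where such a plain recursion has no classical guarantee.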
Reference Key
wang2019distributedieee
Authors Wang, Xiangfeng; Yan, Junchi; Jin, Bo; Li, Wenhao
Journal IEEE Transactions on Cybernetics
Year 2019
DOI
10.1109/TCYB.2019.2950337