A Semismooth Newton Algorithm for High-Dimensional Nonconvex Sparse Learning

The smoothly clipped absolute deviation (SCAD) and minimax concave penalty (MCP) penalized regression models are two important and widely used nonconvex sparse learning tools. They can handle variable selection and parameter estimation simultaneously and thus have potential applications in various fields, such as mining biological data in high-throughput biomedical studies. Theoretically, these two models enjoy the oracle property even in high-dimensional settings, where the number of predictors p may be much larger than the number of observations n. Numerically, however, it is quite challenging to develop fast and stable algorithms due to their nonconvexity and nonsmoothness. In this article, we develop a fast algorithm for SCAD- and MCP-penalized learning problems. First, we show that the global minimizers of both models are roots of nonsmooth equations. Then, a semismooth Newton (SSN) algorithm is employed to solve these equations. We prove that the SSN algorithm converges locally and superlinearly to the Karush-Kuhn-Tucker (KKT) points. A computational complexity analysis shows that the cost of the SSN algorithm per iteration is O(np). Combined with a warm-start technique, the SSN algorithm is efficient and accurate. Simulation studies and a real-data example suggest that our SSN algorithm, with solution accuracy comparable to that of the coordinate descent (CD) and difference-of-convex (DC) proximal Newton algorithms, is more computationally efficient.
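The abstract outlines a three-step pipeline: recast the stationarity (KKT) conditions of the penalized problem as a nonsmooth equation, solve that equation with a semismooth Newton iteration, and trace the solution path with warm starts. Below is a minimal sketch of that idea for MCP-penalized least squares, not the authors' implementation: the function names (mcp_prox, mcp_prox_deriv, ssn_mcp), the step size tau, the prox-gradient safeguard, and gamma = 3 are all illustrative assumptions, and the dense Newton solve here costs O(p^3) per iteration rather than the O(np) the paper attains by exploiting the sparsity structure of the iterates.

```python
import numpy as np

def mcp_prox(u, tau, lam, gamma):
    """Proximal operator of tau * MCP(.; lam, gamma); requires gamma > tau."""
    a = np.abs(u)
    return np.where(a <= tau * lam, 0.0,
           np.where(a <= gamma * lam,
                    np.sign(u) * (a - tau * lam) / (1.0 - tau / gamma), u))

def mcp_prox_deriv(u, tau, lam, gamma):
    """One element of the (diagonal) generalized Jacobian of mcp_prox at u."""
    a = np.abs(u)
    return np.where(a <= tau * lam, 0.0,
           np.where(a <= gamma * lam, 1.0 / (1.0 - tau / gamma), 1.0))

def ssn_mcp(X, y, lam, gamma=3.0, beta0=None, max_iter=100, tol=1e-8):
    """Semismooth Newton on F(beta) = beta - prox(beta - tau * grad) = 0,
    whose roots are KKT points of MCP-penalized least squares."""
    n, p = X.shape
    H = X.T @ X / n                             # Hessian of the quadratic loss
    tau = min(1.0, 1.0 / np.linalg.norm(H, 2))  # step size; keeps tau < gamma
    beta = np.zeros(p) if beta0 is None else beta0.copy()

    def residual(b):
        u = b - tau * (X.T @ (X @ b - y) / n)   # gradient step
        return b - mcp_prox(u, tau, lam, gamma), u

    F, u = residual(beta)
    for _ in range(max_iter):
        if np.linalg.norm(F) < tol:
            break
        d = mcp_prox_deriv(u, tau, lam, gamma)
        # An element of Clarke's generalized Jacobian of F: I - D (I - tau H)
        J = np.eye(p) - d[:, None] * (np.eye(p) - tau * H)
        try:
            cand = beta + np.linalg.solve(J, -F)
        except np.linalg.LinAlgError:
            cand = mcp_prox(u, tau, lam, gamma)
        Fc, uc = residual(cand)
        if np.linalg.norm(Fc) < np.linalg.norm(F):
            beta, F, u = cand, Fc, uc                # accept the Newton step
        else:
            beta = mcp_prox(u, tau, lam, gamma)      # safeguard: SSN is only
            F, u = residual(beta)                    # locally convergent
    return beta
```

The warm start the abstract mentions would then amount to feeding each solution in as the initial point for the next, smaller lambda, e.g. (on synthetic data, purely for illustration):

```python
# Illustrative warm-start path over a decreasing lambda grid (n << p):
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 400))
y = X[:, :5] @ np.full(5, 2.0) + 0.1 * rng.standard_normal(100)
beta = None
for lam in np.geomspace(0.4, 0.04, 10):
    beta = ssn_mcp(X, y, lam, beta0=beta)
```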
Reference Key: shi2019aieee
Authors: Shi, Yueyong; Huang, Jian; Jiao, Yuling; Yang, Qinglong
Journal: IEEE Transactions on Neural Networks and Learning Systems
Year: 2019
DOI: 10.1109/TNNLS.2019.2935001
