TY - JOUR
T1 - One-Bit Compressed Sensing via One-Shot Hard Thresholding
AU - Shen, Jie
N1 - Publisher Copyright:
© 2020 Proceedings of Machine Learning Research. All rights reserved.
PY - 2020
Y1 - 2020
AB - This paper concerns the problem of 1-bit compressed sensing, where the goal is to estimate a sparse signal from a few of its binary measurements. We study a non-convex sparsity-constrained program and present a novel and concise analysis that moves away from the widely used notion of Gaussian width. We show that with high probability a simple algorithm is guaranteed to produce an accurate approximation to the normalized signal of interest under the l2-metric. On top of that, we establish an ensemble of new results that address norm estimation, support recovery, and model misspecification. On the computational side, it is shown that the non-convex program can be solved via one-step hard thresholding, which is dramatically efficient in terms of time complexity and memory footprint. On the statistical side, it is shown that our estimator enjoys a near-optimal error rate under standard conditions. The theoretical results are further substantiated by numerical experiments.
UR - http://www.scopus.com/inward/record.url?scp=85162726412&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85162726412&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85162726412
VL - 124
SP - 510
EP - 519
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 36th Conference on Uncertainty in Artificial Intelligence, UAI 2020
Y2 - 3 August 2020 through 6 August 2020
ER -