TY - JOUR
T1 - Robust Matrix Completion from Quantized Observations
AU - Shen, Jie
AU - Awasthi, Pranjal
AU - Li, Ping
N1 - Publisher Copyright:
© 2019 by the author(s).
PY - 2019
Y1 - 2019
N2 - 1-bit matrix completion refers to the problem of recovering a real-valued low-rank matrix from a small fraction of its sign patterns. In many real-world applications, however, the observations are not only highly quantized, but also grossly corrupted. In this work, we consider the noisy statistical model where each observed entry can be flipped with some probability after quantization. We propose a simple maximum likelihood estimator which is shown to be robust to the sign flipping noise. In particular, we prove an upper bound on the statistical error, showing that with overwhelming probability (Formula Presented) samples are sufficient for accurate recovery, where r and d are the rank and dimension of the underlying matrix respectively, and (Formula Presented) is a random variable that parameterizes the sign flipping noise. Furthermore, a lower bound is established showing that the obtained sample complexity is near-optimal for prevalent statistical models. Finally, we substantiate our theoretical findings with a comprehensive study on synthetic and realistic data sets, and demonstrate the state-of-the-art performance.
UR - https://www.scopus.com/pages/publications/105020713379
M3 - Conference article
AN - SCOPUS:105020713379
VL - 89
SP - 397
EP - 407
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019
Y2 - 16 April 2019 through 18 April 2019
ER -