TY - JOUR
T1 - Sparse Bayesian dictionary learning with a Gaussian hierarchical model
AU - Yang, Linxiao
AU - Fang, Jun
AU - Cheng, Hong
AU - Li, Hongbin
N1 - Publisher Copyright:
© 2016 Elsevier B.V.
PY - 2017/1/1
AB - We consider a dictionary learning problem aimed at designing a dictionary such that signals admit a sparse or approximately sparse representation over the learnt dictionary. The problem finds a variety of applications, including image denoising and feature extraction. In this paper, we propose a new hierarchical Bayesian model for dictionary learning, in which a Gaussian-inverse Gamma hierarchical prior is used to promote the sparsity of the representation. Suitable non-informative priors are also placed on the dictionary and the noise variance so that they can be reliably estimated from the data. Based on the hierarchical model, a variational Bayesian method and a Gibbs sampling method are developed for Bayesian inference. The proposed methods have the advantage that they do not require knowledge of the noise variance a priori. Numerical results show that the proposed methods learn the dictionary more accurately than existing methods, particularly when only a limited number of training signals is available.
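For orientation, a minimal sketch of the kind of Gaussian-inverse Gamma hierarchy the abstract refers to (the symbols $x_i$, $\gamma_i$, $a$, $b$, and $M$ are illustrative notation, not taken from the paper itself): each sparse coefficient is given a zero-mean Gaussian prior whose variance is itself inverse-Gamma distributed,

$$x_i \mid \gamma_i \sim \mathcal{N}(0, \gamma_i), \qquad \gamma_i \sim \operatorname{Inv\text{-}Gamma}(a, b), \qquad i = 1, \dots, M.$$

Marginalizing out $\gamma_i$ yields a heavy-tailed Student-t density on $x_i$, which concentrates probability mass near zero and thereby promotes sparse representations; this two-level construction is the standard sparsity mechanism in sparse Bayesian learning.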
KW - Dictionary learning
KW - Gaussian-inverse Gamma prior
KW - Gibbs sampling
KW - Variational Bayesian
UR - http://www.scopus.com/inward/record.url?scp=84978152329&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84978152329&partnerID=8YFLogxK
DO - 10.1016/j.sigpro.2016.06.016
M3 - Article
AN - SCOPUS:84978152329
SN - 0165-1684
VL - 130
SP - 93
EP - 104
JF - Signal Processing
ER -