Sparse Bayesian dictionary learning with a Gaussian hierarchical model

Linxiao Yang, Jun Fang, Hong Cheng, Hongbin Li

Research output: Contribution to journal › Article › peer-review

24 Scopus citations

Abstract

We consider a dictionary learning problem aimed at designing a dictionary such that the signals admit a sparse or approximately sparse representation over the learned dictionary. The problem finds a variety of applications, including image denoising and feature extraction. In this paper, we propose a new hierarchical Bayesian model for dictionary learning, in which a Gaussian-inverse Gamma hierarchical prior is used to promote the sparsity of the representation. Suitable non-informative priors are also placed on the dictionary and the noise variance so that they can be reliably estimated from the data. Based on the hierarchical model, a variational Bayesian method and a Gibbs sampling method are developed for Bayesian inference. The proposed methods have the advantage that they do not require a priori knowledge of the noise variance. Numerical results show that the proposed methods learn the dictionary more accurately than existing methods, particularly when the number of training signals is limited.
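To illustrate the sparsity mechanism the abstract describes, the sketch below implements a minimal sparse Bayesian learning (SBL) coefficient update under a Gaussian-inverse Gamma hierarchy: each coefficient gets a zero-mean Gaussian prior whose variance carries an inverse-Gamma hyperprior, and simple EM-style updates drive the precisions of irrelevant coefficients to large values. This is only an assumed, simplified single-signal sketch of the prior's effect, not the paper's full dictionary-learning algorithm (which also updates the dictionary and infers the noise variance); the function name, hyperparameters `a`, `b`, and the fixed noise precision `beta` are illustrative choices.

```python
import numpy as np

def sbl_coefficients(D, y, n_iter=50, a=1e-6, b=1e-6, beta=100.0):
    """Estimate sparse coefficients x for y ~ D @ x + noise via a
    Gaussian-inverse Gamma hierarchy (sparse Bayesian learning).

    Prior: x_i ~ N(0, gamma_i) with an inverse-Gamma(a, b) hyperprior
    on each variance gamma_i (equivalently, Gamma on the precision
    alpha_i = 1/gamma_i). Marginalizing the hierarchy yields a
    heavy-tailed, sparsity-promoting prior. For this sketch the noise
    precision `beta` is assumed known, unlike in the paper.
    """
    m, n = D.shape
    alpha = np.ones(n)  # coefficient precisions 1/gamma_i
    for _ in range(n_iter):
        # Posterior over x given current hyperparameters is Gaussian
        Sigma = np.linalg.inv(beta * D.T @ D + np.diag(alpha))
        mu = beta * Sigma @ D.T @ y
        # EM update of precisions; a, b come from the Gamma hyperprior
        alpha = (1.0 + 2.0 * a) / (mu**2 + np.diag(Sigma) + 2.0 * b)
    return mu

# Toy example: recover a 2-sparse signal in a random dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[3, 17]] = [2.0, -1.5]
y = D @ x_true + 0.01 * rng.standard_normal(20)
x_hat = sbl_coefficients(D, y)
```

As the iterations proceed, the precisions `alpha_i` of coefficients not needed to explain `y` grow large, shrinking the corresponding posterior means toward zero; this is the sparsity-promoting behavior of the Gaussian-inverse Gamma prior that the paper's hierarchical model builds on.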

Original language: English
Pages (from-to): 93-104
Number of pages: 12
Journal: Signal Processing
Volume: 130
DOIs
State: Published - 1 Jan 2017

Keywords

  • Dictionary learning
  • Gaussian-inverse Gamma prior
  • Gibbs sampling
  • Variational Bayesian
