Sparse-Representation-Based Classification with Structure-Preserving Dimension Reduction

Jin Xu, Guang Yang, Yafeng Yin, Hong Man, Haibo He

Research output: Contribution to journal › Article › peer-review


Abstract

Sparse-representation-based classification (SRC), which classifies data based on the sparse reconstruction error, has emerged as a new technique in pattern recognition. However, the computational cost of sparse coding is high in real applications. In this paper, various dimension reduction methods are studied in the context of SRC to improve classification accuracy and reduce computational cost. A feature extraction method, principal component analysis, and two feature selection methods, the Laplacian score and the Pearson correlation coefficient, are applied in the data preparation step to preserve the structure of the data in the lower-dimensional space. The classification performance of SRC with structure-preserving dimension reduction (SRC-SPDR) is compared to that of classical classifiers such as k-nearest neighbors and support vector machines. Experimental tests on the UCI and face data sets demonstrate that SRC-SPDR is effective at relatively low computational cost.
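
To make the described pipeline concrete, below is a minimal sketch of SRC with a PCA preprocessing step. It assumes scikit-learn's PCA for feature extraction and Lasso as the l1 sparse-coding solver; the function name src_classify, the parameter values, and the choice of solver are illustrative and not taken from the paper. Each test sample is coded sparsely over the column-normalized dictionary of reduced training samples, and the predicted class is the one whose coefficients yield the smallest reconstruction residual.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

def src_classify(X_train, y_train, X_test, n_components=30, alpha=0.01):
    """Classify test samples by class-wise sparse reconstruction error
    after projecting all data onto a lower-dimensional PCA subspace.
    (Illustrative sketch; parameters and solver are assumptions.)"""
    # Structure-preserving dimension reduction step (PCA used here as the
    # feature-extraction example). n_components must not exceed
    # min(n_train_samples, n_features).
    pca = PCA(n_components=n_components).fit(X_train)
    A = pca.transform(X_train).T   # dictionary: columns are training samples
    B = pca.transform(X_test).T    # columns are test samples

    # Column-normalize the dictionary, as is standard for SRC.
    A = A / (np.linalg.norm(A, axis=0, keepdims=True) + 1e-12)

    classes = np.unique(y_train)
    y_pred = []
    for j in range(B.shape[1]):
        b = B[:, j]
        # l1-regularized sparse coding of the test sample over the dictionary.
        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        coder.fit(A, b)
        x = coder.coef_
        # Class-wise residuals: keep only the coefficients of one class at a time.
        residuals = []
        for c in classes:
            x_c = np.where(y_train == c, x, 0.0)
            residuals.append(np.linalg.norm(b - A @ x_c))
        y_pred.append(classes[np.argmin(residuals)])
    return np.array(y_pred)
```

A feature-selection variant (using the Laplacian score or the Pearson correlation coefficient, as in the paper) would replace the PCA projection with a selection of feature indices, leaving the sparse-coding and residual steps unchanged.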

Original language: English
Pages (from-to): 608-621
Number of pages: 14
Journal: Cognitive Computation
Volume: 6
Issue number: 3
DOIs
State: Published - Sep 2014

Keywords

  • Classification
  • Dimension reduction
  • Feature extraction
  • Feature selection
  • Sparse representation (coding)
  • Structure preserving
