Adaptive DCT coding of images using entropy-constrained trellis coded quantization

N. Farvardin, X. Ran, C. C. Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

In this paper we develop an adaptive DCT-based image coding scheme in which a combination of a perceptually motivated image model, entropy-constrained trellis coded quantization (ECTCQ), and perceptual error weighting is employed to obtain good subjective performance at low bit rates. The model is used to decompose the image into (i) strong-edge, (ii) slow intensity-variation, and (iii) texture components. The perceptually important strong edges are encoded essentially losslessly. The remaining components are encoded using an adaptive DCT in which the transform coefficients are quantized by ECTCQ. The contrast sensitivity of the human visual system is used for perceptual weighting of the transform coefficients. Simulation results are provided and some comparisons are made.
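The perceptual-weighting step described above can be sketched as follows. This is a minimal illustration, not the paper's method: it uses a Mannos-Sakrison-style contrast-sensitivity curve (an assumption; the paper does not specify its weighting function here) and plain weighted uniform scalar quantization as a stand-in for the ECTCQ stage. The block size, step size, and `csf_weights` parameterization are all illustrative.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal type-II DCT basis; rows are basis vectors."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

def csf_weights(n=8):
    """Illustrative contrast-sensitivity weights over the n x n DCT grid
    (Mannos-Sakrison-style curve; a stand-in, not the paper's model)."""
    u = np.arange(n, dtype=float)
    f = np.hypot(u[:, None], u[None, :])               # radial frequency index
    w = 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)
    w[0, 0] = w.max()                                  # keep DC at full weight
    return w / w.max()                                 # normalize to (0, 1]

def code_block(block, step=16.0):
    """Weight DCT coefficients by the CSF, quantize uniformly (coarser where
    the eye is less sensitive), then dequantize and invert the transform."""
    C = dct_matrix(block.shape[0])
    W = csf_weights(block.shape[0])
    coeff = C @ block @ C.T                            # forward 2-D DCT
    q = np.round(coeff * W / step)                     # effective step = step / W
    rec = C.T @ ((q * step) / W) @ C                   # dequantize + inverse DCT
    return q, rec
```

Because the effective step size for each coefficient is `step / W`, coefficients with low contrast-sensitivity weight are quantized more coarsely, concentrating the bit budget on the perceptually important frequencies.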

Original language: English
Title of host publication: Image and Multidimensional Signal Processing
Pages: V-397-V-400
State: Published - 1993
Event: IEEE International Conference on Acoustics, Speech and Signal Processing, Part 5 (of 5) - Minneapolis, MN, USA
Duration: 27 Apr 1993 - 30 Apr 1993

Publication series

Name: Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing
Volume: 5
ISSN (Print): 0736-7791

Conference

Conference: IEEE International Conference on Acoustics, Speech and Signal Processing, Part 5 (of 5)
City: Minneapolis, MN, USA
Period: 27/04/93 - 30/04/93
