Discriminative modeling by boosting on multilevel aggregates

Jason J. Corso

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

10 Scopus citations

Abstract

This paper presents a new approach to discriminative modeling for classification and labeling. Our method, called Boosting on Multilevel Aggregates (BMA), adds a new class of hierarchical, adaptive features into boosting-based discriminative models. Each pixel is linked with a set of aggregate regions in a multilevel coarsening of the image. The coarsening is adaptive, rapid and stable. The multilevel aggregates present additional information-rich features on which to boost, such as shape properties, neighborhood context, hierarchical characteristics, and photometric statistics. We implement and test our approach on three two-class problems: classifying documents in office scenes, and classifying buildings and horses in natural images. In all three cases, the majority of the features selected during boosting, about 75%, are our proposed BMA features rather than patch-based features. This large percentage demonstrates the discriminative power of the multilevel aggregate features over conventional patch-based features. Our quantitative performance measures show that the proposed approach gives superior results to the state of the art in all three applications.
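As a rough illustration of the idea described in the abstract (not the paper's actual algorithm), the sketch below augments per-pixel intensity features with statistics pooled over coarser levels and lets boosting select among both feature pools. Everything here is a simplifying assumption: a uniform grid coarsening stands in for the paper's adaptive coarsening, a mean-intensity statistic stands in for the richer shape/context/photometric aggregate features, and sklearn's AdaBoost with decision stumps stands in for the boosting stage.

# Minimal, hypothetical sketch of boosting on multilevel aggregate features.
# The uniform 2**level grid coarsening and the toy image are assumptions made
# for illustration; they are not the method from the paper.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def coarsen(img, level):
    """Average-pool by a factor of 2**level to mimic one aggregation level."""
    f = 2 ** level
    h, w = (img.shape[0] // f) * f, (img.shape[1] // f) * f
    return img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def pixel_features(img, levels=(1, 2, 3)):
    """Per-pixel feature vectors: raw intensity plus multilevel aggregate means."""
    feats = [img]  # the patch-like, pixel-level feature
    for lv in levels:
        coarse = coarsen(img, lv)
        # Broadcast each aggregate's statistic back to its member pixels.
        up = np.kron(coarse, np.ones((2 ** lv, 2 ** lv)))
        feats.append(up[: img.shape[0], : img.shape[1]])
    return np.stack(feats, axis=-1).reshape(-1, len(levels) + 1)

# Toy two-class problem: label pixels of a noisy bright square as foreground.
rng = np.random.default_rng(0)
img = rng.random((64, 64)) * 0.4
img[16:48, 16:48] += 0.4
y = np.zeros((64, 64), dtype=int)
y[16:48, 16:48] = 1

X = pixel_features(img)
clf = AdaBoostClassifier(n_estimators=50)  # default base learner: depth-1 stumps
clf.fit(X, y.ravel())
print("train accuracy:", clf.score(X, y.ravel()))
# Relative weight boosting assigned to each feature pool
# (columns: pixel intensity, aggregates at levels 1-3).
print("feature importances:", clf.feature_importances_)

In a setup like this one would expect the coarser aggregate columns to absorb much of the boosting weight, loosely echoing the paper's observation that about 75% of the selected features were aggregate-based rather than patch-based.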

Original language: English
Title of host publication: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
State: Published - 2008
Event: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR - Anchorage, AK, United States
Duration: 23 Jun 2008 – 28 Jun 2008

Publication series

Name: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR

Conference

Conference: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR
Country/Territory: United States
City: Anchorage, AK
Period: 23/06/08 – 28/06/08
