DCPE co-training: Co-training based on diversity of class probability estimation

Jin Xu, Haibo He, Hong Man

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Co-training is a semi-supervised learning technique that recovers labels for unlabeled data using two base learners. Standard co-training approaches augment the training data with the most confidently recovered unlabeled examples. In this paper, we investigate co-training approaches with a focus on the diversity issue and propose the diversity of class probability estimation (DCPE) co-training approach. The key idea of DCPE co-training is to use the diversity of class probability estimation between the two base learners to choose which recovered unlabeled data to add. The results are compared with classic co-training, tri-training, and self-training methods. Our experimental study on the UCI benchmark data sets shows that DCPE co-training is robust and efficient for classification.
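The abstract's selection step can be sketched in code. The following is a minimal, hedged illustration of one plausible reading of the DCPE idea, not the paper's exact algorithm: given class-probability estimates from the two base learners over a pool of unlabeled examples, select examples on which the learners agree about the label but whose probability estimates diverge the most, so the added examples contribute diversity. The function name, the L1 diversity measure, and the toy probabilities are all assumptions for illustration.

```python
def dcpe_select(probs_a, probs_b, k=2):
    """Hypothetical DCPE-style selection: return up to k (index, label) pairs
    for unlabeled examples where the two learners agree on the predicted label
    but their class-probability estimates differ the most (L1 distance)."""
    candidates = []
    for i, (pa, pb) in enumerate(zip(probs_a, probs_b)):
        label_a = max(range(len(pa)), key=pa.__getitem__)
        label_b = max(range(len(pb)), key=pb.__getitem__)
        if label_a == label_b:  # both base learners agree on the label
            # diversity = L1 distance between the two probability estimates
            diversity = sum(abs(x - y) for x, y in zip(pa, pb))
            candidates.append((diversity, i, label_a))
    candidates.sort(reverse=True)  # most diverse candidates first
    return [(i, lab) for _, i, lab in candidates[:k]]

# Toy pool of 4 unlabeled examples with 2 classes (made-up numbers).
probs_a = [[0.9, 0.1], [0.6, 0.4], [0.2, 0.8], [0.55, 0.45]]
probs_b = [[0.6, 0.4], [0.4, 0.6], [0.3, 0.7], [0.50, 0.50]]
selected = dcpe_select(probs_a, probs_b, k=2)
print(selected)  # example 1 is dropped: the learners disagree on its label
```

In a full co-training loop, each selected example would be added (with its agreed label) to the other learner's training set before retraining; the actual confidence and diversity criteria of DCPE co-training are defined in the paper itself.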

Original language: English
Title of host publication: 2010 IEEE World Congress on Computational Intelligence, WCCI 2010 - 2010 International Joint Conference on Neural Networks, IJCNN 2010
State: Published - 2010
Event: 2010 6th IEEE World Congress on Computational Intelligence, WCCI 2010 - 2010 International Joint Conference on Neural Networks, IJCNN 2010 - Barcelona, Spain
Duration: 18 Jul 2010 - 23 Jul 2010

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks

