TY - CONF
T1 - Rethinking Curriculum Learning with Incremental Labels and Adaptive Compensation
AU - Ganesh, Madan Ravi
AU - Corso, Jason J.
N1 - Publisher Copyright:
© 2020. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.
PY - 2020
Y1 - 2020
AB - Like humans, deep networks have been shown to learn better when samples are organized and introduced in a meaningful order or curriculum [37]. Conventional curriculum learning schemes introduce samples in order of difficulty, which forces models to begin learning from a subset of the available data and adds the external overhead of evaluating sample difficulty. In this work, we propose Learning with Incremental Labels and Adaptive Compensation (LILAC), a two-phase method that incrementally increases the number of unique output labels, rather than the difficulty of samples, while consistently using the entire dataset throughout training. In the first phase, Incremental Label Introduction, we partition the data into mutually exclusive subsets: one containing a subset of the ground-truth labels and another containing the remaining data attached to a pseudo-label. Throughout training, we recursively reveal unseen ground-truth labels in fixed increments until all labels are known to the model. In the second phase, Adaptive Compensation, we optimize the loss function using altered target vectors for previously misclassified samples; these target vectors are modified to a smoother distribution to help models learn better. Evaluated on three standard image benchmarks, CIFAR-10, CIFAR-100, and STL-10, LILAC outperforms all comparable baselines. Further, we detail the importance of pacing the introduction of new labels to a model, as well as the impact of using a smooth target vector.
UR - http://www.scopus.com/inward/record.url?scp=85129538764&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85129538764&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85129538764
T2 - 31st British Machine Vision Conference, BMVC 2020
Y2 - 7 September 2020 through 10 September 2020
ER -