Towards Efficient Contrastive PAC Learning

Research output: Contribution to journal › Article › peer-review

Abstract

We study contrastive learning under the PAC learning framework. While a series of recent works have established statistical results for learning under contrastive loss, based either on the VC dimension or on Rademacher complexity, their algorithms are either inherently inefficient or do not imply PAC guarantees. In this paper, we consider contrastive learning of the fundamental concept class of linear representations. Surprisingly, even in such a basic setting, the existence of efficient PAC learners is largely open. We first show that the problem of contrastive PAC learning of linear representations is intractable to solve in general. We then show that it can be relaxed to a semi-definite program when the distance between contrastive samples is measured by the ℓ2-norm. We further establish generalization guarantees based on Rademacher complexity and connect them to PAC guarantees under certain contrastive large-margin conditions. To the best of our knowledge, this yields the first efficient PAC learning algorithm for contrastive learning.
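To illustrate the relaxation idea, here is a minimal sketch in Python (using cvxpy) of how contrastive learning of a linear representation f(x) = Wx with ℓ2 distances can be posed as a semi-definite program. It assumes a hinge-style contrastive loss over triplets (x, x⁺, x⁻) and a trace-bounded feasible set; the function name contrastive_sdp, the trace_bound parameter, and the triplet format are illustrative assumptions, not the paper's exact formulation. The key step is lifting M = WᵀW, which makes each squared ℓ2 distance ‖W(x − x′)‖² = (x − x′)ᵀM(x − x′) linear in M.

    # Sketch of the SDP relaxation for contrastive learning of linear
    # representations with l2 distances. The loss and trace bound are
    # illustrative assumptions, not the paper's exact formulation.
    import numpy as np
    import cvxpy as cp

    def contrastive_sdp(triplets, d, trace_bound=1.0):
        """triplets: list of (x, x_pos, x_neg) numpy arrays of dimension d."""
        M = cp.Variable((d, d), PSD=True)  # lifted variable M = W^T W
        losses = []
        for x, xp, xn in triplets:
            dp, dn = x - xp, x - xn
            # squared distances ||W dp||^2 = dp^T M dp are linear in M,
            # so this hinge surrogate is convex (piecewise linear) in M
            losses.append(cp.pos(1.0 + cp.quad_form(dp, M) - cp.quad_form(dn, M)))
        prob = cp.Problem(cp.Minimize(sum(losses)), [cp.trace(M) <= trace_bound])
        prob.solve()
        # recover a linear map W with W^T W = M via eigendecomposition
        vals, vecs = np.linalg.eigh(M.value)
        vals = np.clip(vals, 0.0, None)
        W = np.diag(np.sqrt(vals)) @ vecs.T
        return W

Because the objective and constraint are convex in the positive semi-definite variable M, the problem is solvable in polynomial time; a representation map W is then recovered from M by a matrix square root, with the rank of M determining the dimension of the learned representation.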

Original language: English
Journal: Transactions on Machine Learning Research
Volume: 2025-July
State: Published - 2025
