Contrastive Learning
2178 papers with code • 1 benchmark • 11 datasets
Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.
It has proven effective in a range of computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. The learned representations can also serve as features for downstream tasks such as classification and clustering.
(Image credit: Schroff et al. 2015)
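To make the objective concrete, below is a minimal sketch of a generic InfoNCE-style contrastive loss in PyTorch. It illustrates only the general idea (row i of the two embedding matrices forms a positive pair, all other rows act as negatives); the function name, temperature, and shapes are placeholders, not any particular paper's implementation.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    # L2-normalize so dot products are cosine similarities.
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    # Pairwise similarity matrix: entry (i, j) compares instance i with instance j.
    logits = z_a @ z_b.t() / temperature
    # The positive for row i sits on the diagonal, so the target "class" is i.
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)

# Example: 8 instances embedded in 128 dimensions by two views or encoders.
loss = info_nce_loss(torch.randn(8, 128), torch.randn(8, 128))
```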
Libraries
Use these libraries to find Contrastive Learning models and implementations.
Most implemented papers
A Simple Framework for Contrastive Learning of Visual Representations
This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
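As a rough illustration only, the sketch below shows an NT-Xent-style loss in the spirit of SimCLR, assuming `z1` and `z2` are projection-head outputs for two random augmentations of the same batch; the temperature and tensor shapes are assumptions for the example, not the paper's reference code.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D) stacked views
    sim = z @ z.t() / temperature                        # (2N, 2N) similarities
    # Mask self-similarity so an example is never its own negative.
    sim.fill_diagonal_(float("-inf"))
    # The positive of example i is its other augmented view (offset by N).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))
```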
Momentum Contrast for Unsupervised Visual Representation Learning
This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning.
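The momentum-encoder idea can be sketched as follows. This is a simplified illustration rather than the official MoCo code: the key encoder is an exponential moving average of the query encoder, and new keys overwrite the oldest entries of a FIFO queue of negatives (assuming the queue length is a multiple of the batch size).

```python
import torch
import torch.nn as nn

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m: float = 0.999):
    # EMA update keeps the key encoder slowly evolving, so queued keys stay consistent.
    for q_param, k_param in zip(query_encoder.parameters(), key_encoder.parameters()):
        k_param.data.mul_(m).add_(q_param.data, alpha=1.0 - m)

@torch.no_grad()
def enqueue(queue: torch.Tensor, ptr: int, keys: torch.Tensor) -> int:
    # FIFO: overwrite the oldest entries with the newest keys, return the new pointer.
    k = keys.size(0)
    queue[ptr:ptr + k] = keys
    return (ptr + k) % queue.size(0)

# Toy usage: the key encoder starts as a copy of the query encoder.
q_enc, k_enc = nn.Linear(32, 16), nn.Linear(32, 16)
k_enc.load_state_dict(q_enc.state_dict())
momentum_update(q_enc, k_enc)
```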
Improved Baselines with Momentum Contrastive Learning
Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR.
Supervised Contrastive Learning
Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state-of-the-art performance in the unsupervised training of deep image models.
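A hedged sketch of a supervised contrastive objective is shown below: with labels available, every same-class example in the batch is treated as a positive for the anchor rather than only an augmented view. Function and variable names are illustrative, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F

def sup_con_loss(features: torch.Tensor, labels: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature
    # Positives share the anchor's label; the anchor itself is excluded.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    self_mask = torch.eye(z.size(0), device=z.device)
    pos_mask = pos_mask - self_mask
    # Log-probability of each candidate, with the anchor removed from the denominator.
    logits = sim - 1e9 * self_mask
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average the log-probability over each anchor's positives.
    mean_log_prob_pos = (pos_mask * log_prob).sum(1) / pos_mask.sum(1).clamp(min=1)
    return -mean_log_prob_pos.mean()

loss = sup_con_loss(torch.randn(8, 128), torch.randint(0, 3, (8,)))
```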
SimCSE: Simple Contrastive Learning of Sentence Embeddings
This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings.
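The unsupervised SimCSE recipe can be sketched as below: the same sentences are encoded twice with dropout active, so the two dropout-perturbed outputs form a positive pair under an in-batch contrastive loss. The toy encoder here is only a stand-in assumption for a real sentence encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def simcse_step(encoder, batch_inputs: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    # Two forward passes in training mode apply different dropout masks,
    # giving two views of the same sentences; other sentences are negatives.
    z1 = F.normalize(encoder(batch_inputs), dim=1)
    z2 = F.normalize(encoder(batch_inputs), dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

# Toy stand-in encoder with dropout enabled (train mode).
encoder = nn.Sequential(nn.Linear(64, 32), nn.Dropout(0.1)).train()
loss = simcse_step(encoder, torch.randn(8, 64))
```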
Unsupervised Learning of Visual Features by Contrasting Cluster Assignments
In addition, we also propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without increasing the memory or compute requirements much.
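A minimal sketch of a multi-crop-style augmentation, assuming torchvision is available: a few full-resolution "global" crops plus several smaller "local" crops of the same image, all treated as views of one instance. The crop sizes and scale ranges are illustrative, not the paper's exact settings.

```python
from PIL import Image
from torchvision import transforms

global_crop = transforms.RandomResizedCrop(224, scale=(0.4, 1.0))
local_crop = transforms.RandomResizedCrop(96, scale=(0.05, 0.4))

def multi_crop(image, n_global: int = 2, n_local: int = 6):
    # Every crop of the same image counts as a positive view of that instance.
    return ([global_crop(image) for _ in range(n_global)]
            + [local_crop(image) for _ in range(n_local)])

crops = multi_crop(Image.new("RGB", (480, 360)))
```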
Unsupervised Feature Learning via Non-Parametric Instance-level Discrimination
Neural net classifiers trained on data with annotated class labels can also capture apparent visual similarity among categories without being directed to do so.
Contrastive Learning for Unpaired Image-to-Image Translation
Furthermore, we draw negatives from within the input image itself, rather than from the rest of the dataset.
Contrastive Multiview Coding
We analyze key properties of the approach that make it work, finding that the contrastive loss outperforms a popular alternative based on cross-view prediction, and that the more views we learn from, the better the resulting representation captures underlying scene semantics.
Sigmoid Loss for Language Image Pre-Training
We propose a simple pairwise Sigmoid loss for Language-Image Pre-training (SigLIP).
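A rough sketch of a pairwise sigmoid objective in the spirit of SigLIP: each image-text pair receives an independent binary label (+1 if matched, -1 otherwise), so no batch-wide softmax normalization is needed. The fixed temperature and bias values here are placeholders for quantities the paper treats as learnable.

```python
import torch
import torch.nn.functional as F

def sigmoid_pair_loss(img: torch.Tensor, txt: torch.Tensor, t: float = 10.0, b: float = -10.0) -> torch.Tensor:
    img = F.normalize(img, dim=1)
    txt = F.normalize(txt, dim=1)
    logits = img @ txt.t() * t + b                      # (N, N) pairwise logits
    # +1 on the diagonal (matched image-text pairs), -1 everywhere else.
    labels = 2 * torch.eye(img.size(0), device=img.device) - 1
    # -log sigmoid(label * logit), averaged over all N*N pairs.
    return -F.logsigmoid(labels * logits).mean()

loss = sigmoid_pair_loss(torch.randn(8, 256), torch.randn(8, 256))
```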