Representation Learning
3690 papers with code • 5 benchmarks • 9 datasets
Representation Learning is a process in machine learning where algorithms extract meaningful patterns from raw data to create representations that are easier to understand and process. These representations can be designed for interpretability, reveal hidden features, or be used for transfer learning. They are valuable across many fundamental machine learning tasks like image classification and retrieval.
Deep neural networks can be viewed as representation learning models: they encode the input and project it into a different subspace, and the resulting representations are then typically passed to a linear classifier, for instance to perform image classification.
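A minimal sketch of this pattern, assuming a PyTorch setup with a pretrained torchvision backbone (the "linear probe" recipe; all shapes and the 10-class head are illustrative):

```python
import torch
import torch.nn as nn
from torchvision import models

# Frozen pretrained encoder provides the representation;
# only the linear head on top is trained.
encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = nn.Identity()          # expose the 512-d penultimate features
for p in encoder.parameters():
    p.requires_grad = False         # freeze the representation
encoder.eval()

head = nn.Linear(512, 10)           # linear classifier (10 classes assumed)
optimizer = torch.optim.SGD(head.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)        # stand-in batch
labels = torch.randint(0, 10, (8,))

with torch.no_grad():
    features = encoder(images)              # extract representations
loss = criterion(head(features), labels)
loss.backward()
optimizer.step()
```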
Representation learning can be divided into:
- Supervised representation learning: learning representations on task A using annotated data, then using them to solve task B
- Unsupervised representation learning: learning representations on a task in an unsupervised way (label-free data). These representations are then used to address downstream tasks, reducing the need for annotated data when learning new tasks. Powerful models like GPT and BERT leverage unsupervised representation learning to tackle language tasks.
More recently, self-supervised learning (SSL) has become one of the main drivers of unsupervised representation learning in fields like computer vision and NLP.
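As a minimal sketch of the contrastive flavour of SSL (a simplified SimCLR-like InfoNCE loss between two augmented views; names and dimensions are illustrative):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss between two batches of embeddings
    of the same images under different augmentations. Matching rows are
    positives; all other rows serve as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # pairwise cosine similarities
    targets = torch.arange(z1.size(0))        # positives on the diagonal
    return F.cross_entropy(logits, targets)

# z1, z2 would come from encoding two augmented views of the same batch
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = info_nce_loss(z1, z2)
```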
Here are some additional readings to go deeper on the task:
- Representation Learning: A Review and New Perspectives - Bengio et al. (2012)
- A Few Words on Representation Learning - Thalles Silva
(Image credit: Visualizing and Understanding Convolutional Networks)
Libraries
Use these libraries to find Representation Learning models and implementations
Subtasks
- Disentanglement
- Graph Representation Learning
- Sentence Embeddings
- Network Embedding
- Sentence Embedding
- Knowledge Graph Embeddings
- Document Embedding
- Learning Word Embeddings
- Multilingual Word Embeddings
- Learning Semantic Representations
- Feature Upsampling
- Learning Network Representations
- Sentence Embeddings For Biomedical Texts
- Part-based Representation Learning
- Learning Representation Of Multi-View Data
- Learning Representation On Graph
Most implemented papers
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications.
Neural Discrete Representation Learning
Learning useful representations without supervision remains a key challenge in machine learning.
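The core mechanism in this paper (VQ-VAE) is a discrete bottleneck: encoder outputs are snapped to their nearest vector in a learned codebook, with gradients passed straight through the lookup. A rough sketch, with codebook size and dimensions illustrative:

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Sketch of VQ-VAE's discrete bottleneck: snap each encoder output
    to its nearest codebook vector; gradients bypass the
    non-differentiable lookup via the straight-through estimator."""
    def __init__(self, num_codes=512, dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.beta = beta

    def forward(self, z_e):                      # z_e: (batch, dim)
        dists = torch.cdist(z_e, self.codebook.weight)  # (batch, num_codes)
        idx = dists.argmin(dim=1)
        z_q = self.codebook(idx)
        # codebook + commitment terms from the paper's objective
        vq_loss = (z_q - z_e.detach()).pow(2).mean() \
                + self.beta * (z_e - z_q.detach()).pow(2).mean()
        z_q = z_e + (z_q - z_e).detach()         # straight-through estimator
        return z_q, vq_loss, idx

quantizer = VectorQuantizer()
z_q, vq_loss, idx = quantizer(torch.randn(8, 64))
```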
Momentum Contrast for Unsupervised Visual Representation Learning
This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning.
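MoCo's dictionary rests on two tricks: a slowly moving key encoder and a queue that decouples dictionary size from batch size. A sketch of both, with stand-in encoders and rates chosen for illustration:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    """The key encoder trails the query encoder as an exponential
    moving average, keeping keys in the queue consistent over time."""
    for q_param, k_param in zip(query_encoder.parameters(),
                                key_encoder.parameters()):
        k_param.mul_(m).add_(q_param, alpha=1 - m)

@torch.no_grad()
def enqueue(queue, keys):
    """Append the newest keys and drop the oldest, so the dictionary
    can be much larger than a single mini-batch."""
    return torch.cat([queue, keys], dim=0)[keys.size(0):]

q_enc, k_enc = nn.Linear(32, 16), nn.Linear(32, 16)  # stand-in encoders
k_enc.load_state_dict(q_enc.state_dict())            # keys start from queries
momentum_update(q_enc, k_enc)
queue = enqueue(torch.randn(1024, 16), k_enc(torch.randn(64, 32)))
```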
Deep High-Resolution Representation Learning for Visual Recognition
High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection.
Deep High-Resolution Representation Learning for Human Pose Estimation
We start from a high-resolution subnetwork as the first stage, gradually add high-to-low resolution subnetworks one by one to form more stages, and connect the multi-resolution subnetworks in parallel.
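A toy sketch of the parallel multi-resolution idea with just two streams (real HRNet uses more branches and repeated fusion stages; channel counts here are arbitrary):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchFusion(nn.Module):
    """Keep a high- and a low-resolution stream in parallel and
    exchange information between them at each block."""
    def __init__(self, ch_hi=32, ch_lo=64):
        super().__init__()
        self.hi = nn.Conv2d(ch_hi, ch_hi, 3, padding=1)
        self.lo = nn.Conv2d(ch_lo, ch_lo, 3, padding=1)
        self.hi_to_lo = nn.Conv2d(ch_hi, ch_lo, 3, stride=2, padding=1)
        self.lo_to_hi = nn.Conv2d(ch_lo, ch_hi, 1)

    def forward(self, x_hi, x_lo):
        h, l = self.hi(x_hi), self.lo(x_lo)
        # cross-resolution fusion: downsample hi into lo, upsample lo into hi
        l = l + self.hi_to_lo(x_hi)
        h = h + F.interpolate(self.lo_to_hi(x_lo), size=h.shape[-2:],
                              mode="bilinear", align_corners=False)
        return h, l

block = TwoBranchFusion()
h, l = block(torch.randn(1, 32, 64, 64), torch.randn(1, 64, 32, 32))
```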
High-Resolution Representations for Labeling Pixels and Regions
The proposed approach achieves superior results to existing single-model networks on COCO object detection.
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner.
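InfoGAN adds an auxiliary head Q that tries to recover the latent code c from the generated sample, which lower-bounds the mutual information I(c; G(z, c)). A sketch of that extra objective for a categorical code (all modules and shapes are stand-ins):

```python
import torch
import torch.nn.functional as F

def info_loss(q_logits, c_indices):
    """Categorical-code case: Q's cross-entropy in recovering c is, up to
    a constant, the negative of the mutual-information lower bound that
    InfoGAN adds to the usual GAN objective."""
    return F.cross_entropy(q_logits, c_indices)

c = torch.randint(0, 10, (16,))          # sampled categorical latent code
q_logits = torch.randn(16, 10)           # stand-in for Q(G(z, c))
loss = info_loss(q_logits, c)
```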
Improved Baselines with Momentum Contrastive Learning
Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR.
Domain-Adversarial Training of Neural Networks
Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains.
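The mechanism behind this is DANN's gradient reversal layer: an identity on the forward pass whose gradient is flipped on the backward pass, so the feature extractor learns to confuse the domain classifier. A minimal sketch:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity forward, flipped (and scaled)
    gradient backward, pushing features toward domain invariance."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Features flow unchanged into the domain classifier, but gradients
# flowing back into the encoder are reversed.
features = torch.randn(8, 128, requires_grad=True)
domain_score = grad_reverse(features).sum()   # stand-in for a classifier
domain_score.backward()                       # features.grad is negated
```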
Bootstrap your own latent: A new approach to self-supervised Learning
From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.
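The target network here is an exponential moving average of the online network, and its output receives no gradient. A sketch of BYOL's regression loss on normalized vectors (inputs are stand-ins for the two views' outputs):

```python
import torch
import torch.nn.functional as F

def byol_loss(online_pred, target_proj):
    """The online network's prediction of one view should match the
    stop-gradient target network's projection of the other view;
    equivalent to a cosine-similarity loss on normalized vectors."""
    p = F.normalize(online_pred, dim=1)
    z = F.normalize(target_proj.detach(), dim=1)   # no gradient to target
    return (2 - 2 * (p * z).sum(dim=1)).mean()

loss = byol_loss(torch.randn(32, 256), torch.randn(32, 256))
```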