Disentanglement
576 papers with code • 3 benchmarks • 12 datasets
Disentanglement is an approach to solving a diverse set of tasks in a data-efficient manner by disentangling (or isolating) the underlying factors of variation of a problem into disjoint parts of its learned representation. This can be done, for example, by focusing on the "transformation" properties of the world (the main problem).
Libraries
Use these libraries to find Disentanglement models and implementations
Datasets
Most implemented papers
A Style-Based Generator Architecture for Generative Adversarial Networks
We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature.
Disentangling by Factorising
We define and address the problem of unsupervised learning of disentangled representations on data generated from independent factors of variation.
Adversarial Latent Autoencoders
We design two autoencoders: one based on an MLP encoder, and another based on a StyleGAN generator, which we call StyleALAE.
Isolating Sources of Disentanglement in Variational Autoencoders
We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables.
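The decomposition this paper refers to (often called the β-TC-VAE decomposition) splits the aggregate KL term of the evidence lower bound into three parts; in roughly the paper's notation (symbols here are a sketch of the standard form, not copied from the paper):

```latex
\mathbb{E}_{p(x)}\!\left[\mathrm{KL}\big(q(z \mid x)\,\|\,p(z)\big)\right]
= \underbrace{I_q(x; z)}_{\text{index-code MI}}
+ \underbrace{\mathrm{KL}\Big(q(z)\,\Big\|\,\prod_j q(z_j)\Big)}_{\text{total correlation}}
+ \underbrace{\sum_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big)}_{\text{dimension-wise KL}}
```

The middle term is the total correlation between latent variables; penalizing it specifically is what encourages statistically independent (disentangled) latent dimensions.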
Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms.
Sigmoid Loss for Language Image Pre-Training
We propose a simple pairwise Sigmoid loss for Language-Image Pre-training (SigLIP).
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do.
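The β-VAE objective augments the standard VAE evidence lower bound with a weight β > 1 on the KL term, trading reconstruction fidelity for a more factorised latent code. A minimal NumPy sketch of that loss, using squared error as a stand-in for the reconstruction log-likelihood (function names here are illustrative, not from the paper's code):

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # Reconstruction term (squared error as a proxy for -log p(x|z))
    # plus the beta-weighted KL regularizer, averaged over the batch.
    recon = np.sum((x - x_recon) ** 2, axis=-1)
    return np.mean(recon + beta * gaussian_kl(mu, logvar))
```

With β = 1 this reduces to the usual (negative) ELBO; larger β values pressure each latent dimension toward the unit-Gaussian prior, which the paper argues encourages disentangled factors.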
Learning concise representations for regression by evolving networks of trees
We propose and study a method for learning interpretable representations for the task of regression.
LEO: Generative Latent Image Animator for Human Video Synthesis
Our key idea is to represent motion as a sequence of flow maps in the generation process, which inherently isolate motion from appearance.
On the Transfer of Inductive Bias from Simulation to the Real World: a New Disentanglement Dataset
Learning meaningful and compact representations with disentangled semantic aspects is considered to be of key importance in representation learning.