Unconditional Video Generation
9 papers with code • 1 benchmark • 1 dataset
Most implemented papers
Video Diffusion Models
Generating temporally coherent, high-fidelity video is an important milestone in generative modeling research.
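As a rough illustration of the underlying idea (not the paper's exact model), the sketch below applies a standard noise-prediction diffusion training step to a video tensor of shape (batch, channels, frames, height, width); the `model`, noise schedule, and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

# Illustrative only: a generic denoising-diffusion (epsilon-prediction) training
# step on a 5-D video tensor. The model and schedule are assumptions, not the
# paper's exact configuration.

T = 1000
betas = torch.linspace(1e-4, 0.02, T)               # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)  # cumulative product of (1 - beta_t)

def diffusion_loss(model, video):
    """One noise-prediction training step for video of shape (B, C, F, H, W)."""
    b = video.shape[0]
    t = torch.randint(0, T, (b,), device=video.device)
    noise = torch.randn_like(video)
    a_bar = alphas_cumprod.to(video.device)[t].view(b, 1, 1, 1, 1)
    noisy = a_bar.sqrt() * video + (1 - a_bar).sqrt() * noise
    pred = model(noisy, t)          # e.g. a factorized space-time U-Net
    return F.mse_loss(pred, noise)
```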
MOSO: Decomposing MOtion, Scene and Object for Video Prediction
Experimental results demonstrate that our method achieves new state-of-the-art performance on five challenging benchmarks for video prediction and unconditional video generation: BAIR, RoboNet, KTH, KITTI and UCF101.
Latent Neural Differential Equations for Video Generation
Generative Adversarial Networks have recently shown promise for video generation, building on the success of image generation while also addressing a new challenge: time.
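A minimal sketch of the general latent-ODE recipe, assuming a hypothetical `LatentODEFunc` and frame decoder: evolve a latent state with explicit Euler steps and decode each intermediate state into a frame. It does not reproduce the paper's architecture or training objective.

```python
import torch
import torch.nn as nn

class LatentODEFunc(nn.Module):
    """Hypothetical network giving dz/dt as a function of the latent and time."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.Tanh(), nn.Linear(256, dim))

    def forward(self, z, t):
        t_col = t.expand(z.shape[0], 1)
        return self.net(torch.cat([z, t_col], dim=-1))

def generate_video(ode_func, decoder, z0, num_frames=16, dt=1.0 / 16):
    """Euler-integrate the latent ODE and decode one frame per step."""
    frames, z = [], z0
    t = torch.zeros(1, device=z0.device)
    for _ in range(num_frames):
        frames.append(decoder(z))       # map latent -> image
        z = z + dt * ode_func(z, t)     # explicit Euler step
        t = t + dt
    return torch.stack(frames, dim=1)   # (batch, frames, C, H, W)
```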
CelebV-HQ: A Large-Scale Video Facial Attributes Dataset
Large-scale datasets have played an indispensable role in the recent success of face generation/editing and have significantly facilitated advances in emerging research fields.
MotionVideoGAN: A Novel Video Generator Based on the Motion Space Learned from Image Pairs
We present MotionVideoGAN, a novel video generator synthesizing videos based on the motion space learned by pre-trained image pair generators.
Video Diffusion Models with Local-Global Context Guidance
We construct a local-global context guidance strategy that captures multi-perceptual embeddings of the past video fragment to improve the consistency of future prediction.
DDLP: Unsupervised Object-Centric Video Prediction with Deep Dynamic Latent Particles
We propose a new object-centric video prediction algorithm based on the deep latent particle (DLP) representation.
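A hedged sketch of an object-centric rollout in the spirit of latent particles: each frame is summarized by K particles (a 2-D position plus an appearance feature), a hypothetical `ParticleDynamics` module predicts the next particle states, and a decoder renders frames. The actual DDLP model is considerably more involved.

```python
import torch
import torch.nn as nn

class ParticleDynamics(nn.Module):
    """Placeholder dynamics module over a set of K particles per frame."""
    def __init__(self, feat_dim=16, hidden=128):
        super().__init__()
        per_particle = 2 + feat_dim               # (x, y) position + appearance feature
        self.net = nn.Sequential(
            nn.Linear(per_particle, hidden), nn.ReLU(),
            nn.Linear(hidden, per_particle),
        )

    def forward(self, particles):                 # (batch, K, 2 + feat_dim)
        return particles + self.net(particles)    # residual next-step prediction

def rollout(dynamics, decoder, particles, steps=8):
    """Autoregressively predict particle states and decode future frames."""
    frames = []
    for _ in range(steps):
        particles = dynamics(particles)
        frames.append(decoder(particles))         # particles -> image
    return torch.stack(frames, dim=1)
```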
StyleInV: A Temporal Style Modulated Inversion Network for Unconditional Video Generation
In this paper, we introduce a novel motion generator design that uses a learning-based GAN inversion network.
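A minimal sketch of the overall recipe, assuming a hypothetical `MotionLatentNetwork` and a frozen, pre-trained image generator: the motion network maps the first frame's latent and a per-frame time code to a trajectory of latents, and the frozen generator decodes each latent into a frame. This does not reproduce StyleInV's temporal style modulation.

```python
import torch
import torch.nn as nn

class MotionLatentNetwork(nn.Module):
    """Placeholder motion network predicting a per-frame latent offset."""
    def __init__(self, latent_dim=512, time_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + time_dim, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, latent_dim),
        )

    def forward(self, w0, time_code):
        # Offset the initial latent according to the current time code
        return w0 + self.net(torch.cat([w0, time_code], dim=-1))

def synthesize_video(frozen_generator, motion_net, w0, time_codes):
    """Decode one frame per timestep with the frozen image generator."""
    frames = []
    for t in range(time_codes.shape[1]):
        w_t = motion_net(w0, time_codes[:, t])
        frames.append(frozen_generator(w_t))      # latent -> image
    return torch.stack(frames, dim=1)             # (batch, T, C, H, W)
```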
StyleCineGAN: Landscape Cinemagraph Generation using a Pre-trained StyleGAN
We propose a method that can generate cinemagraphs automatically from a still landscape image using a pre-trained StyleGAN.