Video Style Transfer
14 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
A Style-Aware Content Loss for Real-time HD Style Transfer
These and our qualitative results, ranging from small image patches to megapixel stylized images and videos, show that our approach better captures the subtle ways in which a style affects content.
ReCoNet: Real-time Coherent Video Style Transfer Network
Image style transfer models based on convolutional neural networks usually suffer from high temporal inconsistency when applied to videos.
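The usual remedy, which ReCoNet also builds on, is a temporal consistency loss: warp the previous stylized frame to the current one with optical flow and penalize differences outside occluded regions. Below is a minimal PyTorch sketch of such a loss; the `warp` helper, the tensor layouts, and the occlusion mask are illustrative assumptions, not ReCoNet's exact formulation.

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Warp a frame (B, C, H, W) backward by an optical flow field (B, 2, H, W).

    Hypothetical helper: flow is assumed to hold per-pixel (dx, dy) in pixels.
    """
    b, _, h, w = frame.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device, dtype=frame.dtype),
        torch.arange(w, device=frame.device, dtype=frame.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]  # displaced x coordinates
    grid_y = ys.unsqueeze(0) + flow[:, 1]  # displaced y coordinates
    # Normalize coordinates to [-1, 1] as grid_sample expects.
    grid = torch.stack(
        (2.0 * grid_x / (w - 1) - 1.0, 2.0 * grid_y / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(frame, grid, align_corners=True)

def temporal_loss(stylized_t, stylized_prev, flow, occlusion_mask):
    """Penalize changes between the current stylized frame and the
    flow-warped previous stylized frame, ignoring occluded pixels."""
    warped_prev = warp(stylized_prev, flow)
    return ((occlusion_mask * (stylized_t - warped_prev)) ** 2).mean()
```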
AdaAttN: Revisit Attention Mechanism in Arbitrary Neural Style Transfer
Finally, the content features are normalized so that they exhibit the same local feature statistics as the calculated per-point weighted style feature statistics.
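In code, those per-point statistics come from an attention map between content and style features: the attention-weighted mean and standard deviation of the style values are computed for each content location, and the instance-normalized content feature is rescaled by them. A minimal PyTorch sketch follows; it simplifies AdaAttN by attending on the raw features rather than the paper's learned, normalized embeddings.

```python
import torch
import torch.nn.functional as F

def ada_attn(content_feat, style_feat, eps=1e-5):
    """Simplified sketch of AdaAttN-style per-point normalization.

    content_feat, style_feat: (B, C, H, W) feature maps from the same encoder layer.
    """
    b, c, h, w = content_feat.shape
    q = content_feat.flatten(2).transpose(1, 2)     # (B, HW_c, C) queries
    k = style_feat.flatten(2)                       # (B, C, HW_s) keys
    v = style_feat.flatten(2).transpose(1, 2)       # (B, HW_s, C) values
    attn = F.softmax(q @ k / (c ** 0.5), dim=-1)    # (B, HW_c, HW_s)

    # Per-point attention-weighted style statistics.
    mean = attn @ v                                 # (B, HW_c, C)
    var = attn @ (v * v) - mean * mean
    std = torch.clamp(var, min=0.0).sqrt()

    # Normalize the content feature, then match the weighted style statistics.
    cf = content_feat.flatten(2).transpose(1, 2)    # (B, HW_c, C)
    normalized = (cf - cf.mean(1, keepdim=True)) / (cf.std(1, keepdim=True) + eps)
    out = std * normalized + mean
    return out.transpose(1, 2).reshape(b, c, h, w)
```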
Layered Neural Atlases for Consistent Video Editing
We present a method that decomposes, or "unwraps", an input video into a set of layered 2D atlases, each providing a unified representation of the appearance of an object (or background) over the video.
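The core machinery can be sketched compactly: one MLP maps each video coordinate (x, y, t) to per-layer atlas coordinates plus an opacity, and an atlas MLP maps atlas coordinates to color, so an edit painted onto an atlas propagates consistently to every frame. The sketch below is a loose simplification (two layers, a single shared atlas network, illustrative sizes), not the paper's architecture.

```python
import torch
import torch.nn as nn

class LayeredAtlasSketch(nn.Module):
    """Toy two-layer (foreground + background) version of the layered-atlas idea."""

    def __init__(self, hidden=256):
        super().__init__()
        # (x, y, t) -> (u_fg, v_fg, u_bg, v_bg, alpha)
        self.mapping = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 5),
        )
        # (u, v) -> RGB, shared across layers for brevity.
        self.atlas = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, xyt):                      # xyt: (N, 3) in [-1, 1]
        m = self.mapping(xyt)
        uv_fg, uv_bg = torch.tanh(m[:, 0:2]), torch.tanh(m[:, 2:4])
        alpha = torch.sigmoid(m[:, 4:5])
        # Alpha-composite the foreground atlas color over the background's.
        return alpha * self.atlas(uv_fg) + (1 - alpha) * self.atlas(uv_bg)
```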
Creative Flow+ Dataset
We present the Creative Flow+ Dataset, the first diverse multi-style artistic video dataset richly labeled with per-pixel optical flow, occlusions, correspondences, segmentation labels, normals, and depth.
Consistent Video Style Transfer via Relaxation and Regularization
In this article, we address the problem by jointly considering the intrinsic properties of stylization and temporal consistency.
CCPL: Contrastive Coherence Preserving Loss for Versatile Style Transfer
CCPL can preserve the coherence of the content source during style transfer without degrading stylization.
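The idea is roughly contrastive: the difference vector between two neighboring locations in the stylized features should match the corresponding difference vector in the content features (the positive pair) rather than any other sampled pair (the negatives). A minimal PyTorch sketch under those assumptions:

```python
import torch
import torch.nn.functional as F

def ccpl_sketch(content_feat, stylized_feat, n_pairs=64, tau=0.07):
    """Toy contrastive coherence preserving loss on (B, C, H, W) encoder features."""
    b, c, h, w = content_feat.shape
    # Random anchors and one of their 8 neighbors (kept in bounds).
    ys = torch.randint(1, h - 1, (n_pairs,))
    xs = torch.randint(1, w - 1, (n_pairs,))
    dy = torch.randint(-1, 2, (n_pairs,))
    dx = torch.randint(-1, 2, (n_pairs,))
    # Avoid the degenerate zero offset (anchor compared with itself).
    dx = torch.where((dy == 0) & (dx == 0), torch.ones_like(dx), dx)

    d_c = content_feat[:, :, ys, xs] - content_feat[:, :, ys + dy, xs + dx]
    d_g = stylized_feat[:, :, ys, xs] - stylized_feat[:, :, ys + dy, xs + dx]

    d_c = F.normalize(d_c.transpose(1, 2).reshape(-1, c), dim=1)  # (B*n, C)
    d_g = F.normalize(d_g.transpose(1, 2).reshape(-1, c), dim=1)

    # InfoNCE: matching difference vectors are positives, the rest negatives.
    logits = d_g @ d_c.t() / tau
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)
```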
VToonify: Controllable High-Resolution Portrait Video Style Transfer
Although a series of successful portrait image toonification models built upon the powerful StyleGAN have been proposed, these image-oriented methods have obvious limitations when applied to videos: a fixed frame size, the requirement of face alignment, missing non-facial details, and temporal inconsistency.
FateZero: Fusing Attentions for Zero-shot Text-based Video Editing
Our method also offers better zero-shot shape-aware editing based on the text-to-video model.
CAP-VSTNet: Content Affinity Preserved Versatile Style Transfer
The loss of content affinity, covering both feature and pixel affinity, is a main cause of artifacts in photorealistic and video style transfer.
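Preserving that affinity can be made explicit as a loss: compute the pairwise similarity matrix of the content features and of the stylized features, then penalize their difference (a pixel-affinity term would do the analogue on downsampled images). A minimal sketch, with cosine similarity as an assumed affinity measure:

```python
import torch
import torch.nn.functional as F

def affinity(feat):
    """Pairwise cosine affinity of a (B, C, H, W) feature map, treating each
    spatial location as one C-dimensional vector. Note the (HW x HW) matrix
    is quadratic in resolution, so features are assumed to be downsampled."""
    f = F.normalize(feat.flatten(2), dim=1)      # (B, C, HW), unit-norm columns
    return f.transpose(1, 2) @ f                 # (B, HW, HW) affinity matrix

def content_affinity_loss(content_feat, stylized_feat):
    """Sketch of a feature-affinity preservation loss: the stylized features
    should keep the pairwise similarity structure of the content features."""
    return (affinity(content_feat) - affinity(stylized_feat)).abs().mean()
```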