Space-time Video Super-resolution
13 papers with code • 2 benchmarks • 0 datasets
Most implemented papers
Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution
Rather than synthesizing missing LR video frames as VFI networks do, we first temporally interpolate the features of the missing LR frames, capturing local temporal contexts with the proposed feature temporal interpolation network.
FISR: Deep Joint Frame Interpolation and Super-Resolution with a Multi-scale Temporal Loss
In this paper, we first propose a joint VFI-SR framework for up-scaling the spatio-temporal resolution of videos from 2K 30 fps to 4K 60 fps.
Efficient Space-time Video Super Resolution using Low-Resolution Flow and Mask Upsampling
Input LR frames are super-resolved using a state-of-the-art Video Super-Resolution method.
Zooming SlowMo: An Efficient One-Stage Framework for Space-Time Video Super-Resolution
A naïve method is to decompose it into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR).
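The two-stage decomposition described above can be sketched on plain arrays. This is a toy illustration, not any paper's method: `interpolate_frames` (linear blending) stands in for a learned VFI network, and `upscale` (nearest-neighbor) stands in for a learned VSR network.

```python
import numpy as np

def interpolate_frames(frames):
    """Naive VFI stage: insert a linearly blended frame between each pair."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        out.append(a)
        out.append((a + b) / 2)
    out.append(frames[-1])
    return out

def upscale(frame, scale=4):
    """Naive VSR stage: nearest-neighbor upsampling as a placeholder."""
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

# LFR + LR input: 4 frames of 32x32
lfr_lr = [np.full((32, 32), i, dtype=np.float32) for i in range(4)]
hfr_lr = interpolate_frames(lfr_lr)      # 7 frames after VFI
hfr_hr = [upscale(f) for f in hfr_lr]    # each frame now 128x128 after VSR
```

One-stage STVSR methods such as Zooming Slow-Mo avoid this sequential design, since running VFI and VSR independently cannot share temporal and spatial cues between the two sub-tasks.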
Temporal Modulation Network for Controllable Space-Time Video Super-Resolution
To well exploit the temporal information, we propose a Locally-temporal Feature Comparison (LFC) module, along with the Bi-directional Deformable ConvLSTM, to extract short-term and long-term motion cues in videos.
VRT: A Video Restoration Transformer
In addition, parallel feature warping is used to further fuse information from neighboring frames.
STDAN: Deformable Attention Network for Space-Time Video Super-Resolution
Second, we put forward a spatial-temporal deformable feature aggregation (STDFA) module, in which spatial and temporal contexts in dynamic video frames are adaptively captured and aggregated to enhance SR reconstruction.
RSTT: Real-time Spatial Temporal Transformer for Space-Time Video Super-Resolution
Space-time video super-resolution (STVSR) is the task of interpolating videos with both Low Frame Rate (LFR) and Low Resolution (LR) to produce High-Frame-Rate (HFR) and High-Resolution (HR) counterparts.
VideoINR: Learning Video Implicit Neural Representation for Continuous Space-Time Super-Resolution
The learned implicit neural representation can be decoded to videos of arbitrary spatial resolution and frame rate.
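A coordinate-based representation like this can be queried at any continuous space-time location, so the same representation decodes to any resolution and frame rate. A minimal sketch, where a hypothetical closed-form function stands in for the learned MLP:

```python
import numpy as np

def implicit_video(x, y, t):
    """Toy implicit representation: maps continuous (x, y, t) in [0, 1]
    to an intensity value. In VideoINR a learned network plays this role."""
    return 0.5 + 0.5 * np.sin(2 * np.pi * (x + y + t))

def decode(width, height, n_frames):
    """Sample the continuous representation on an arbitrary space-time grid."""
    xs = np.linspace(0.0, 1.0, width)
    ys = np.linspace(0.0, 1.0, height)
    ts = np.linspace(0.0, 1.0, n_frames)
    # Broadcast (1, W) against (H, 1) to get an (H, W) frame per time step.
    return np.stack([implicit_video(xs[None, :], ys[:, None], t) for t in ts])

small = decode(16, 16, 4)    # low resolution, low frame rate
large = decode(64, 64, 24)   # same representation, higher res and fps
```

The key point the sketch shows is that resolution and frame rate are sampling choices at decode time, not properties baked into the representation.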
Enhancing Space-time Video Super-resolution via Spatial-temporal Feature Interaction
A popular solution is to first increase the frame rate of the video, then refine the features across different frames, and finally increase the spatial resolution of these features.
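The three steps of this feature-space pipeline can be sketched as follows. The operations are hypothetical stand-ins: blending for temporal upsampling, a temporal mean blend for the cross-frame feature interaction, and nearest-neighbor repetition for spatial upsampling.

```python
import numpy as np

def temporal_upsample(feats):
    """Step 1: raise the frame rate by blending adjacent frame features."""
    mids = (feats[:-1] + feats[1:]) / 2
    out = np.empty((len(feats) + len(mids),) + feats.shape[1:], feats.dtype)
    out[0::2] = feats   # original frames at even indices
    out[1::2] = mids    # interpolated frames in between
    return out

def refine(feats):
    """Step 2: exchange information across frames (here: mean blending)."""
    return 0.5 * feats + 0.5 * feats.mean(axis=0, keepdims=True)

def spatial_upsample(feats, scale=2):
    """Step 3: increase the spatial resolution of every frame's features."""
    return feats.repeat(scale, axis=1).repeat(scale, axis=2)

feats = np.random.rand(4, 8, 8).astype(np.float32)  # 4 LR frame features
out = spatial_upsample(refine(temporal_upsample(feats)))
```

In the actual method, each stand-in is a learned module, and the spatial-temporal feature interaction in step 2 is where the paper's contribution lies.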