Video Semantic Segmentation
323 papers with code • 5 benchmarks • 8 datasets
Libraries
Use these libraries to find Video Semantic Segmentation models and implementations
Most implemented papers
Pyramid Scene Parsing Network
Scene parsing is challenging for unrestricted open vocabulary and diverse scenes.
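The snippet above only states the motivation; the paper's core component is a pyramid pooling module that aggregates context at several grid scales before the final prediction. Below is a minimal PyTorch sketch of that idea; bin sizes and channel counts are illustrative, not the paper's exact configuration.

```python
# Minimal sketch of PSPNet's pyramid pooling idea (illustrative settings, not the paper's exact config).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    def __init__(self, in_channels, bin_sizes=(1, 2, 3, 6)):
        super().__init__()
        out_channels = in_channels // len(bin_sizes)
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(b),  # pool the feature map down to a b x b grid
                nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
            for b in bin_sizes
        ])

    def forward(self, x):
        h, w = x.shape[2:]
        # Upsample each pooled branch back to the input resolution and concatenate with the input
        pooled = [F.interpolate(stage(x), size=(h, w), mode="bilinear", align_corners=False)
                  for stage in self.stages]
        return torch.cat([x] + pooled, dim=1)

feats = torch.randn(1, 2048, 60, 60)
print(PyramidPooling(2048)(feats).shape)  # torch.Size([1, 4096, 60, 60])
```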
Fully Convolutional Networks for Semantic Segmentation
Convolutional networks are powerful visual models that yield hierarchies of features.
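The key idea is to keep the network convolutional end to end: classification layers become 1x1 convolutions and the coarse score map is upsampled to dense per-pixel predictions. The toy model below sketches that idea; the paper adapts pretrained classification nets (e.g. VGG) and fuses predictions from multiple depths, which is omitted here.

```python
# Toy fully convolutional segmentation model: conv-only backbone, 1x1 conv classifier,
# upsampling back to the input resolution (bilinear here; the paper learns the upsampling).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    def __init__(self, num_classes=21):
        super().__init__()
        self.backbone = nn.Sequential(  # downsamples by 8x
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(128, num_classes, kernel_size=1)  # 1x1 conv instead of fully connected layers

    def forward(self, x):
        scores = self.classifier(self.backbone(x))
        return F.interpolate(scores, size=x.shape[2:], mode="bilinear", align_corners=False)

img = torch.randn(1, 3, 224, 224)
print(TinyFCN()(img).shape)  # torch.Size([1, 21, 224, 224])
```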
PReMVOS: Proposal-generation, Refinement and Merging for Video Object Segmentation
We address semi-supervised video object segmentation, the task of automatically generating accurate and consistent pixel masks for objects in a video sequence, given the first-frame ground truth annotations.
Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion
We present the Modular interactive VOS (MiVOS) framework, which decouples interaction-to-mask and mask propagation, allowing for higher generalizability and better performance.
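A heavily simplified skeleton of that decoupling is sketched below: one component turns user interactions on a single frame into a mask, and a separate component propagates masks through the video. Class names and interfaces are hypothetical placeholders, not MiVOS's actual API.

```python
# Hypothetical skeleton of the decoupled interactive VOS pipeline (placeholder logic throughout).
from typing import List
import numpy as np

class InteractionToMask:
    """Turns user scribbles on one frame into a segmentation mask (placeholder logic)."""
    def __call__(self, frame: np.ndarray, scribbles: np.ndarray) -> np.ndarray:
        # A real interaction-to-mask network goes here; we just binarize the scribbles.
        return (scribbles > 0).astype(np.uint8)

class MaskPropagation:
    """Propagates a mask from the interacted frame to every frame (placeholder logic)."""
    def __call__(self, frames: List[np.ndarray], mask: np.ndarray) -> List[np.ndarray]:
        # A real propagation network conditions each frame on the previously predicted mask.
        return [mask.copy() for _ in frames]

def interactive_vos(frames: List[np.ndarray], scribbles: np.ndarray, t: int) -> List[np.ndarray]:
    mask_t = InteractionToMask()(frames[t], scribbles)   # step 1: interaction -> mask on frame t
    return MaskPropagation()(frames, mask_t)             # step 2: propagate mask to all frames

frames = [np.zeros((480, 854, 3), np.uint8) for _ in range(5)]
scribbles = np.zeros((480, 854), np.uint8)
print(len(interactive_vos(frames, scribbles, t=2)))      # 5
```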
Rethinking Self-supervised Correspondence Learning: A Video Frame-level Similarity Perspective
To learn generalizable representations for correspondence at scale, a variety of self-supervised pretext tasks have been proposed to explicitly perform object-level or patch-level similarity learning.
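As a rough illustration of frame-level similarity learning, the sketch below uses a generic InfoNCE-style objective in which two frames sampled from the same video form a positive pair and frames from other videos in the batch act as negatives. This is an assumption-laden stand-in, not the paper's exact loss.

```python
# Generic frame-level similarity objective (illustrative, not the paper's exact formulation):
# pull together embeddings of two frames from the same video, push apart frames from other videos.
import torch
import torch.nn.functional as F

def frame_level_infonce(emb_a, emb_b, temperature=0.07):
    """emb_a, emb_b: (B, D) embeddings of two frames sampled from the same B videos."""
    emb_a = F.normalize(emb_a, dim=1)
    emb_b = F.normalize(emb_b, dim=1)
    logits = emb_a @ emb_b.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(emb_a.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

loss = frame_level_infonce(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```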
Mask2Former for Video Instance Segmentation
We find Mask2Former also achieves state-of-the-art performance on video instance segmentation without modifying the architecture, the loss or even the training pipeline.
Lucid Data Dreaming for Video Object Segmentation
Our approach is suitable for both single and multiple object segmentation.
YouTube-VOS: Sequence-to-Sequence Video Object Segmentation
End-to-end sequential learning to explore spatial-temporal features for video segmentation is largely limited by the scale of available video segmentation datasets, i.e., even the largest video segmentation dataset contains only 90 short video clips.
CCNet: Criss-Cross Attention for Semantic Segmentation
Compared with the non-local block, the proposed recurrent criss-cross attention module requires 11x less GPU memory.
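The affinity-map arithmetic below shows where the saving comes from: a non-local block stores an (HW) x (HW) affinity map, whereas criss-cross attention stores only (HW) x (H + W - 1) affinities, since each position attends to its own row and column. Note that the 11x figure is the paper's measured end-to-end memory saving, not this raw ratio, and the feature-map size used here is illustrative.

```python
# Compare affinity-map sizes for full non-local attention vs. criss-cross attention.
H, W = 97, 97                        # illustrative 1/8-resolution feature-map size
non_local = (H * W) ** 2             # full pairwise affinities
criss_cross = (H * W) * (H + W - 1)  # row + column affinities per position
print(f"non-local affinities:   {non_local:,}")
print(f"criss-cross affinities: {criss_cross:,}")
print(f"affinity-map ratio:     {non_local / criss_cross:.1f}x")
```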
Interactive Video Object Segmentation Using Global and Local Transfer Modules
The global transfer module conveys the segmentation information in an annotated frame to a target frame, while the local transfer module propagates the segmentation information in a temporally adjacent frame to the target frame.
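The sketch below only illustrates the data flow implied by that description: a long-range estimate transferred from the annotated frame and a short-range estimate propagated from the adjacent frame are fused into the target-frame prediction. The transfer inputs are hypothetical placeholders, and a pixelwise average stands in for the paper's learned fusion.

```python
# Illustrative fusion of a global (annotated-frame) and a local (adjacent-frame) mask estimate.
import torch

def fuse_global_local(global_mask_prob: torch.Tensor, local_mask_prob: torch.Tensor) -> torch.Tensor:
    return 0.5 * (global_mask_prob + local_mask_prob)  # stand-in for a learned fusion module

g = torch.rand(1, 1, 480, 854)  # probability map transferred from the annotated frame
l = torch.rand(1, 1, 480, 854)  # probability map propagated from the temporally adjacent frame
print(fuse_global_local(g, l).shape)  # torch.Size([1, 1, 480, 854])
```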