Unsupervised Video Summarization
16 papers with code • 2 benchmarks • 3 datasets
Unsupervised video summarization approaches avoid the need for ground-truth data, whose production requires time-consuming and laborious manual annotation. Instead, they rely on learning mechanisms that need only a sufficiently large collection of original videos for training. Training is typically guided by heuristic criteria, such as the sparsity, representativeness, and diversity of the extracted frame features.
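Two of these heuristic criteria can be sketched directly on frame features. The following is a minimal illustration (not any particular paper's formulation): diversity as the mean pairwise cosine dissimilarity among selected frames, and representativeness as an exponential of the mean distance from every frame to its nearest selected frame. The function names and the exact scoring forms are assumptions for illustration only.

```python
import numpy as np

def diversity(selected):
    """Mean pairwise cosine dissimilarity among selected frame features.
    Higher values mean the selected frames are less redundant."""
    X = selected / np.linalg.norm(selected, axis=1, keepdims=True)
    sim = X @ X.T                              # cosine similarity matrix
    n = len(X)
    off_diag = sim[~np.eye(n, dtype=bool)]     # ignore self-similarity
    return 1.0 - off_diag.mean()

def representativeness(all_frames, selected):
    """How well the selected frames cover the whole video: exponential of
    the (negated) mean distance from each frame to its nearest selection.
    Returns a value in (0, 1]; higher means better coverage."""
    d = np.linalg.norm(all_frames[:, None, :] - selected[None, :, :], axis=2)
    return float(np.exp(-d.min(axis=1).mean()))
```

A training signal can then reward frame selections that score highly on both criteria simultaneously.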
Most implemented papers
Deep Reinforcement Learning for Unsupervised Video Summarization with Diversity-Representativeness Reward
Video summarization aims to facilitate large-scale video browsing by producing short, concise summaries that are diverse and representative of original videos.
Unsupervised video summarization framework using keyframe extraction and video skimming
Video is one of the richest sources of information, and the consumption of online and offline videos has reached an unprecedented level in recent years.
Unsupervised Video Summarization With Adversarial LSTM Networks
The summarizer is an autoencoder long short-term memory network (LSTM) that first selects video frames and then decodes the resulting summary to reconstruct the input video.
Discriminative Feature Learning for Unsupervised Video Summarization
The proposed variance loss allows a network to predict output scores for each frame with high discrepancy, which enables effective feature learning and significantly improves model performance.
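The idea of rewarding high discrepancy among per-frame scores can be sketched with a simple reciprocal-variance penalty. This is a hypothetical formulation for illustration; the paper's exact loss may differ.

```python
import numpy as np

def variance_loss(scores, eps=1e-8):
    """Penalize low variance among predicted frame-importance scores,
    pushing the model toward discriminative (high-discrepancy) outputs.
    Hypothetical reciprocal-variance form, not the paper's exact loss."""
    return 1.0 / (np.var(scores) + eps)
```

Flat score vectors (every frame rated equally) incur a large loss, while well-separated scores incur a small one, nudging the model to commit to which frames matter.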
A Stepwise, Label-based Approach for Improving the Adversarial Training in Unsupervised Video Summarization
In this paper we present our work on improving the efficiency of adversarial training for unsupervised video summarization.
ILS-SUMM: Iterated Local Search for Unsupervised Video Summarization
We consider shot-based video summarization where the summary consists of a subset of the video shots which can be of various lengths.
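Shot-based selection under a total-duration budget is a knapsack-style problem. A greedy seed solution, of the kind that iterated local search could then refine by swapping shots in and out, might look like the following sketch (the function name and greedy rule are assumptions, not ILS-SUMM's actual algorithm):

```python
def select_shots(shot_scores, shot_lengths, budget):
    """Greedy knapsack-style sketch: pick shots in decreasing order of
    score-per-second until the summary duration budget is exhausted.
    Iterated local search would refine this seed with swap moves."""
    order = sorted(range(len(shot_scores)),
                   key=lambda i: shot_scores[i] / shot_lengths[i],
                   reverse=True)
    chosen, used = [], 0
    for i in order:
        if used + shot_lengths[i] <= budget:
            chosen.append(i)
            used += shot_lengths[i]
    return sorted(chosen)
```

Because shots can have various lengths, a pure top-k selection by score would ignore the budget; normalizing by duration is the standard greedy heuristic for such constraints.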
Unsupervised Video Summarization via Attention-Driven Adversarial Learning
Experimental evaluation on two datasets (SumMe and TVSum) documents the contribution of the attention auto-encoder to faster and more stable training of the model, resulting in a significant performance improvement with respect to the original model and demonstrating the competitiveness of the proposed SUM-GAN-AAE against the state of the art.
AC-SUM-GAN: Connecting Actor-Critic and Generative Adversarial Networks for Unsupervised Video Summarization
This paper presents a new method for unsupervised video summarization.
Unsupervised Video Summarization via Multi-source Features
Our evaluation shows that we obtain state-of-the-art results on both datasets, while also highlighting the shortcomings of previous work with regard to the evaluation methodology.
ERA: Entity Relationship Aware Video Summarization with Wasserstein GAN
This type of method includes a summarizer and a discriminator.