Abstractive Summarization of Spoken and Written Instructions with BERT
Summarization of speech is a difficult problem due to the spontaneity of the flow, disfluencies, and other issues not usually encountered in written texts. Our work presents the first application of the BERTSum model to conversational language. We generate abstractive summaries of narrated instructional videos across a wide variety of topics, from gardening and cooking to software configuration and sports. To enrich the vocabulary, we use transfer learning and pretrain the model on several large cross-domain datasets in both written and spoken English. We also preprocess transcripts to restore sentence segmentation and punctuation in the output of an ASR system. The results are evaluated with ROUGE and Content-F1 scoring on the How2 and WikiHow datasets. We engage human judges to score a set of summaries randomly selected from a dataset curated from HowTo100M and YouTube. Based on blind evaluation, we achieve a level of textual fluency and utility close to that of summaries written by human content creators. The model beats the current SOTA when applied to WikiHow articles that vary widely in style and topic, while showing no performance regression on the canonical CNN/DailyMail dataset. Because the model generalizes well across styles and domains, it has great potential to improve the accessibility and discoverability of internet content. We envision this model being integrated as a feature in intelligent virtual assistants, enabling them to summarize both written and spoken instructional content upon request.
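For context on the scores reported in the table below, here is a minimal sketch of how ROUGE-style evaluation can be run in Python using the `rouge-score` package. The `content_f1` helper is a hypothetical simplification (unigram F1 over non-stopword tokens); the paper's exact Content-F1 definition may differ, and the example strings are invented for illustration.

```python
# Illustrative evaluation sketch, not the paper's exact pipeline.
# Requires: pip install rouge-score nltk
import nltk
from nltk.corpus import stopwords
from rouge_score import rouge_scorer

nltk.download("stopwords", quiet=True)
STOPWORDS = set(stopwords.words("english"))

def content_f1(reference: str, candidate: str) -> float:
    """Hypothetical simplification of Content F1: unigram F1 over
    content (non-stopword) tokens. The paper's metric may differ."""
    ref = {w for w in reference.lower().split() if w not in STOPWORDS}
    cand = {w for w in candidate.lower().split() if w not in STOPWORDS}
    overlap = len(ref & cand)
    if not ref or not cand or overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# Invented reference/candidate pair for demonstration only.
reference = "Trim the plant's dead leaves before watering it deeply once a week."
candidate = "Remove dead leaves, then water the plant deeply every week."

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)  # target first, prediction second
for name, s in scores.items():
    print(f"{name}: F1 = {s.fmeasure:.3f}")
print(f"content-f1: {content_f1(reference, candidate):.3f}")
```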
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Text Summarization | How2 | BertSum | ROUGE-L | 44.02 | # 2 |
| Text Summarization | How2 | BertSum | Content F1 | 36.4 | # 2 |
| Text Summarization | How2 | BertSum | ROUGE-1 | 48.26 | # 1 |
| Text Summarization | WikiHow | BertSum | ROUGE-1 | 35.91 | # 1 |
| Text Summarization | WikiHow | BertSum | ROUGE-2 | 13.9 | # 1 |
| Text Summarization | WikiHow | BertSum | ROUGE-L | 34.82 | # 1 |
| Text Summarization | WikiHow | BertSum | Content F1 | 29.8 | # 1 |
| Abstractive Text Summarization | WikiHow | BertSum | ROUGE-1 | 35.91 | # 1 |
| Abstractive Text Summarization | WikiHow | BertSum | ROUGE-L | 34.82 | # 1 |
| Abstractive Text Summarization | WikiHow | BertSum | ROUGE-2 | 13.9 | # 1 |
| Abstractive Text Summarization | WikiHow | BertSum | Content F1 | 29.8 | # 1 |