
Review - SimCLS and RefSum - Summarization Techniques

What's this blog post about?

This week's Deep Learning research papers introduce SimCLS and RefSum, two innovative approaches to abstractive summarization relevant to Natural Language Processing (NLP) and Automatic Speech Recognition (ASR). Both use a "generate then evaluate" approach: a generative model produces candidate summaries, and a discriminative scoring model stacked on top ranks them so the optimal candidate can be selected. RefSum set State-of-the-Art (SOTA) summarization performance on the CNN/Daily Mail and XSum datasets by unifying its base and meta summarization systems, while SimCLS advanced this idea further by training its scoring model in a contrastive learning setting. These papers demonstrate that a simple multi-model pipelined approach to summarization can lead to the best downstream performance and could potentially be applied to other generation tasks such as speech synthesis, image generation, and question answering.
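The "generate then evaluate" pipeline can be sketched in a few lines. This is a minimal illustration, not the papers' implementation: `generate_candidates` and `score_candidate` are hypothetical stand-ins for a seq2seq generator (e.g. sampling multiple beams) and a learned scorer (in SimCLS, a model trained with a contrastive ranking loss).

```python
def generate_candidates(document, num_candidates=4):
    # Stand-in generator: a real system would decode several candidate
    # summaries from a seq2seq model, e.g. via diverse beam search.
    return [f"candidate {i}: {document[:30]}" for i in range(num_candidates)]

def score_candidate(document, candidate):
    # Stand-in scorer: SimCLS trains a discriminative model to rank
    # candidates; here a trivial length heuristic serves as illustration.
    return -abs(len(candidate) - 40)

def summarize(document):
    # "Generate then evaluate": produce candidates, then return the one
    # the scoring model ranks highest.
    candidates = generate_candidates(document)
    return max(candidates, key=lambda c: score_candidate(document, c))
```

Swapping the stand-ins for real models preserves the structure: the generator and scorer are trained separately and composed only at selection time.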

Company
AssemblyAI

Date published
Nov. 23, 2021

Author(s)
Kelsey Foster

Word count
346

Language
English

Hacker News points
None found.


By Matt Makai. 2021-2024.