
Measuring Quality: Word Error Rate Explained

What's this blog post about?

Word Error Rate (WER) is a commonly used metric for measuring the quality of speech recognition models, specifically automatic speech recognition (ASR). It measures the proportion of errors an ASR model makes when transcribing audio to text. The formula for WER counts the word insertions, deletions, and substitutions the model makes relative to a ground-truth transcript, then divides that sum by the total number of words in the ground-truth transcript. A lower WER indicates better performance. However, while WER is useful for comparing ASR models, it doesn't provide a comprehensive understanding of how well a model will perform on specific types of data or with certain vocabulary.
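
The sketch below illustrates the calculation described above: the word-level edit distance (substitutions + deletions + insertions) between a hypothesis and a ground-truth transcript, divided by the number of words in the ground truth. It is a minimal illustration assuming simple whitespace tokenization; the function and variable names are illustrative and not taken from the post or any Deepgram library.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()

    # Levenshtein distance over words: d[i][j] is the minimum number of
    # substitutions, insertions, and deletions needed to turn the first
    # i reference words into the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j

    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or exact match)
            )

    # WER = (substitutions + deletions + insertions) / words in ground truth
    return d[len(ref)][len(hyp)] / len(ref)


# One deleted word out of six reference words gives a WER of about 0.167.
print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))

Note that because insertions are counted, WER can exceed 1.0 when the hypothesis contains many more words than the ground-truth transcript.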

Company
Deepgram

Date published
April 27, 2023

Author(s)
Jose Nicholas Francisco

Word count
1152

Language
English

Hacker News points
None found.
