This article discusses benchmarking OpenAI Whisper models for non-English automatic speech recognition (ASR). It covers the basics of measuring ASR model accuracy, the challenges of building accurate benchmarks for non-English languages, and benchmarks of Whisper for Spanish, French, German, Hindi, and Turkish using curated publicly available data. The author highlights the importance of text normalization and consistent labels in ASR benchmarking and emphasizes that results should be contextualized by understanding the type of data the model is evaluated on.
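To make the role of text normalization concrete, the sketch below shows a minimal word error rate (WER) computation paired with a rough normalizer (lowercasing, accent and punctuation stripping, whitespace collapsing). This is an illustrative stand-in, not the article's actual evaluation pipeline or Whisper's official normalizer; without such normalization, surface differences like "¿cómo" vs. "como" would be counted as errors in languages such as Spanish.

```python
import re
import unicodedata

def normalize(text: str) -> str:
    # Rough normalizer: lowercase, strip accents and punctuation,
    # collapse whitespace. A simplified stand-in for a full
    # language-aware ASR text normalizer.
    text = unicodedata.normalize("NFD", text.lower())
    text = "".join(ch for ch in text if unicodedata.category(ch) != "Mn")
    text = re.sub(r"[^\w\s]", "", text)
    return re.sub(r"\s+", " ", text).strip()

def wer(reference: str, hypothesis: str) -> float:
    # Word error rate: word-level edit distance (substitutions,
    # insertions, deletions) divided by the reference word count.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# A Spanish reference with punctuation and accents scores 0.0 WER
# against an unaccented hypothesis only after both are normalized.
print(wer(normalize("Hola, ¿cómo estás?"), normalize("hola como estas")))
```

In practice, published Whisper evaluations apply a shared normalizer to both reference and hypothesis before scoring, which is one reason reported WER figures are only comparable when the normalization step is identical.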