A Better Way to Measure Automatic Captioning
Saturday, May 16th, 2020 01:11 pm

Many academic libraries/databases have been made world-readable in the past few months while students lack campus library access. Curiosity led me to the Association for Computing Machinery’s digital library (open access until 30 June 2020), where I was delighted to learn of the journal called ACM Transactions on Accessible Computing.
Use the advanced search interface if you’re ready to go diving.
I found research explaining why automatic captioning is so unsatisfactory. "Word Error Rate" is the metric YouTube and other automatic speech recognition systems use as they trumpet their production of "automatic captions." Deaf & HoH users often call them "craptions." The total number of incorrect words divided by the total number of words displayed doesn't map onto the information we need to understand spoken language visually. Some words we can easily infer; when names, locations, and crucial verbs go missing, comprehension plummets. The article explains this in great detail and proposes alternative metrics that could measure whether automatic speech recognition is actually good enough.
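To see why the raw count falls short, here's a minimal Python sketch of my own, not the paper's method: two captions with identical Word Error Rate, where an importance-weighted score (using weights I made up for illustration) flags only the one that drops a name.

```python
# Toy comparison of plain Word Error Rate (WER) with an importance-weighted
# error rate. The weights below are invented for illustration; the paper's
# actual metric scores word importance and semantic difference with trained
# models, which this sketch does not attempt.

def wer(reference, hypothesis):
    """WER: (substitutions + deletions + insertions) / reference length,
    via the usual edit-distance dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

def weighted_error(reference, hypothesis, importance):
    """Crude importance-weighted error: each reference word missing from the
    caption counts by its importance weight (0..1) instead of counting 1."""
    ref, hyp_words = reference.split(), set(hypothesis.split())
    missed = sum(importance.get(w, 0.5) for w in ref if w not in hyp_words)
    total = sum(importance.get(w, 0.5) for w in ref)
    return missed / total

reference = "meet doctor chen at the library on tuesday"
caption_a = "meet doctor chen at the library on a tuesday"  # harmless insertion
caption_b = "meet doctor at the library on tuesday"         # name dropped

# Hypothetical importance weights: names and content words high, function words low.
importance = {"chen": 1.0, "doctor": 0.9, "library": 0.9, "tuesday": 0.9,
              "meet": 0.7, "at": 0.1, "the": 0.1, "on": 0.1}

for name, cap in [("A", caption_a), ("B", caption_b)]:
    print(name, "WER:", round(wer(reference, cap), 2),
          "weighted:", round(weighted_error(reference, cap, importance), 2))
```

Both captions score the same WER (one word wrong out of eight), but only caption B loses the name, and only the weighted score reflects that. The paper's metric does something far more sophisticated along these lines, using word-importance and semantic-difference models rather than a hand-made dictionary.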
Predicting the Understandability of Imperfect English Captions for People Who Are Deaf or Hard of Hearing
SUSHANT KAFLE and MATT HUENERFAUTH, Rochester Institute of Technology
ACM Trans. Access. Comput., Vol. 12, No. 2, Article 7, Publication date: June 2019. https://dl.acm.org/doi/10.1145/3325862
Abstract: Automatic Speech Recognition (ASR) technology has seen major advancements in its accuracy and speed in recent years, making it a possible mechanism for supporting communication between people who are Deaf or Hard-of-Hearing (DHH) and their hearing peers. However, state-of-the-art ASR technology is still imperfect in many realistic settings. Researchers who evaluate ASR performance often focus on improving the Word Error Rate (WER) metric, but it has been found to have little correlation with human-subject performance for many applications. This article describes and evaluates several new captioning-focused evaluation metrics for predicting the impact of ASR errors on the understandability of automatically generated captions for people who are DHH. Through experimental studies with DHH users, we have found that our new metric (based on word-importance and semantic-difference scoring) is more closely correlated with DHH users' judgements of caption quality—as compared to pre-existing metrics for ASR evaluation.
And isn’t it weird that academia is still using obscure abbreviations like ACM Trans. Access. Comput. when nothing’s printed so there’s no space to save?