Published in

Association for Computing Machinery (ACM), ACM Transactions on Software Engineering and Methodology, 2024

DOI: 10.1145/3707450

Prioritizing Speech Test Cases

This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden

Data provided by SHERPA/RoMEO

Abstract

As automated speech recognition (ASR) systems gain widespread acceptance, there is a pressing need to rigorously test and enhance their performance. Nonetheless, collecting and executing speech test cases is typically both costly and time-consuming. This presents a compelling case for the strategic prioritization of speech test cases, each of which consists of a piece of audio and the corresponding reference text. The central question we address is: in what sequence should speech test cases be collected and executed to identify the maximum number of errors at the earliest stage? In this study, we introduce Prophet (PRiOritising sPeecH tEsT cases), a tool designed to predict the likelihood that speech test cases will identify errors. Consequently, Prophet can assess and prioritize these test cases without having to run the ASR system, facilitating large-scale analysis. Our evaluation encompasses 6 distinct prioritization techniques across 3 ASR systems and 12 datasets. When constrained by the same test budget, our approach identified 15.44% more misrecognized words than the leading state-of-the-art method. We select top-ranked speech test cases from the prioritized list to fine-tune ASR systems and analyze how our approach can improve ASR system performance. Statistical evaluations show that our method delivers a considerably higher performance boost for ASR systems than established baseline techniques. Moreover, our correlation analysis confirms that fine-tuning an ASR system on a dataset where the model initially underperforms tends to yield greater performance improvements.
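
The abstract does not describe Prophet's predictor itself, so the following is only a minimal Python sketch of the workflow it outlines: score each (audio, reference text) pair without running the ASR system, rank the test cases by predicted error likelihood, spend the test budget on the top of the ranked list, and count the misrecognized words revealed. The names used here (SpeechTestCase, predict_error_likelihood, asr_transcribe) are hypothetical placeholders, not the paper's implementation.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class SpeechTestCase:
        audio_path: str   # path to the audio clip
        reference: str    # ground-truth transcript for that clip

    def predict_error_likelihood(case: SpeechTestCase) -> float:
        # Hypothetical stand-in for Prophet's learned predictor: estimate how
        # likely the ASR system is to misrecognize this case WITHOUT running it.
        raise NotImplementedError("plug in a trained failure predictor here")

    def prioritize(cases: List[SpeechTestCase]) -> List[SpeechTestCase]:
        # Execute the cases most likely to reveal errors first.
        return sorted(cases, key=predict_error_likelihood, reverse=True)

    def misrecognized_words(reference: str, hypothesis: str) -> int:
        # Word-level Levenshtein distance: substitutions + insertions +
        # deletions needed to turn the ASR hypothesis into the reference.
        ref, hyp = reference.split(), hypothesis.split()
        dp = list(range(len(hyp) + 1))
        for i in range(1, len(ref) + 1):
            prev, dp[0] = dp[0], i
            for j in range(1, len(hyp) + 1):
                cur = dp[j]
                dp[j] = min(dp[j] + 1,                          # deletion
                            dp[j - 1] + 1,                      # insertion
                            prev + (ref[i - 1] != hyp[j - 1]))  # substitution
                prev = cur
        return dp[-1]

    def errors_found_under_budget(cases: List[SpeechTestCase],
                                  asr_transcribe: Callable[[str], str],
                                  budget: int) -> int:
        # Run only the top-`budget` prioritized cases and total the
        # misrecognized words they reveal.
        return sum(misrecognized_words(c.reference, asr_transcribe(c.audio_path))
                   for c in prioritize(cases)[:budget])

Under a fixed budget, a better prioritization should make this cumulative misrecognized-word count grow faster; the abstract's 15.44% figure plausibly reflects a comparison of exactly this kind of budget-constrained count against baseline orderings.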