Published in

Association for Computing Machinery (ACM), Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 7(3), pp. 1-24, 2023

DOI: 10.1145/3610874

Echo

This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Intelligent audio systems, such as speech command recognition and speaker recognition, are ubiquitous in our lives. However, deep learning-based intelligent audio systems have been shown to be vulnerable to adversarial attacks. In this paper, we propose a physical adversarial attack that exploits reverberation, a natural indoor acoustic effect, to realize imperceptible, fast, and targeted black-box attacks. Unlike existing attacks that constrain the magnitude of adversarial perturbations within a fixed radius, we generate reverberation-like perturbations that blend naturally with the original voice sample. Additionally, by modeling distortions in the physical environment, we generate adversarial examples that remain robust even under over-the-air propagation. Extensive experiments are conducted on two popular intelligent audio systems in various conditions, such as different room sizes, distances, and ambient noises. The results show that Echo can successfully attack intelligent audio systems in both the digital domain and physical over-the-air environments.
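The acoustic idea underlying the abstract, shaping the perturbation as reverberation, can be illustrated with the standard model of a reverberant channel: the received signal is the dry voice convolved with a room impulse response (RIR). The sketch below shows only that convolution model; the function names and the synthetic RIR are hypothetical stand-ins, and the paper's actual attack additionally optimizes the perturbation against the target model under black-box constraints.

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_reverb(voice: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Convolve a dry voice waveform with a room impulse response (RIR).

    This is the standard linear model of indoor reverberation: the
    signal heard at the microphone is the dry signal convolved with
    the room's impulse response.
    """
    wet = fftconvolve(voice, rir)[: len(voice)]
    # Normalize to avoid clipping after convolution.
    return wet / max(1e-9, np.max(np.abs(wet)))

# Hypothetical usage: `voice` stands in for a 1-second, 16 kHz mono
# recording, and `rir` is a crude synthetic 300 ms impulse response
# (direct path plus an exponentially decaying noise tail).
sr = 16000
voice = np.random.randn(sr)
rir = np.zeros(int(0.3 * sr))
rir[0] = 1.0  # direct path
decay = np.exp(-np.linspace(0, 8, len(rir) - 1))
rir[1:] = 0.3 * np.random.randn(len(rir) - 1) * decay
adversarial = apply_reverb(voice, rir)
```

Because the perturbation enters through convolution with an RIR rather than as bounded additive noise, it sounds like a natural room effect, which is the basis of the imperceptibility claim in the abstract.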