Published in

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, 2022

DOI: 10.24963/ijcai.2022/10

Transparency, Detection and Imitation in Strategic Classification

Proceedings article published in 2022 by Flavia Barsotti, Ruya Gokhan Kocer, and Fernando P. Santos.
This paper was not found in any repository; the policy of its publisher is unknown or unclear.

Full text: Unavailable

Preprint: policy unknown
Postprint: policy unknown
Published version: policy unknown

Abstract

Given the ubiquity of AI-based decisions that affect individuals’ lives, providing transparent explanations about algorithms is ethically sound and often legally mandatory. How do individuals strategically adapt following explanations? What are the consequences of adaptation for algorithmic accuracy? We simulate the interplay between explanations shared by an Institution (e.g., a bank) and the dynamics of strategic adaptation by Individuals reacting to such feedback. Our model identifies key aspects related to strategic adaptation and the challenges that an institution could face as it attempts to provide explanations. Resorting to an agent-based approach, our model scrutinizes: i) the impact of transparency in explanations, ii) the interaction between faking behavior and detection capacity, and iii) the role of behavior imitation. We find that the risks of transparent explanations are alleviated if effective methods to detect faking behaviors are in place. Furthermore, we observe that behavioral imitation, as often happens across societies, can alleviate malicious adaptation and contribute to accuracy, even after transparent explanations.
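To make the transparency/detection trade-off described in the abstract concrete, the following is a minimal toy sketch (not the authors' actual model; all parameters, thresholds, and the scoring mechanism here are hypothetical). An institution accepts individuals whose observed score clears a threshold; a transparent explanation lets unqualified individuals learn the threshold and fake their observable score, while a detection mechanism catches a fraction of faked applications.

```python
import random

random.seed(0)

THRESHOLD = 0.5   # hypothetical acceptance threshold revealed by the explanation
N = 1000          # number of simulated individuals
FAKE_GAIN = 0.3   # hypothetical amount by which a faker can inflate their score

def accuracy(detection_rate):
    """Fraction of decisions matching the individuals' true qualification."""
    correct = 0
    for _ in range(N):
        true_score = random.random()
        qualified = true_score >= THRESHOLD
        # Transparent explanation: unqualified individuals who can reach the
        # threshold by faking will inflate their observable score.
        observed = true_score
        faked = False
        if not qualified and true_score + FAKE_GAIN >= THRESHOLD:
            observed = true_score + FAKE_GAIN
            faked = True
        accepted = observed >= THRESHOLD
        # Detection capacity reverses a caught malicious adaptation.
        if faked and random.random() < detection_rate:
            accepted = False
        correct += (accepted == qualified)
    return correct / N

# Without detection, transparency lets fakers through and accuracy drops;
# with strong detection, accuracy is largely restored.
print(accuracy(0.0), accuracy(0.9))
```

In this toy setting, raising the detection rate recovers most of the accuracy lost to faking, mirroring the paper's qualitative finding that transparent explanations are safer when effective faking detection is in place. The imitation dynamics studied in the paper are omitted here for brevity.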