Published in

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, 2021

DOI: 10.24963/ijcai.2021/675

Safety Analysis of Deep Neural Networks

Proceedings article published in 2021 by Dario Guidotti.
This paper was not found in any repository; the policy of its publisher is unknown or unclear.

Full text: Unavailable

Preprint: policy unknown
Postprint: policy unknown
Published version: policy unknown

Abstract

Deep Neural Networks (DNNs) are popular machine learning models that have been applied successfully in many domains across computer science. Nevertheless, providing formal guarantees on the behaviour of neural networks is hard, and their reliability in safety-critical domains therefore remains a concern. Verification and repair have emerged as promising solutions to this issue. In the following, I present some of my recent efforts in this area.
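To illustrate what a formal guarantee on a network's behaviour can look like, the sketch below applies interval bound propagation, a standard verification technique, to a tiny ReLU network. This is a generic illustration, not the method of the paper; the weights and the two-layer architecture are invented for the example.

```python
import numpy as np

# Hypothetical tiny network (illustrative weights, not from the paper):
# two inputs -> ReLU layer of size 2 -> scalar output.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -0.5])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.0])

def interval_affine(lo, hi, W, b):
    # For y = W x + b, split W into its positive and negative parts
    # to obtain sound element-wise output bounds.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def output_bounds(x, eps):
    # Propagate the input box [x - eps, x + eps] through
    # affine -> ReLU -> affine.
    lo, hi = interval_affine(x - eps, x + eps, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    return interval_affine(lo, hi, W2, b2)

lo, hi = output_bounds(np.array([1.0, 1.0]), 0.1)
# If lo > 0, the output is provably positive for EVERY input in the box,
# which is a formal guarantee rather than an empirical observation.
print(lo, hi)
```

Real verifiers refine these bounds considerably (for instance with linear relaxations of ReLU or SMT solving), since plain interval arithmetic becomes loose for deep networks.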