Published in

Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), pp. 3110-3118, 2021

DOI: 10.1609/aaai.v35i4.16420

Learning Semantic Context from Normal Samples for Unsupervised Anomaly Detection

Paper published in 2021 by Xudong Yan, Huaidong Zhang, Xuemiao Xu, Xiaowei Hu, Pheng-Ann Heng
This paper is made freely available by the publisher.


Preprint: archiving forbidden
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Unsupervised anomaly detection aims to identify data samples with low probability density from a set of input samples, where only normal samples are provided for model training. Inferring abnormal regions in an input image requires an understanding of the surrounding semantic context. This work presents a Semantic Context based Anomaly Detection Network, SCADN, for unsupervised anomaly detection by learning the semantic context from normal samples. To achieve this, we first generate multi-scale striped masks to remove parts of the regions from the normal samples, and then train a generative adversarial network to reconstruct the unseen regions. Note that the masks are designed with multiple scales and stripe directions, and various training examples are generated to obtain rich semantic context. In testing, we obtain an error map for each sample by computing the difference between the reconstructed image and the input image, and infer the abnormal samples based on the error maps. Finally, we perform various experiments on three public benchmark datasets and a new dataset, LaceAD, collected by us, and show that our method clearly outperforms the current state-of-the-art methods.
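
The abstract outlines two data-side steps: masking normal images with multi-scale stripes for training, and scoring test images by reconstruction error. The sketch below is not the authors' code; the scale values, function names, and the plain absolute-error scoring are illustrative assumptions, and the trained generator is only referenced as a hypothetical placeholder.

```python
# Minimal sketch of multi-scale striped masking and error-map scoring,
# under the assumptions stated above (not the SCADN implementation).
import numpy as np

def striped_masks(height, width, scales=(4, 8, 16)):
    """Yield binary masks (1 = keep, 0 = removed) with stripes of several
    widths, in both horizontal and vertical directions."""
    for stripe in scales:
        for direction in ("horizontal", "vertical"):
            mask = np.ones((height, width), dtype=np.float32)
            if direction == "horizontal":
                for row in range(0, height, 2 * stripe):
                    mask[row:row + stripe, :] = 0.0
            else:
                for col in range(0, width, 2 * stripe):
                    mask[:, col:col + stripe] = 0.0
            yield mask

def anomaly_score(image, reconstruction):
    """Per-pixel error map (mean absolute error over channels) and a scalar
    score; the paper's exact scoring function may differ."""
    error_map = np.abs(image - reconstruction).mean(axis=-1)
    return error_map, float(error_map.mean())

# Usage idea: mask a sample, let a trained generator inpaint the hidden
# stripes, then score the sample by how poorly it is reconstructed.
# image = load_image(...)                      # H x W x C float array
# for mask in striped_masks(*image.shape[:2]):
#     masked = image * mask[..., None]         # removed regions set to 0
#     recon = generator(masked)                # hypothetical trained GAN
#     err_map, score = anomaly_score(image, recon)
```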