Published in

Machine Learning for Biomedical Imaging, April 2023(2), p. 172-201, 2023

DOI: 10.59275/j.melba.2023-18c1

Springer, Lecture Notes in Computer Science, p. 14-23, 2022

DOI: 10.1007/978-3-031-18576-2_2

Cross Attention Transformers for Multi-modal Unsupervised Whole-Body PET Anomaly Detection

Distributing this paper is prohibited by the publisher

Full text: Unavailable

Abstract

Cancer is a highly heterogeneous condition that can occur almost anywhere in the human body. [<sup>18</sup>F]fluorodeoxyglucose Positron Emission Tomography (<sup>18</sup>F-FDG PET) is an imaging modality commonly used to detect cancer due to its high sensitivity and clear visualisation of patterns of metabolic activity. Nonetheless, because cancer is so heterogeneous, it is challenging to train general-purpose discriminative cancer detection models, with data availability and disease complexity often cited as limiting factors. Unsupervised learning methods, more specifically anomaly detection models, have been suggested as a putative solution. These models learn a healthy representation of tissue and detect cancer by predicting deviations from the healthy norm, which requires models capable of accurately learning long-range interactions between organs, their imaging patterns, and other abstract features with high levels of expressivity. Such characteristics are suitably satisfied by transformers, which have been shown to achieve state-of-the-art results in unsupervised anomaly detection by training on normal data. This work expands upon such approaches by introducing multi-modal conditioning of the transformer via cross-attention, i.e. supplying anatomical reference information from paired CT images to aid the PET anomaly detection task. Furthermore, we show the importance and impact of codebook sizing within a Vector Quantized Variational Autoencoder on the ability of the transformer network to fulfill the task of anomaly detection. Using 294 whole-body PET/CT samples containing various cancer types, we show that our anomaly detection method is robust and capable of achieving accurate cancer localization results even in cases where normal training data is unavailable. In addition, we show the efficacy of this approach on out-of-sample data, demonstrating its generalizability even with limited training data.
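The core mechanism referenced above, cross-attention conditioning, lets tokens from one modality (PET) attend over tokens from a second modality (CT). The paper's actual architecture is not reproduced here; the following is a minimal numpy sketch of scaled dot-product cross-attention, with random matrices standing in for learned projection weights (all names and shapes are illustrative assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(pet_tokens, ct_tokens, d_k):
    """Attend PET query tokens over CT key/value tokens (illustrative sketch)."""
    rng = np.random.default_rng(0)
    # Random projections standing in for learned weight matrices.
    W_q = rng.standard_normal((pet_tokens.shape[-1], d_k))
    W_k = rng.standard_normal((ct_tokens.shape[-1], d_k))
    W_v = rng.standard_normal((ct_tokens.shape[-1], d_k))
    Q = pet_tokens @ W_q   # queries come from the PET sequence
    K = ct_tokens @ W_k    # keys come from the paired CT sequence
    V = ct_tokens @ W_v    # values come from the paired CT sequence
    scores = Q @ K.T / np.sqrt(d_k)      # (n_pet, n_ct) affinities
    return softmax(scores) @ V           # PET tokens enriched with CT context

# Usage: 4 PET tokens querying 6 CT tokens, both with feature dimension 8.
pet = np.random.default_rng(1).standard_normal((4, 8))
ct = np.random.default_rng(2).standard_normal((6, 8))
out = cross_attention(pet, ct, d_k=16)   # shape (4, 16)
```

The key design point is asymmetry: because the queries come from PET and the keys/values from CT, the CT volume acts purely as anatomical reference context and the output sequence stays aligned with the PET tokens being modelled.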
Lastly, we propose combining model uncertainty with a new kernel density estimation approach, and show that this yields clinically and statistically significant improvements in accuracy and robustness compared to classic residual-based anomaly maps. Overall, superior performance is demonstrated against leading state-of-the-art alternatives, drawing attention to the potential of these approaches.
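To illustrate the general idea behind density-based anomaly scoring (not the paper's specific method, whose details are not given in the abstract), one can fit a Gaussian kernel density estimate to values observed in healthy data and score test values by their negative log-density, so that low-density observations receive high anomaly scores. A minimal numpy sketch, with the bandwidth chosen arbitrarily:

```python
import numpy as np

def kde_anomaly_scores(healthy_values, test_values, bandwidth=0.5):
    """Score test values by negative log-density under a Gaussian KDE
    fitted to healthy reference values (illustrative sketch)."""
    # Pairwise differences between each test value and every healthy sample.
    diffs = test_values[:, None] - healthy_values[None, :]
    # Average of Gaussian kernels centred on the healthy samples.
    density = np.exp(-0.5 * (diffs / bandwidth) ** 2).mean(axis=1)
    density /= bandwidth * np.sqrt(2.0 * np.pi)
    # Low density under the healthy distribution => high anomaly score.
    return -np.log(density + 1e-12)

# Usage: an in-distribution value scores low, a far outlier scores high.
healthy = np.random.default_rng(0).normal(0.0, 1.0, size=500)
scores = kde_anomaly_scores(healthy, np.array([0.0, 8.0]))
```

In contrast to a plain residual map, which scores each voxel by its raw reconstruction error, a density-based score accounts for how much variability healthy data exhibits, which is one route to the robustness gains the abstract describes.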