Published in

Nature Research, Scientific Reports, 13(1), 2023

DOI: 10.1038/s41598-022-24754-w

Multimodal sensor fusion in the latent representation space

Journal article published in 2023 by Robert J. Piechocki, Xiaoyang Wang, Mohammud J. Bocus
This paper is made freely available by the publisher.

Preprint: archiving allowed
Postprint: archiving forbidden
Published version: archiving allowed
Data provided by SHERPA/RoMEO

Abstract

A new method for multimodal sensor fusion is introduced. The technique relies on a two-stage process. In the first stage, a multimodal generative model is constructed from unlabelled training data. In the second stage, the generative model serves as a reconstruction prior and the search manifold for sensor fusion tasks. The method also handles cases where observations are accessed only via subsampling, i.e., compressed sensing. We demonstrate the effectiveness and strong performance of the method on a range of multimodal fusion experiments, such as multisensory classification, denoising, and recovery from subsampled observations.
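
To make the two-stage idea concrete, here is a minimal sketch in PyTorch under stated assumptions: stage one is approximated by a small joint (multimodal) variational autoencoder trained on unlabelled paired sensor data, and stage two performs fusion by searching the latent space for the code whose decoding best explains the available, possibly subsampled, observations. The names (MultimodalVAE, fuse_in_latent_space), the architecture, and the prior weight are illustrative assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn

class MultimodalVAE(nn.Module):
    """Toy joint generative model over two sensor modalities (illustrative)."""
    def __init__(self, dim_a=64, dim_b=32, latent_dim=8):
        super().__init__()
        self.latent_dim = latent_dim
        self.encoder = nn.Sequential(          # used in stage 1 (training)
            nn.Linear(dim_a + dim_b, 128), nn.ReLU(),
            nn.Linear(128, 2 * latent_dim),    # mean and log-variance of q(z|x)
        )
        self.decoder = nn.Sequential(          # used in stage 2 (fusion)
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, dim_a + dim_b),
        )

    def decode(self, z):
        return self.decoder(z)

def fuse_in_latent_space(model, obs, mask, steps=500, lr=1e-2, prior_weight=0.1):
    """Stage 2: search the latent manifold for the code whose decoding best
    explains the available (possibly subsampled) multimodal observations.

    obs  -- concatenated measurements from all sensors, zeros where missing
    mask -- binary mask, 1 where a measurement was actually observed
            (this models subsampled / compressed-sensing style access)
    """
    z = torch.zeros(1, model.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = model.decode(z)
        # Data fidelity on the observed entries plus a Gaussian prior on z:
        # the trained generative model thereby acts as the reconstruction prior.
        loss = ((mask * (recon - obs)) ** 2).sum() + prior_weight * (z ** 2).sum()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return model.decode(z)   # fused / denoised / recovered signal

if __name__ == "__main__":
    model = MultimodalVAE()
    # Stage 1 (training the VAE on unlabelled paired sensor data) is omitted.
    obs = torch.randn(1, 96)                     # 64 + 32 concatenated dims
    mask = (torch.rand(1, 96) < 0.3).float()     # keep ~30% of measurements
    fused = fuse_in_latent_space(model, obs * mask, mask)
    print(fused.shape)                           # torch.Size([1, 96])
```

The same latent optimization covers the experiments the abstract lists: with a full mask and noisy observations it performs denoising, and with a partial mask it performs recovery from subsampled observations, in the spirit of compressed sensing with generative priors.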