
Published in

arXiv, 2020

DOI: 10.48550/arxiv.2005.01662

Cambridge University Press, Microscopy and Microanalysis, 26(S2), p. 2462-2465, 2020

DOI: 10.1017/s1431927620021674

Elsevier, Ultramicroscopy, 219, p. 113122, 2020

DOI: 10.1016/j.ultramic.2020.113122

Dynamic Compressed Sensing for Real-Time Tomographic Reconstruction

Journal article published in 2020 by Jonathan Schwartz, Huihuo Zheng, Marcus Hanwell, Yi Jiang, Robert Hovden
This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Preprint: policy unknown
Postprint: policy unknown
Published version: policy unknown

Abstract

Electron tomography has achieved higher resolution and quality at reduced doses with recent advances in compressed sensing. Compressed sensing (CS) theory exploits the inherent sparse signal structure to efficiently reconstruct three-dimensional (3D) volumes at the nanoscale from undersampled measurements. However, the process bottlenecks 3D reconstruction with computation times that run from hours to days. Here we demonstrate a framework for dynamic compressed sensing that produces a 3D specimen structure that updates in real time as new specimen projections are collected. Researchers can begin interpreting 3D specimens as data are collected, facilitating high-throughput and interactive analysis. Using scanning transmission electron microscopy (STEM), we show that dynamic compressed sensing accelerates convergence 3-fold while also reducing reconstruction error by 27% for an Au/SrTiO3 nanoparticle specimen. Before a tomography experiment is completed, the 3D tomogram shows interpretable structure by 33% of completion, and fine details are visible as early as 66%. Upon completion of an experiment, a high-fidelity 3D visualization is produced without further delay. Additionally, reconstruction parameters that tune data fidelity can be adjusted throughout the computation without rerunning the entire process.
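
The abstract describes the approach only at a high level: instead of reconstructing the volume once after all tilts are acquired, the tomogram is warm-started and refined each time a new projection arrives, so an interpretable structure is available during acquisition. As a rough sketch of that idea only, and not the authors' implementation, the Python fragment below interleaves gradient steps on the data-fidelity term with total-variation steps acting as the sparsity prior, working on a 2D slice with a dense matrix standing in for the STEM projection operator; the function names, parameters, and update schedule are hypothetical.

import numpy as np

def tv_gradient(x):
    # Approximate gradient of an isotropic total-variation penalty on a 2D image.
    dx = np.diff(x, axis=0, append=x[-1:, :])
    dy = np.diff(x, axis=1, append=x[:, -1:])
    mag = np.sqrt(dx**2 + dy**2 + 1e-8)
    gx, gy = dx / mag, dy / mag
    # Negative divergence of the normalized gradient field.
    div = (gx - np.roll(gx, 1, axis=0)) + (gy - np.roll(gy, 1, axis=1))
    return -div

def dynamic_cs_reconstruct(projection_stream, forward_rows, shape,
                           n_inner=20, n_tv=10, step=1e-3, tv_step=0.05):
    # projection_stream: iterable of (tilt_index, measurement_vector) in acquisition order.
    # forward_rows: dict mapping tilt_index -> rows of the projection matrix for that tilt
    #               (a stand-in for the real STEM forward-projection operator).
    # shape: (ny, nx) of the reconstructed slice.
    x = np.zeros(shape)
    A_blocks, b_blocks = [], []

    for tilt, b in projection_stream:
        A_blocks.append(forward_rows[tilt])
        b_blocks.append(b)
        A = np.vstack(A_blocks)
        y = np.concatenate(b_blocks)

        for _ in range(n_inner):
            # Gradient step on ||A x - y||^2, warm-started from the previous tomogram.
            residual = A @ x.ravel() - y
            x -= step * (A.T @ residual).reshape(shape)
            x = np.clip(x, 0.0, None)          # enforce non-negativity
            for _ in range(n_tv):
                # Small steps that reduce total variation (the sparsity prior).
                x -= tv_step * tv_gradient(x)

        yield tilt, x.copy()                   # intermediate tomogram for live viewing

Because the loop yields the current tomogram after every tilt, a viewer can render intermediate results immediately, and quantities such as the TV step size could be retuned between tilts without restarting, which is the kind of mid-computation parameter adjustment the abstract mentions; the actual solver, regularization weights, and stopping criteria used in the paper are not specified here.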