Published in

IEEE Transactions on Circuits and Systems for Video Technology, 22(1), pp. 128–137, 2012

DOI: 10.1109/tcsvt.2011.2158362

View Interpolation for Medical Images on Autostereoscopic Displays

Journal article published in 2012 by Svitlana Zinger, Daniel Ruijters, Luat Do, and Peter H. N. de With
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

We present an approach for efficiently rendering and transmitting views to a high-resolution autostereoscopic display for medical purposes. Displaying biomedical images on an autostereoscopic display poses different requirements than consumer applications do. For medical usage, it is essential that the perceived image represents the actual clinical data and offers sufficiently high quality for diagnosis or understanding. Autostereoscopic display of multiple views introduces two hurdles: transmission of multi-view data through a bandwidth-limited channel and the computation time of the volume rendering algorithm. We address both issues by generating and transmitting a limited set of views, each enhanced with a depth signal. We propose an efficient view interpolation and rendering algorithm at the receiver side, based on a texture+depth data representation, which can operate with a limited number of views. We study occlusions, the main artifacts that occur during rendering, and quantify them first for a synthetic model and then for real-world biomedical data. The experimental results allow us to quantify the peak signal-to-noise ratio for rendered texture and depth, as well as the fraction of disoccluded pixels, as a function of the angle between surrounding cameras.
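The texture+depth interpolation described in the abstract can be illustrated on a single scanline: each reference pixel is shifted horizontally by a disparity derived from its depth, and target positions that receive no source pixel are the disoccluded pixels the paper quantifies. The sketch below is illustrative only, not the authors' algorithm; `shift_scale` is a hypothetical stand-in for the camera baseline and focal-length factor.

```python
import numpy as np

def warp_view(texture, depth, shift_scale):
    """Forward-warp a 1-D scanline to a virtual viewpoint.

    Each pixel is shifted by a disparity proportional to 1/depth
    (nearer pixels move more). Target positions left unfilled are
    disocclusions that a real renderer must inpaint or blend from
    a second reference view; overlaps would need a z-test, which
    this toy loop omits.
    """
    width = texture.shape[0]
    warped = np.full(width, np.nan)                    # NaN marks holes
    disparity = np.round(shift_scale / depth).astype(int)
    for x in range(width):
        tx = x + disparity[x]
        if 0 <= tx < width:
            warped[tx] = texture[x]                    # last writer wins (no z-test)
    holes = int(np.isnan(warped).sum())                # disoccluded pixel count
    return warped, holes
```

Counting the `NaN` holes after warping mirrors the paper's measurement of disoccluded pixels as the virtual view moves away from the reference cameras.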