
Published in

MDPI, Applied Sciences, 14(8), 3335, 2024

DOI: 10.3390/app14083335


Holoscopic Elemental-Image-Based Disparity Estimation Using Multi-Scale, Multi-Window Semi-Global Block Matching

Journal article published in 2024 by Bodor Almatrouk, Hongying Meng, Mohammad Rafiq Swash
This paper is made freely available by the publisher.


Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving allowed
Data provided by SHERPA/RoMEO

Abstract

In Holoscopic imaging, a single aperture acquires full-colour spatial images in a manner similar to a fly's eye: a micro-lens array captures the scene through neighbouring lenses at slightly different angles. Because its data collection and visualisation are simple, provide robust and scalable spatial information, and support motion parallax, binocular disparity, and convergence, this technique may overcome limitations of traditional 2D imaging such as depth, scalability, and multi-perspective problems. A novel disparity estimation method uses the angular information in the micro-images, or Elemental Images (EIs), of a single Holoscopic image to generate the scene's disparity map. Little prior research has used EIs rather than Viewpoint Images (VPIs) for disparity estimation. This study investigates whether angular perspective data can replace spatial orthographic data. Because EIs have low resolution and little texture, they are pre-processed with noise reduction and contrast enhancement before disparity is calculated. The Semi-Global Block Matching (SGBM) technique is used to compute the disparity between EI pixels. A multi-resolution approach overcomes the EIs' resolution constraints, and a content-aware analysis dynamically adjusts the SGBM window size to handle regions of differing texture and complexity. A background mask, together with neighbouring EIs that have accurate backgrounds, is used to detect and correct EIs with erroneous backgrounds. On real images, our method generates disparity maps that outperform those of two state-of-the-art deep learning algorithms and of VPI-based estimation.
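
To make the pipeline described in the abstract concrete, the sketch below shows how its main steps (noise reduction, contrast enhancement, multi-resolution upsampling, content-aware window selection, and SGBM matching between neighbouring EIs) could be approximated with OpenCV. This is a minimal illustration under stated assumptions, not the authors' implementation: OpenCV's StereoSGBM stands in for the paper's SGBM step, and the texture thresholds, upsampling factor, and helper names are hypothetical.

# Sketch of EI-based disparity estimation with pre-processing and a
# content-aware SGBM window. Parameter values and thresholds are
# illustrative assumptions, not taken from the paper.
import cv2
import numpy as np


def preprocess_ei(ei_gray: np.ndarray, upscale: int = 2) -> np.ndarray:
    """Denoise, enhance contrast, and upsample a low-resolution elemental image (EI)."""
    # Noise reduction: EIs are small and often noisy.
    denoised = cv2.fastNlMeansDenoising(ei_gray, None, h=10,
                                        templateWindowSize=7, searchWindowSize=21)
    # Local contrast enhancement to recover weak texture.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(denoised)
    # Multi-resolution step: upsample so block matching has more pixels to work with.
    for _ in range(int(np.log2(upscale))):
        enhanced = cv2.pyrUp(enhanced)
    return enhanced


def content_aware_block_size(ei_gray: np.ndarray) -> int:
    """Choose an SGBM window size from local texture (Laplacian variance):
    small windows for textured EIs, larger windows for flat ones."""
    texture = cv2.Laplacian(ei_gray, cv2.CV_64F).var()
    if texture > 100.0:   # strongly textured (threshold is an assumption)
        return 5
    if texture > 20.0:    # moderately textured
        return 9
    return 15             # nearly textureless


def ei_disparity(left_ei: np.ndarray, right_ei: np.ndarray) -> np.ndarray:
    """Disparity between two horizontally adjacent EIs via Semi-Global Block Matching."""
    left = preprocess_ei(left_ei)
    right = preprocess_ei(right_ei)
    block = content_aware_block_size(left)
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=16,           # must be a multiple of 16
        blockSize=block,
        P1=8 * block * block,        # smoothness penalties scaled by window area
        P2=32 * block * block,
        uniquenessRatio=10,
        speckleWindowSize=50,
        speckleRange=2,
        mode=cv2.STEREO_SGBM_MODE_SGBM_3WAY,
    )
    # StereoSGBM returns fixed-point disparities scaled by 16.
    disp = sgbm.compute(left, right).astype(np.float32) / 16.0
    return disp

In the full method this is applied across pairs of adjacent EIs over the whole micro-lens array and the results are combined into a single disparity map; the background-mask correction of EIs with erroneous backgrounds is omitted from this sketch.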