Published in

Bioengineered and Bioinspired Systems II

DOI: 10.1117/12.608399

Multifocus fusion with oriented windows

Proceedings article published in 2005 by F. Sroubek, S. Gabarda, R. Redondo, S. Fischer, G. Cristobal
This paper is available in a repository.

Preprint: archiving forbidden
Postprint: archiving allowed
Published version: archiving allowed
Data provided by SHERPA/RoMEO

Abstract

A wide variety of image fusion techniques exist. A key concept common to most of them is the "decision map", which determines which information to take and from where. Multifocus fusion deals with a stack of images acquired with different focus settings. In this case, the task of the decision map is to label the parts of each image that are in focus. If the focus distance for each image in the stack is known, the decision map also defines a depth map that can be used for 3D surface reconstruction. The accuracy of the decision map is critical not only for the image fusion itself, but even more so for the surface reconstruction, where erroneous decisions can produce unrealistic glitches. We propose to use information about image edges to increase the accuracy of the decision map, thereby enhancing a standard wavelet-based fusion approach. We demonstrate the performance on real multifocus data under different noise levels.
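
To illustrate the generic decision-map idea described above (not the oriented-window, wavelet-based scheme proposed in the paper), the following Python sketch fuses a focus stack by selecting, per pixel, the slice with the largest local Laplacian energy. The function name, the window size, and the choice of focus measure are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def fuse_multifocus(stack, window=9):
    """Naive multifocus fusion: per pixel, keep the slice whose local
    Laplacian energy (a simple focus measure) is largest.

    stack  : array-like of shape (N, H, W), the focus stack
    window : side length of the square averaging window (assumed default)
    Returns the fused image and the decision map (index of winning slice).
    """
    stack = np.asarray(stack, dtype=float)
    # Focus measure per slice: Laplacian energy averaged over a local window.
    focus = np.stack([uniform_filter(laplace(img) ** 2, size=window)
                      for img in stack])
    # Decision map: index of the sharpest slice at every pixel.
    decision_map = np.argmax(focus, axis=0)
    # Pick the winning pixel from each slice; with known focus distances,
    # decision_map can also be mapped to a coarse depth map.
    fused = np.take_along_axis(stack, decision_map[None], axis=0)[0]
    return fused, decision_map
```

The paper's contribution replaces this kind of isotropic, per-pixel measure with edge-oriented windows inside a wavelet-based fusion framework, making the decision map more robust, particularly under noise.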