Published in

Elsevier, Expert Systems with Applications, 41(16), pp. 7425-7435

DOI: 10.1016/j.eswa.2014.05.043

A novel approach for multimodal medical image fusion

Journal article published in 2014 by Zhaodong Liu, Hongpeng Yin, Yi Chai, and Simon X. Yang
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving forbidden
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Fusion of multimodal medical images increases robustness and enhances accuracy in biomedical research and clinical diagnosis, and it has attracted much attention over the past decade. In this paper, an efficient multimodal medical image fusion approach based on compressive sensing is presented to fuse computed tomography (CT) and magnetic resonance imaging (MRI) images. The significant sparse coefficients of the CT and MRI images are acquired via a multi-scale discrete wavelet transform. A proposed weighted fusion rule is used to fuse the high-frequency coefficients of the source medical images, while a pulse coupled neural network (PCNN) fusion rule is exploited to fuse the low-frequency coefficients. A random Gaussian matrix is used to encode and measure the coefficients. The fused image is then reconstructed via the Compressive Sampling Matching Pursuit (CoSaMP) algorithm. To show the efficiency of the proposed approach, several comparative experiments are conducted. The results reveal that the proposed approach achieves better fused-image quality than existing state-of-the-art methods. Furthermore, the proposed fusion approach offers high stability, good flexibility, and low time consumption.
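The reconstruction step the abstract names, CoSaMP recovery from random Gaussian measurements, can be sketched in a generic form. This is a minimal noiseless CoSaMP on a synthetic sparse vector, not the authors' implementation; the problem sizes, the `cosamp` function, and all variable names are illustrative assumptions.

```python
import numpy as np

def cosamp(Phi, y, s, max_iter=30, tol=1e-8):
    """Recover an s-sparse vector x from measurements y = Phi @ x.

    Generic textbook CoSaMP; parameters here are illustrative, not
    taken from the paper.
    """
    m, n = Phi.shape
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(max_iter):
        # Signal proxy: correlate the residual with the columns of Phi,
        # then keep the 2s largest correlations.
        proxy = Phi.T @ residual
        omega = np.argpartition(np.abs(proxy), -2 * s)[-2 * s:]
        # Merge the new candidates with the current support estimate.
        support = np.union1d(omega, np.flatnonzero(x)).astype(int)
        # Least-squares fit of y on the merged support.
        b = np.zeros(n)
        b[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        # Prune back to the s largest entries.
        x = np.zeros(n)
        keep = np.argpartition(np.abs(b), -s)[-s:]
        x[keep] = b[keep]
        residual = y - Phi @ x
        if np.linalg.norm(residual) < tol:
            break
    return x

# Synthetic demo: a 4-sparse signal measured by a random Gaussian matrix,
# standing in for the sparse wavelet coefficients described in the abstract.
rng = np.random.default_rng(0)
n, m, s = 64, 32, 4
x_true = np.zeros(n)
x_true[[3, 17, 40, 55]] = [1.5, -2.0, 0.8, 1.1]
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian measurement matrix
y = Phi @ x_true                                # compressive measurements
x_hat = cosamp(Phi, y, s)
```

In the noiseless setting, once CoSaMP identifies the correct support the least-squares step recovers the sparse vector exactly, which is why a well-conditioned Gaussian measurement matrix with m comfortably above the sparsity level suffices here.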