To improve both the efficiency and the quality of image fusion, a new image fusion algorithm based on four-direction Sparse Representation (SR) and a fast Non-Subsampled Contourlet Transform (NSCT) is proposed. The method first decomposes the source images into a series of low- and high-frequency sub-bands via fast NSCT. An adaptive over-complete DCT dictionary is then used for fast four-direction sparse representation and coefficient fusion of the low-pass sub-band, while a Gaussian-weighted regional-energy fusion rule is applied to the high-pass sub-bands. Fast NSCT replaces the tree-structured filter bank of traditional NSCT with a multi-channel structure, saving roughly half of the decomposition time. The fast SR fusion stage adopts four-direction sparse representation for coefficient fusion instead of the traditional sliding-window scheme, which further improves the efficiency of the algorithm. Experimental results show that the proposed fast fusion algorithm speeds up fusion by nearly a factor of 20 without sacrificing fusion quality.
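The high-pass fusion rule named in the abstract can be made concrete with a short sketch. The following Python snippet is a minimal illustration, not the authors' implementation: each high-pass coefficient is taken from whichever source sub-band has the larger Gaussian-weighted local energy around that position. The window size and standard deviation are illustrative assumptions, and the fast NSCT decomposition and the four-direction SR stage are assumed to be supplied elsewhere.

import numpy as np
from scipy.ndimage import convolve

def gaussian_window(size=3, sigma=1.0):
    """Normalized 2-D Gaussian weighting window (parameters are illustrative)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    w = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return w / w.sum()

def fuse_highpass(band_a, band_b, size=3, sigma=1.0):
    """Fuse two high-pass sub-bands by Gaussian-weighted regional energy.

    Sketch of the rule described in the abstract, not the paper's code:
    at each pixel, keep the coefficient whose window-weighted local
    energy is larger.
    """
    w = gaussian_window(size, sigma)
    energy_a = convolve(band_a ** 2, w, mode='reflect')  # regional energy of A
    energy_b = convolve(band_b ** 2, w, mode='reflect')  # regional energy of B
    return np.where(energy_a >= energy_b, band_a, band_b)

Selecting by regional rather than pointwise energy makes the rule robust to isolated noisy coefficients, and the Gaussian weighting emphasizes the window center, consistent with the goal of preserving locally salient detail.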