Remote Sensing for Natural Resources    2022, Vol. 34 Issue (2) : 47-55     DOI: 10.6046/zrzyyg.2021179
Knowledge-based remote sensing image fusion method
KONG Ailing1, ZHANG Chengming1, LI Feng2, HAN Yingjuan3, SUN Huanying4, DU Manfei4
1. College of Information Science and Engineering, Shandong Agricultural University, Taian 271018, China
2. Shandong Provincial Climate Center, Jinan 250031, China
3. Key Laboratory for Meteorological Disaster Monitoring and Early Warning and Risk Management of Characteristic Agriculture in Arid Regions, CMA, Yinchuan 750002, China
4. PIESAT Information Technology Co., Ltd., Beijing 100195, China
Abstract  

Remote sensing image fusion combines multi-source images carrying complementary information into a single image with richer content and higher spectral quality, making it a key foundation of remote sensing applications. To address the spectral distortion and spatial-structure distortion that commonly arise during fusion, this study constructs a knowledge-based remote sensing image FuseNet (RSFuseNet) built on an attention mechanism, using the normalized difference vegetation index (NDVI) and the normalized difference water index (NDWI) as prior knowledge. First, since high-pass filtering effectively extracts edge and texture details, a high-pass filtering module was built to extract the high-frequency details of panchromatic images. Second, NDVI and NDWI were derived from the multispectral images. Next, an adaptive squeeze-and-excitation (SE) module was constructed to recalibrate the input features. Finally, the adaptive SE module was combined with convolution units to fuse the input features. Experiments were conducted on Gaofen-6 (GF-6) imagery, with Gram-Schmidt (GS) transformation, principal component analysis (PCA), nearest-neighbor diffusion (NNDiffuse), the deep network architecture for pan-sharpening (PanNet), and pansharpening by convolutional neural networks (PNN) as comparison models. The results show that RSFuseNet achieves a peak signal-to-noise ratio (PSNR) of 40.5 and a structural similarity (SSIM) of 0.98, both better than those of the comparison models, indicating that the proposed method has clear advantages in remote sensing image fusion.
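The first two steps the abstract outlines, prior-knowledge indices from the multispectral bands and high-frequency detail from the panchromatic band, can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the 3x3 box-blur kernel and the epsilon guard against zero denominators are assumptions.

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized difference vegetation index: (NIR - R) / (NIR + R)."""
    return (nir - red) / (nir + red + eps)

def ndwi(green, nir, eps=1e-8):
    """Normalized difference water index: (G - NIR) / (G + NIR)."""
    return (green - nir) / (green + nir + eps)

def high_pass(pan):
    """High-frequency detail of a panchromatic band: subtract a
    3x3 box-blurred copy (edge padding keeps the output size)."""
    h, w = pan.shape
    padded = np.pad(pan, 1, mode="edge")
    blurred = sum(
        padded[i:i + h, j:j + w] for i in range(3) for j in range(3)
    ) / 9.0
    return pan - blurred
```

On a constant image the high-pass output is zero everywhere, which is the expected behavior for a detail extractor.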

Keywords: image fusion; prior knowledge; convolutional neural network; attention mechanism; GF-6
ZTFLH:  TP79  
Corresponding Authors: ZHANG Chengming     E-mail: 2019110580@sdau.edu.cn;chming@sdau.edu.cn
Issue Date: 20 June 2022
Cite this article:   
Ailing KONG, Chengming ZHANG, Feng LI, et al. Knowledge-based remote sensing image fusion method[J]. Remote Sensing for Natural Resources, 2022, 34(2): 47-55.
URL:  
https://www.gtzyyg.com/EN/10.6046/zrzyyg.2021179     OR     https://www.gtzyyg.com/EN/Y2022/V34/I2/47
Camera type: off-axis TMA all-reflective. Swath width: over 90 km. Static MTF: better than 0.2. Quantization: 12 bit. Radiometric calibration: absolute accuracy better than 7%, relative accuracy better than 3%.

Band | Spectral range/μm | Nadir ground pixel resolution
P    | 0.45~0.90         | Better than 2 m
B1   | 0.45~0.52         | Better than 8 m
B2   | 0.52~0.60         | Better than 8 m
B3   | 0.63~0.69         | Better than 8 m
B4   | 0.76~0.90         | Better than 8 m
Tab.1  Main parameters of the GF-6 satellite
Fig.1  Basic structure of the remote sensing image FuseNet
Fig.2  The structure diagram of adaptive squeeze-and-excitation module
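Fig.2 describes the adaptive SE module used to recalibrate input features. Below is a minimal NumPy sketch of the standard squeeze-and-excitation operation it builds on; the paper's adaptive variant is not reproduced, and the two dense weight matrices `w1`/`w2`, the ReLU, and the sigmoid follow the original SE design rather than this paper's specifics.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_recalibrate(x, w1, w2):
    """Squeeze-and-excitation on a feature map x of shape (C, H, W):
    global-average-pool each channel ('squeeze'), pass the pooled vector
    through two small dense layers ('excitation'), then rescale each
    channel by its learned weight in (0, 1)."""
    z = x.mean(axis=(1, 2))                     # squeeze: shape (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))   # excitation: ReLU then sigmoid
    return x * s[:, None, None]                 # channel-wise recalibration
```

Each output channel is the input channel scaled by a single factor between 0 and 1, which is the recalibration behavior the module is meant to provide.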
Model     | Description
GS transform | Fuses by finding the best statistical match between the bands involved in the fusion
PCA       | Treats the multispectral bands as a multidimensional dataset and derives principal components via a PCA transform
NNDiffuse | Image fusion algorithm based mainly on nearest-neighbor diffusion interpolation
PNN       | Structure similar to RSFuseNet; extracts features with three convolutional layers
PanNet    | Extracts features with a two-branch convolutional neural network
Tab.2  Models used in the comparison experiment
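The PCA comparison model in Tab.2 belongs to the component-substitution family: transform the multispectral bands, replace the first principal component with the panchromatic band, and invert. A hedged NumPy sketch follows; implementations differ in how the PAN band is matched to PC1 (simple mean/std matching is used here as a stand-in for histogram matching).

```python
import numpy as np

def pca_pansharpen(ms, pan, eps=1e-8):
    """Component-substitution fusion sketch.
    ms: (bands, H, W) multispectral cube already resampled to the PAN grid;
    pan: (H, W) panchromatic band."""
    b, h, w = ms.shape
    flat = ms.reshape(b, -1)
    mean = flat.mean(axis=1, keepdims=True)
    centered = flat - mean
    cov = centered @ centered.T / centered.shape[1]
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, ::-1]                      # descending eigenvalue order
    if vecs[:, 0].sum() < 0:                  # fix PCA sign ambiguity on PC1
        vecs[:, 0] = -vecs[:, 0]
    pcs = vecs.T @ centered                   # principal components
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / (p.std() + eps)      # match PAN to PC1 statistics
    pcs[0] = p * pcs[0].std() + pcs[0].mean() # substitute PC1 with PAN
    fused = vecs @ pcs + mean                 # invert the PCA transform
    return fused.reshape(b, h, w)
```

As a sanity check, if every multispectral band is identical to the PAN band, the substitution changes nothing and the fused cube equals the input.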
Fig.3  Comparison of experimental model results
Fig.4  Comparison of experimental model results
Fig.5  Comparison of experimental model results
Fig.6  Comparison of experimental model results
Metric | NNDiffuse | GS    | PCA   | PNN   | PanNet | RSFuseNet
PSNR↑  | 21.13     | 24.26 | 26.16 | 35.50 | 28.02  | 40.50
SSIM↑  | 0.92      | 0.92  | 0.90  | 0.97  | 0.91   | 0.98
SAM↓   | 4.62      | 3.68  | 3.57  | 3.71  | 4.22   | 2.09
Tab.3  Comparison of model performance metrics
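The three metrics in Tab.3 can be computed as sketched below. This is illustrative NumPy, not the paper's evaluation code: `ssim_global` uses a single global window rather than the sliding-window SSIM typically reported, so its values are only indicative of the formula's structure.

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=1.0):
    """Structural similarity over one global window (higher is better)."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def sam_degrees(ref, test, eps=1e-8):
    """Mean spectral angle mapper in degrees (lower is better);
    inputs shaped (bands, pixels)."""
    dot = (ref * test).sum(axis=0)
    denom = np.linalg.norm(ref, axis=0) * np.linalg.norm(test, axis=0) + eps
    ang = np.degrees(np.arccos(np.clip(dot / denom, -1.0, 1.0)))
    return ang.mean()
```

Identical images give infinite PSNR and SSIM of 1, and a spectrum scaled by any positive constant gives a spectral angle of (numerically) zero, matching the directions of the arrows in Tab.3.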