自然资源遥感 (Remote Sensing for Natural Resources), 2022, Vol. 34, Issue 2: 47-55. DOI: 10.6046/zrzyyg.2021179
Technical Methods
Knowledge-based remote sensing image fusion method
KONG Ailing1(), ZHANG Chengming1(), LI Feng2, HAN Yingjuan3, SUN Huanying4, DU Manfei4
1. College of Information Science and Engineering, Shandong Agricultural University, Taian 271018, China
2. Shandong Provincial Climate Center, Jinan 250031, China
3. Key Laboratory for Meteorological Disaster Monitoring and Early Warning and Risk Management of Characteristic Agriculture in Arid Regions, CMA, Yinchuan 750002, China
4. PIESAT Information Technology Co., Ltd., Beijing 100195, China

Abstract

Remote sensing image fusion combines multi-source images that carry complementary information into a single image with richer content and higher spectral quality, and is therefore a key foundation of many remote sensing applications. To address the spectral distortion and spatial-structure distortion that commonly arise during fusion, this study built a knowledge-guided fusion model, remote sensing image FuseNet (RSFuseNet), based on the attention mechanism and using the normalized difference vegetation index (NDVI) and the normalized difference water index (NDWI) as prior knowledge. First, because high-pass filtering extracts edge and texture details well, a high-pass filtering module was constructed to extract the high-frequency details of the panchromatic image. Second, NDVI and NDWI were computed from the multispectral image. Third, an adaptive squeeze-and-excitation (SE) module was constructed to recalibrate the input features. Finally, the adaptive SE module was combined with convolution units to fuse the input features. Experiments used Gaofen-6 (GF-6) imagery as the data source, with Gram-Schmidt (GS) transformation, principal component analysis (PCA), PanNet (a deep network architecture for pan-sharpening), and PNN (pansharpening by convolutional neural networks) as comparison models. The results show that the peak signal-to-noise ratio (PSNR, 40.5) and structural similarity (SSIM, 0.98) of RSFuseNet exceed those of all comparison models, indicating a clear advantage of the proposed method for remote sensing image fusion.
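As a concrete illustration of the prior-knowledge step, NDVI and NDWI are per-pixel band ratios computed from the green, red, and near-infrared channels of the multispectral image. A minimal NumPy sketch (the band values below are illustrative, not actual GF-6 data):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """NDVI = (NIR - R) / (NIR + R); high over healthy vegetation."""
    return (nir - red) / (nir + red + eps)

def ndwi(green: np.ndarray, nir: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """NDWI (McFeeters form) = (G - NIR) / (G + NIR); high over open water."""
    return (green - nir) / (green + nir + eps)

# Toy 2x2 reflectance bands (illustrative values only)
nir = np.array([[0.40, 0.50], [0.60, 0.10]])
red = np.array([[0.10, 0.20], [0.10, 0.30]])
green = np.array([[0.20, 0.20], [0.10, 0.40]])

veg_prior = ndvi(nir, red)      # fed to the network as prior knowledge
water_prior = ndwi(green, nir)
```

The small `eps` term guards against division by zero over dark pixels; it barely perturbs the index values.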

Key words: image fusion; prior knowledge; convolutional neural network; attention mechanism; GF-6
Received: 2021-06-03; Published: 2022-06-20
CLC number: TP79
Funding: Natural Science Foundation of Shandong Province, "Fine-scale extraction of the spatial distribution of winter wheat from multi-source remote sensing imagery" (ZR2021MD097) and "Remote sensing image segmentation based on knowledge fusion" (ZR2020MF130); Open Fund of the Key Laboratory for Meteorological Disaster Monitoring and Early Warning and Risk Management of Characteristic Agriculture in Arid Regions, CMA, "Intelligent extraction of crop planting information in arid regions" (CAMF-202001); Basic Research Program of Qinghai Province, "Soil moisture monitoring of rapeseed fields based on remote sensing image super-resolution" (2021-ZJ-739); and directive project of the Key Laboratory for Meteorological Disaster Monitoring and Early Warning and Risk Management of Characteristic Agriculture in Arid Regions, CMA, "Growth assessment of wheat and rice at different growth stages based on remote sensing" (CAMP-201916)
Corresponding author: ZHANG Chengming
About the first author: KONG Ailing (1996-), female, master's student, mainly engaged in computer vision research. Email: 2019110580@sdau.edu.cn
Cite this article:
KONG Ailing, ZHANG Chengming, LI Feng, HAN Yingjuan, SUN Huanying, DU Manfei. Knowledge-based remote sensing image fusion method. Remote Sensing for Natural Resources, 2022, 34(2): 47-55.
Link to this article:
https://www.gtzyyg.com/CN/10.6046/zrzyyg.2021179  or  https://www.gtzyyg.com/CN/Y2022/V34/I2/47
Camera type: off-axis TMA all-reflective

| Band | Spectral range/μm | Ground pixel size at nadir |
|------|-------------------|----------------------------|
| P    | 0.45~0.90         | better than 2 m            |
| B1   | 0.45~0.52         | better than 8 m            |
| B2   | 0.52~0.60         | better than 8 m            |
| B3   | 0.63~0.69         | better than 8 m            |
| B4   | 0.76~0.90         | better than 8 m            |

Swath width: greater than 90 km; static MTF: better than 0.2; quantization: 12 bit; radiometric calibration accuracy: absolute better than 7%, relative better than 3%
Tab.1 Main parameters of the GF-6 satellite
Fig.1 Basic structure of the RSFuseNet model
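The high-pass filtering module extracts the high-frequency detail of the panchromatic image. One common realization is to subtract a low-pass (blurred) version of the image from itself; the box-blur kernel below is an assumption for illustration, since the exact filter is not specified here:

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Mean filter of size k x k, via summed shifts of an edge-padded image."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def high_pass(pan: np.ndarray, k: int = 3) -> np.ndarray:
    """High-frequency detail: the image minus its low-pass version.
    Flat regions map to ~0; edges and texture produce strong responses."""
    return pan - box_blur(pan, k)
```

A flat image yields an all-zero response, while a brightness step yields nonzero values only near the edge, which is exactly the detail signal a pan-sharpening network wants to inject.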
Fig.2 Structure of the adaptive SE module
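The recalibration performed by the adaptive SE module follows the standard squeeze-and-excitation pattern: global average pooling per channel, a small two-layer gating network, then channel-wise rescaling. A minimal NumPy sketch, where the weight matrices `w1` and `w2` are hypothetical stand-ins for the learned fully connected layers:

```python
import numpy as np

def se_block(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-excitation over a (C, H, W) feature map.
    Squeeze: per-channel global average pooling -> (C,)
    Excite:  FC -> ReLU -> FC -> sigmoid       -> channel weights in (0, 1)
    Scale:   multiply each channel of x by its weight."""
    z = x.mean(axis=(1, 2))                                      # squeeze
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))   # excite
    return x * s[:, None, None]                                  # recalibrate
```

With all-zero weights the sigmoid outputs 0.5 everywhere, so every channel is uniformly halved; trained weights instead learn to emphasize informative channels (e.g. those aligned with the NDVI/NDWI priors) and suppress the rest.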
| Model | Description |
|-------|-------------|
| GS transform | Fuses by statistically finding the best match among the participating bands |
| PCA | Treats the multispectral bands as a multi-dimensional dataset and derives principal components via PCA |
| NNDiffuse | An image fusion algorithm based mainly on nearest-neighbor interpolation |
| PNN | Similar in structure to RSFuseNet; extracts features with three convolutional layers |
| PanNet | Extracts features with a two-branch convolutional neural network |
Tab.2 Models used in the comparison experiments
Fig.3 Comparison results for region 1
Fig.4 Comparison results for region 2
Fig.5 Comparison results for region 3
Fig.6 Comparison results for region 4
| Metric | NNDiffuse | GS    | PCA   | PNN   | PanNet | RSFuseNet |
|--------|-----------|-------|-------|-------|--------|-----------|
| PSNR↑  | 21.13     | 24.26 | 26.16 | 35.50 | 28.02  | 40.50     |
| SSIM↑  | 0.92      | 0.92  | 0.90  | 0.97  | 0.91   | 0.98      |
| SAM↓   | 4.62      | 3.68  | 3.57  | 3.71  | 4.22   | 2.09      |
Tab.3 Performance metrics of the compared models
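The PSNR and SSIM figures in Tab.3 follow standard definitions. A minimal sketch of both, where `ssim_global` is a deliberate single-window simplification of the sliding-window SSIM normally used in such comparisons:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    """SSIM computed once over the whole image -- a simplification of the
    usual sliding-window SSIM (which averages this quantity over patches)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give infinite PSNR and SSIM of exactly 1; any distortion lowers both, which is why higher values in Tab.3 indicate better fusion quality (SAM, by contrast, measures spectral angle and is better when lower).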