Remote Sensing for Natural Resources, 2025, Vol. 37, Issue (6): 97-106    DOI: 10.6046/gyzyyg.2024222
Pansharpening of hyperspectral remote sensing images based on feature enhancement and Three-Stream Transformer
ZHANG Jie1, WANG Hengyou1,2, HUO Lianzhi3
1. School of Science, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
2. Institute of Big Data Modeling Theory and Technology, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
3. Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100044, China

Abstract

Pansharpening of remote sensing images refers to the fusion of panchromatic images (PAN) with low-spatial-resolution hyperspectral (or multispectral) images (LR-HSI/LRMS) to produce high-spatial-resolution hyperspectral (or multispectral) images (HR-HSI/HRMS). Deep learning-based pansharpening methods have matured considerably, yet several challenges remain, including inadequate feature extraction, insufficient guidance for information fusion, and oversimplified single-stage architectures, resulting in HR-HSI imagery with compromised spatial and spectral fidelity. To address these issues, this paper proposes a two-stage pansharpening method for hyperspectral images based on feature enhancement and a Three-Stream Transformer architecture. In the first stage, a preliminarily enhanced hyperspectral image (HSI) is generated by a feature enhancement module and a multi-scale fusion module: the feature enhancement module strengthens spatial and spectral information across multiple scales, while the multi-scale fusion module integrates the enhanced HSI across those scales. In the second stage, the preliminarily enhanced HSI, the PAN image, and the image produced by fusing the two are treated as three separate feature streams under the Transformer's attention mechanism. These streams are transformed into the Q (query), K (key), and V (value) matrices via linear layers, followed by multi-head attention computation, which effectively guides the extraction and fusion of spatial and spectral information. The enhanced HSI and an additional fusion module then further refine image quality, yielding HR-HSI results with richer spatial and spectral details. Validation experiments were conducted on three classic hyperspectral datasets. The results demonstrate that the proposed method outperforms both conventional and existing deep learning-based approaches in quantitative evaluation metrics; qualitatively, it better preserves the spectral information of the HSI and the spatial details of the PAN image, producing more realistic HR-HSI images.
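The second-stage mechanism described above can be sketched compactly. This is a minimal NumPy illustration, not the paper's implementation: the projections are random stand-ins for learned linear layers, and the particular assignment of streams to Q, K, and V (fused stream → Q, PAN → K, enhanced HSI → V) is an assumption, since the abstract only states that the three streams are mapped to the three matrices.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def three_stream_attention(hsi_feat, pan_feat, fused_feat, n_heads=12, rng=None):
    """Multi-head attention over three feature streams, each (tokens, dim).

    Sketch only: projections are random stand-ins for learned linear layers,
    and the stream-to-Q/K/V assignment is an assumption.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = fused_feat.shape
    assert d % n_heads == 0, "model dim must divide evenly into heads"
    dh = d // n_heads
    # Hypothetical learned projections (random here, for shape illustration).
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q = fused_feat @ Wq   # queries from the fused stream (assumption)
    K = pan_feat @ Wk     # keys from the PAN stream (assumption)
    V = hsi_feat @ Wv     # values from the enhanced-HSI stream (assumption)
    # Split into heads: (n_heads, tokens, dh)
    split = lambda M: M.reshape(n, n_heads, dh).transpose(1, 0, 2)
    Qh, Kh, Vh = split(Q), split(K), split(V)
    # Scaled dot-product attention per head, then merge heads back to (n, d).
    attn = softmax(Qh @ Kh.transpose(0, 2, 1) / np.sqrt(dh))
    return (attn @ Vh).transpose(1, 0, 2).reshape(n, d)
```

The default of 12 heads mirrors the best setting found in the ablation study (Tab.5).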

Key words: hyperspectral remote sensing imagery; pansharpening; feature enhancement; Three-Stream Transformer; multi-scale fusion
Received: 2024-06-21      Published: 2025-12-31
CLC number: TP751.1
Funding: National Natural Science Foundation of China projects "Research on Robustness Enhancement of Deep Neural Networks Based on High-Order TV Regularized Low-Rank Matrix Reconstruction" (62072024) and "Research on Key Technologies for Recurrent Neural Network Modeling of Spatiotemporal Changes in Forest Patterns in Grain-for-Green Areas" (41971396); BUCEA Outstanding Young Scholars project "Data-Driven Learning of Partial Differential Equations and Its Application in Image Reconstruction" (JDJQ20220805); BUCEA Postgraduate Research Fund project "Pansharpening of Remote Sensing Images in Real-World Scenes Based on Deep Learning Networks" (PG2024149)
Corresponding author: WANG Hengyou (1982-), male, Ph.D., professor, doctoral supervisor. His research interests include computer vision and image analysis, sparse representation, and low-rank matrix theory and its application to image reconstruction. Email: wanghengyou@bucea.edu.cn
About the author: ZHANG Jie (1999-), male, master's student. His research interests include remote sensing image processing. Email: z1752678k@163.com
Cite this article:
ZHANG Jie, WANG Hengyou, HUO Lianzhi. Pansharpening of hyperspectral remote sensing images based on feature enhancement and Three-Stream Transformer. Remote Sensing for Natural Resources, 2025, 37(6): 97-106.
Link to this article:
https://www.gtzyyg.com/CN/10.6046/gyzyyg.2024222      or      https://www.gtzyyg.com/CN/Y2025/V37/I6/97
Fig.1  Overall network architecture
Fig.2  Feature enhancement module and fusion module
Fig.3  Multi-scale information fusion module
Method | CC/% | SAM | RMSE/% | ERGAS | PSNR
PCA | 85.84 | 8.98 | 3.39 | 6.47 | 31.08
BF | 92.43 | 9.60 | 3.51 | 6.78 | 30.81
GS | 95.74 | 6.44 | 2.55 | 4.94 | 32.93
MG | 95.62 | 6.54 | 2.18 | 4.47 | 34.35
PLRDiff | 97.51 | 6.21 | 2.05 | 4.21 | 37.88
PanNet | 96.82 | 6.35 | 1.92 | 3.89 | 35.61
HyperPNN | 96.73 | 6.09 | 1.72 | 3.82 | 36.74
GPPNN | 96.53 | 6.52 | 1.91 | 4.01 | 35.35
HyperKite | 98.04 | 5.61 | 1.29 | 2.85 | 38.97
HyperTransformer | 98.84 | 4.67 | 1.24 | 2.31 | 41.56
Proposed method | 98.92 | 3.79 | 0.88 | 2.07 | 42.82
Ideal value | 100.00 | 0.00 | 0.00 | 0.00 | —
Tab.1  Quantitative results on the Pavia Center dataset (reduced resolution)
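The reduced-resolution metrics reported above compare the fused image against a ground-truth reference. The sketch below uses common definitions (mean per-band Pearson correlation for CC, mean spectral angle in degrees for SAM, the standard ERGAS formula, PSNR against the reference peak); the normalization details, such as reporting RMSE relative to the reference peak in percent, are assumptions and may differ from the paper's exact protocol.

```python
import numpy as np

def quality_metrics(ref, fused, ratio=4):
    """Reduced-resolution quality metrics for (H, W, B) images.

    Minimal sketch with common definitions; normalization details
    are assumptions, not the paper's exact protocol.
    """
    r = ref.astype(np.float64).reshape(-1, ref.shape[-1])
    f = fused.astype(np.float64).reshape(-1, ref.shape[-1])
    b = r.shape[1]
    # CC: mean over bands of the Pearson correlation coefficient, in percent.
    cc = np.mean([np.corrcoef(r[:, i], f[:, i])[0, 1] for i in range(b)]) * 100
    # SAM: mean spectral angle between pixel vectors, in degrees.
    cos = np.sum(r * f, axis=1) / (
        np.linalg.norm(r, axis=1) * np.linalg.norm(f, axis=1) + 1e-12)
    sam = np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
    # RMSE, reported here relative to the reference peak in percent (assumption).
    rmse = np.sqrt(np.mean((r - f) ** 2))
    rmse_pct = rmse / r.max() * 100
    # ERGAS: 100/ratio * sqrt(mean over bands of (RMSE_b / mean_b)^2).
    rmse_b = np.sqrt(np.mean((r - f) ** 2, axis=0))
    ergas = 100.0 / ratio * np.sqrt(
        np.mean((rmse_b / (r.mean(axis=0) + 1e-12)) ** 2))
    # PSNR against the reference peak.
    psnr = 20 * np.log10(r.max() / (rmse + 1e-12))
    return dict(CC=cc, SAM=sam, RMSE=rmse_pct, ERGAS=ergas, PSNR=psnr)
```

For a perfect reconstruction these return the ideal values in the tables: CC = 100, SAM = RMSE = ERGAS = 0, and PSNR unbounded.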
Method | CC/% | SAM | RMSE/% | ERGAS | PSNR
PCA | 94.81 | 2.38 | 1.97 | 2.31 | 39.95
BF | 92.93 | 2.46 | 1.86 | 2.37 | 39.91
GS | 95.10 | 2.31 | 1.95 | 2.19 | 40.21
MG | 96.04 | 2.07 | 1.51 | 1.74 | 41.85
PLRDiff | 97.14 | 1.93 | 1.33 | 1.82 | 42.07
PanNet | 93.36 | 2.17 | 1.53 | 2.71 | 40.41
HyperPNN | 97.28 | 1.74 | 1.18 | 1.44 | 43.45
GPPNN | 96.21 | 1.90 | 1.41 | 1.65 | 42.05
HyperKite | 98.13 | 1.46 | 1.03 | 1.22 | 44.97
HyperTransformer | 98.04 | 1.33 | 0.94 | 1.15 | 45.74
Proposed method | 98.54 | 1.25 | 0.92 | 1.11 | 45.91
Ideal value | 100.00 | 0.00 | 0.00 | 0.00 | —
Tab.2  Quantitative results on the Botswana dataset (reduced resolution)
Fig.4  Visual results on the Pavia Center dataset (reduced resolution)
Fig.5  Residual visualizations of the corresponding Pavia Center images (reduced resolution)
Fig.6  Visual results on the Botswana dataset (reduced resolution)
Fig.7  Residual visualizations of the corresponding Botswana images (reduced resolution)
Fig.8  Differences between the pansharpening results of different networks and the ground truth on the Pavia Center (100,100) dataset
Fig.9  Differences between the pansharpening results of different networks and the ground truth on the Botswana (30,30) dataset
Method | Total parameters | Time/s | PSNR
PanNet[15] | 8.0×10⁵ | 2.472 | 35.61
HyperPNN[13] | 1.3×10⁵ | 1.207 | 36.74
GPPNN | 3.0×10⁶ | 2.819 | 35.35
HyperKite[28] | 5.2×10⁵ | 2.413 | 38.97
HyperTransformer[24] | 1.2×10⁷ | 3.845 | 41.56
Proposed method | 5.8×10⁶ | 3.176 | 42.82
Tab.3  Comparison of running time and total network parameters on the Pavia Center dataset
Fig.10  Visual results on the Chikusei dataset (full resolution)
Method | Dλ | Ds | QNR
PCA | 0.0534 | 0.0615 | 0.8884
BF | 0.0487 | 0.0574 | 0.8967
GS | 0.0558 | 0.0801 | 0.8686
MG | 0.0414 | 0.0665 | 0.8949
PLRDiff | 0.0345 | 0.0536 | 0.9137
PanNet | 0.0257 | 0.0323 | 0.9428
HyperPNN | 0.0215 | 0.0297 | 0.9494
GPPNN | 0.0279 | 0.0357 | 0.9374
HyperKite | 0.0125 | 0.0245 | 0.9623
HyperTransformer | 0.0157 | 0.0164 | 0.9682
Proposed method | 0.0138 | 0.0151 | 0.9713
Ideal value | 0.0000 | 0.0000 | 1.0000
Tab.4  Quantitative results on the Chikusei dataset (full resolution)
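At full resolution there is no ground truth, so the no-reference QNR index combines the spectral distortion Dλ and the spatial distortion Ds. With the usual exponents α = β = 1, QNR = (1 − Dλ)(1 − Ds), which reproduces the table's QNR column directly from its first two columns:

```python
def qnr(d_lambda, d_s, alpha=1.0, beta=1.0):
    """QNR index: (1 - D_lambda)^alpha * (1 - D_s)^beta; ideal value is 1."""
    return (1 - d_lambda) ** alpha * (1 - d_s) ** beta

# Spot-checking two rows of the full-resolution table:
print(round(qnr(0.0534, 0.0615), 4))  # PCA row -> 0.8884
print(round(qnr(0.0138, 0.0151), 4))  # proposed-method row -> 0.9713
```

Computing Dλ and Ds themselves requires Q-index comparisons between band pairs and against the degraded PAN image, which is omitted here.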
N | CC/% | SAM | RMSE/% | ERGAS | PSNR
4 | 97.28 | 4.96 | 1.47 | 2.31 | 40.98
8 | 98.31 | 4.35 | 1.19 | 2.10 | 42.40
12 | 98.92 | 3.79 | 0.88 | 2.07 | 42.82
16 | 98.40 | 3.95 | 1.13 | 2.12 | 42.52
20 | 97.56 | 4.37 | 1.40 | 2.27 | 41.33
Tab.5  Ablation study on the number of attention heads (N) in the Transformer module (Pavia Center)
Method | CC/% | SAM | RMSE/% | ERGAS | PSNR
B/L | 92.07 | 6.85 | 3.07 | 5.76 | 32.54
B/L+FEM | 96.46 | 5.34 | 2.65 | 4.18 | 35.12
B/L+Transformer | 97.96 | 4.51 | 1.72 | 3.09 | 38.90
B/L+FEM+Transformer | 98.92 | 3.79 | 0.88 | 2.07 | 42.82
Tab.6  Ablation study of the FEM and Transformer modules (Pavia Center)
[1] Patil P M R, Shirashyad P V V, Pandhare P P S, et al. An overview of sustainable development applications using artificial intelligence and remote sensing in urban planning[J]. International Journal of Scientific Research in Engineering and Management, 2023, 7(10):1-11.
[2] Zhao S H, Wang Q, Li Y, et al. An overview of satellite remote sensing technology used in China’s environmental protection[J]. Earth Science Informatics, 2017, 10(2):137-148.
doi: 10.1007/s12145-017-0286-6
[3] Hamad H N, Mushref Z J. Digital processing of satellite imagery to determine the spatial distribution of vegetation cover in the neighborhoods of Ramadi city[J]. Al-Anbar University Journal for Humanities, 2024, 21(1):317-338.
[4] Martadinata I, Gibran M N, Kaunang E S, et al. Axiological review of the philosophy of defense science in efforts to improve national defense capabilities[J]. Jurnal Multidisiplin Madani, 2024, 4(3):473-480.
doi: 10.55927/mudima.v4i3
[5] Xie B, Zhang H K, Huang B. Revealing implicit assumptions of the component substitution pansharpening methods[J]. Remote Sensing, 2017, 9(5):443.
[6] Wang P, Yao H Y, Huang B, et al. Multiresolution analysis pansharpening based on variation factor for multispectral and panchromatic images from different times[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023,61: 1-17.
[7] Wu Z C, Huang T Z, Deng L J, et al. VO net:An adaptive approach using variational optimization and deep learning for panchromatic sharpening[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021,60:5401016.
[8] Wu Z C, Huang T Z, Deng L J, et al. LRTCFPan:Low-rank tensor completion based framework for pansharpening[J]. IEEE Transactions on Image Processing, 2023,32:1640-1655.
[9] Lin S Z, Han Z, Li D W, et al. Integrating model- and data-driven methods for synchronous adaptive multi-band image fusion[J]. Information Fusion, 2020,54:145-160.
[10] Cao X H, Lian Y S, Wang K X, et al. Unsupervised hybrid network of transformer and CNN for blind hyperspectral and multispectral image fusion[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024,62:5507615.
[11] Chen X, Pan J S, Lu J Y, et al. Hybrid CNN-transformer feature fusion for single image deraining[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2023, 37(1):378-386.
doi: 10.1609/aaai.v37i1.25111
[12] He L, Zhu J W, Li J, et al. HyperPNN:Hyperspectral pansharpening via spectrally predictive convolutional neural networks[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2019, 12(8):3092-3100.
[13] Yang J F, Fu X Y, Hu Y W, et al. PanNet:A deep network architecture for pan-sharpening[C]// 2017 IEEE International Conference on Computer Vision (ICCV).October 22-29,2017,Venice,Italy.IEEE, 2017:1753-1761.
[14] Deng L J, Vivone G, Jin C, et al. Detail injection-based deep convolutional neural networks for pansharpening[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(8):6995-7010.
doi: 10.1109/TGRS.2020.3031366
[15] Wang D, Li Y, Ma L, et al. Going deeper with densely connected convolutional neural networks for multispectral pansharpening[J]. Remote Sensing, 2019, 11(22):2608.
doi: 10.3390/rs11222608
[16] Guo P H, Qiu J L, Zhao S N. High frequency domain multi-depth learning network for pansharpening[J]. Remote Sensing for Natural Resources, 2024, 36(3):146-153.doi:10.6046/zrzyyg.2023133.
[17] Lu K T, Fei R R, Zhang X D. Remote sensing image pansharpening by convolutional neural network[J]. Journal of Computer Applications, 2023, 43(9):2963-2969.
doi: 10.11772/j.issn.1001-9081.2022091458
[18] Ma J Y, Yu W, Chen C, et al. Pan-GAN:An unsupervised pan-sharpening method for remote sensing image fusion[J]. Information Fusion, 2020,62:110-120.
[19] Zhou H Y, Liu Q J, Wang Y H. PGMAN:An unsupervised generative multiadversarial network for pansharpening[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021,14:6316-6327.
[20] Qu J H, Dong W Q, Li Y S, et al. An interpretable unsupervised unrolling network for hyperspectral pansharpening[J]. IEEE Transactions on Cybernetics, 2023, 53(12):7943-7956.
doi: 10.1109/TCYB.2023.3241165
[21] Rui X Y, Cao X Y, Pang L, et al. Unsupervised hyperspectral pansharpening via low-rank diffusion model[J]. Information Fusion, 2024,107:102325.
[22] Bandara W G C, Patel V M. HyperTransformer:A textural and spectral feature fusion transformer for pansharpening[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).June 18-24,2022,New Orleans,LA,USA.IEEE, 2022:1757-1767.
[23] Plaza A, Benediktsson J A, Boardman J W, et al. Recent advances in techniques for hyperspectral image processing[J]. Remote Sensing of Environment, 2009,113:S110-S122.
[24] Ungar S G, Pearlman J S, Mendenhall J A, et al. Overview of the earth observing one (EO-1) mission[J]. IEEE Transactions on Geoscience and Remote Sensing, 2003, 41(6):1149-1159.
doi: 10.1109/TGRS.2003.815999
[25] Yokoya N, Iwasaki A. Airborne hyperspectral data over Chikusei[R]. Space Application Laboratory, University of Tokyo, Tokyo, Japan, Tech. Rep. SAL-2016-05-27, 2016.
[26] Bandara W G C, Valanarasu J M J, Patel V M. Hyperspectral pansharpening based on improved deep image prior and residual reconstruction[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021,60:1-16.
[27] Zheng Y X, Li J J, Li Y S, et al. Hyperspectral pansharpening using deep prior and dual attention residual network[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 58(11):8059-8076.
[28] Wald L. Quality of high resolution synthesised images:Is there a simple criterion?[C]// Third Conference “Fusion of Earth Data:Merging Point Measurements,Raster Maps and Remotely Sensed Images”, 2000:99-103.
[29] Zeng Y N, Huang W, Liu M G, et al. Fusion of satellite images in urban area:Assessing the quality of resulting images[C]// 2010 18th International Conference on Geoinformatics. June 18-20,2010,Beijing,China. IEEE, 2010:1-4.
[30] Loncan L, De Almeida L B, Bioucas-Dias J M, et al. Hyperspectral pansharpening:A review[J]. IEEE Geoscience and Remote Sensing Magazine, 2015, 3(3):27-46.
[31] Yokoya N, Yairi T, Iwasaki A. Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion[J]. IEEE Transactions on Geoscience and Remote Sensing, 2012, 50(2):528-537.
doi: 10.1109/TGRS.2011.2161320
[32] Chavez P S, Kwarteng A Y. Extracting spectral contrast in Landsat thematic mapper image data using selective principal component analysis[J]. Photogrammetric Engineering and Remote Sensing, 1989, 55(1): 339-348.
[33] Wei Q, Dobigeon N, Tourneret J Y. Bayesian fusion of multi-band images[J]. IEEE Journal of Selected Topics in Signal Processing, 2015, 9(6):1117-1127.
doi: 10.1109/JSTSP.2015.2407855
[34] Aiazzi B, Baronti S, Selva M. Improving component substitution pansharpening through multivariate regression of MS+Pan data[J]. IEEE Transactions on Geoscience and Remote Sensing, 2007, 45(10):3230-3239.
doi: 10.1109/TGRS.2007.901007
[35] Aiazzi B, Alparone L, Baronti S, et al. Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis[J]. IEEE Transactions on Geoscience and Remote Sensing, 2002, 40(10):2300-2312.
doi: 10.1109/TGRS.2002.803623
[36] Xu S, Zhang J S, Zhao Z X, et al. Deep gradient projection networks for pan-sharpening[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).June 20-25,2021,Nashville,TN,USA.IEEE, 2021:1366-1375.
[37] Zhuo Y W, Zhang T J, Hu J F, et al. A deep-shallow fusion network with multidetail extractor and spectral attention for hyperspectral pansharpening[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2022,15:7539-7555.