Remote Sensing for Natural Resources    2025, Vol. 37 Issue (6) : 97-106     DOI: 10.6046/gyzyyg.2024222
Pansharpening of hyperspectral remote sensing images based on feature enhancement and Three-Stream Transformer
ZHANG Jie1, WANG Hengyou1,2, HUO Lianzhi3
1. School of Science, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
2. Institute of Big Data Modeling Theory and Technology, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
3. Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100044, China
Abstract  

Pansharpening of remote sensing images refers to the fusion of panchromatic (PAN) images with low-spatial-resolution hyperspectral (or multispectral) images (LR-HSI/LRMS) to produce high-spatial-resolution hyperspectral (or multispectral) images (HR-HSI/HRMS). Although deep learning-based pansharpening methods have matured considerably, the task still faces several challenges, including inadequate feature extraction, insufficient guidance for information fusion, and oversimplified single-stage architectures, which compromise the spatial and spectral fidelity of the resulting HR-HSI imagery. To address these issues, this paper proposes a two-stage pansharpening method for hyperspectral images based on feature enhancement and a Three-Stream Transformer architecture. In the first stage, preliminarily enhanced hyperspectral images (HSI) are generated using a feature enhancement module and a multi-scale fusion module: the feature enhancement module strengthens spatial and spectral information at multiple scales, while the multi-scale fusion module integrates the enhanced HSI across those scales. In the second stage, the initially enhanced HSI, the PAN image, and the image resulting from their fusion are treated as three separate feature streams for the Transformer's self-attention mechanism. These streams are transformed into the Q (query), K (key), and V (value) matrices via linear layers, followed by multi-head attention computation, which effectively guides the extraction and fusion of spatial and spectral information. Furthermore, the enhanced HSI and an additional fusion module are leveraged to refine image quality, yielding HR-HSI results with richer spatial and spectral details. Validation experiments were conducted on three classic hyperspectral datasets. The results demonstrate that the proposed method outperforms both conventional and existing deep learning-based approaches in quantitative evaluation metrics. In qualitative evaluation, it also better preserves the spectral information of the HSI and the spatial details of the PAN images, producing more realistic HR-HSI images.
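The three-stream attention described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the random projection matrices stand in for the learned linear layers, the function name is ours, and the assignment of the fused, PAN, and enhanced-HSI streams to Q, K, and V respectively is an assumption for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def three_stream_attention(hsi_feat, pan_feat, fused_feat, num_heads=4):
    """Sketch of three-stream multi-head attention: each of the three
    feature streams supplies one of Q, K, V (assumed mapping).
    All inputs are (n_tokens, dim) feature matrices."""
    n, d = hsi_feat.shape
    assert d % num_heads == 0
    dh = d // num_heads
    # Stand-ins for learned linear projections (random here)
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q = (fused_feat @ Wq).reshape(n, num_heads, dh)
    K = (pan_feat @ Wk).reshape(n, num_heads, dh)
    V = (hsi_feat @ Wv).reshape(n, num_heads, dh)
    out = np.empty_like(Q)
    for h in range(num_heads):
        # Scaled dot-product attention per head
        scores = Q[:, h] @ K[:, h].T / np.sqrt(dh)
        out[:, h] = softmax(scores, axis=-1) @ V[:, h]
    return out.reshape(n, d)
```

The division by sqrt(dh) is the usual scaling that keeps the pre-softmax scores from saturating as the per-head dimension grows.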

Keywords: hyperspectral remote sensing imagery; pansharpening; feature enhancement; Three-Stream Transformer; multi-scale fusion
CLC number: TP751.1
Issue Date: 31 December 2025
Cite this article:   
Jie ZHANG, Hengyou WANG, Lianzhi HUO. Pansharpening of hyperspectral remote sensing images based on feature enhancement and Three-Stream Transformer[J]. Remote Sensing for Natural Resources, 2025, 37(6): 97-106.
URL:  
https://www.gtzyyg.com/EN/10.6046/gyzyyg.2024222     OR     https://www.gtzyyg.com/EN/Y2025/V37/I6/97
Fig.1  Overall network structure diagram
Fig.2  Feature enhancement module and fusion module
Fig.3  Multi-scale information fusion module
Method             CC/%    SAM    RMSE/%  ERGAS  PSNR
PCA                85.84   8.98   3.39    6.47   31.08
BF                 92.43   9.60   3.51    6.78   30.81
GS                 95.74   6.44   2.55    4.94   32.93
MG                 95.62   6.54   2.18    4.47   34.35
PLRDiff            97.51   6.21   2.05    4.21   37.88
PanNet             96.82   6.35   1.92    3.89   35.61
HyperPNN           96.73   6.09   1.72    3.82   36.74
GPPNN              96.53   6.52   1.91    4.01   35.35
HyperKite          98.04   5.61   1.29    2.85   38.97
HyperTransformer   98.84   4.67   1.24    2.31   41.56
Proposed method    98.92   3.79   0.88    2.07   42.82
Ideal value        100.00  0.00   0.00    0.00
Tab.1  Quantitative experimental results on the Pavia Center dataset (reduced-resolution)
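The reduced-resolution metrics reported in Tab.1 (SAM, RMSE, ERGAS, PSNR) have standard definitions; below is a minimal NumPy sketch. The function names are ours, and the peak value for PSNR and the resolution ratio for ERGAS are assumptions that depend on the data preprocessing.

```python
import numpy as np

def sam_degrees(ref, est, eps=1e-12):
    # Mean spectral angle (degrees) between per-pixel spectra
    # ref, est: (H, W, B) hyperspectral cubes
    dot = (ref * est).sum(-1)
    denom = np.linalg.norm(ref, axis=-1) * np.linalg.norm(est, axis=-1) + eps
    ang = np.arccos(np.clip(dot / denom, -1.0, 1.0))
    return np.degrees(ang.mean())

def psnr(ref, est, peak=1.0):
    # Peak signal-to-noise ratio in dB (peak assumed 1.0 for normalized data)
    mse = np.mean((ref - est) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def ergas(ref, est, ratio=4):
    # Relative dimensionless global error; ratio is the spatial
    # resolution ratio between PAN and LR-HSI (assumed 4 here)
    rmse_b = np.sqrt(((ref - est) ** 2).mean(axis=(0, 1)))
    mean_b = ref.mean(axis=(0, 1))
    return 100.0 / ratio * np.sqrt(np.mean((rmse_b / mean_b) ** 2))
```

Lower SAM, RMSE, and ERGAS indicate better fidelity, while higher CC and PSNR are better, matching the "Ideal value" row of the tables.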
Method             CC/%    SAM    RMSE/%  ERGAS  PSNR
PCA                94.81   2.38   1.97    2.31   39.95
BF                 92.93   2.46   1.86    2.37   39.91
GS                 95.10   2.31   1.95    2.19   40.21
MG                 96.04   2.07   1.51    1.74   41.85
PLRDiff            97.14   1.93   1.33    1.82   42.07
PanNet             93.36   2.17   1.53    2.71   40.41
HyperPNN           97.28   1.74   1.18    1.44   43.45
GPPNN              96.21   1.90   1.41    1.65   42.05
HyperKite          98.13   1.46   1.03    1.22   44.97
HyperTransformer   98.04   1.33   0.94    1.15   45.74
Proposed method    98.54   1.25   0.92    1.11   45.91
Ideal value        100.00  0.00   0.00    0.00
Tab.2  Quantitative experimental results on the Botswana dataset (reduced-resolution)
Fig.4  Visualization experiment results on the Pavia Center dataset (reduced-resolution)
Fig.5  Residual visualization results of images corresponding to the Pavia Center dataset (reduced-resolution)
Fig.6  Visualization experiment results on the Botswana dataset (reduced-resolution)
Fig.7  Residual visualization results of images corresponding to the Botswana dataset (reduced-resolution)
Fig.8  Differences between the pansharpening results of different network methods and the ground truth on the Pavia Center (100,100) dataset
Fig.9  Differences between the pansharpening results of different network methods and the ground truth on the Botswana (30,30) dataset
Method                Total parameters  Time/s  PSNR
PanNet[15]            8.0×10^5          2.472   35.61
HyperPNN[13]          1.3×10^5          1.207   36.74
GPPNN                 3.0×10^6          2.819   35.35
HyperKite[28]         5.2×10^5          2.413   38.97
HyperTransformer[24]  1.2×10^7          3.845   41.56
Proposed method       5.8×10^6          3.176   42.82
Tab.3  Comparison of runtime and total network parameters on the Pavia Center dataset
Fig.10  Visualization experimental results on the Chikusei dataset (full resolution)
Method             D_λ     D_s     QNR
PCA                0.0534  0.0615  0.8884
BF                 0.0487  0.0574  0.8967
GS                 0.0558  0.0801  0.8686
MG                 0.0414  0.0665  0.8949
PLRDiff            0.0345  0.0536  0.9137
PanNet             0.0257  0.0323  0.9428
HyperPNN           0.0215  0.0297  0.9494
GPPNN              0.0279  0.0357  0.9374
HyperKite          0.0125  0.0245  0.9623
HyperTransformer   0.0157  0.0164  0.9682
Proposed method    0.0138  0.0151  0.9713
Ideal value        0.0000  0.0000  1.0000
Tab.4  Quantitative experimental results (full-resolution) on the Chikusei dataset
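The full-resolution metrics in Tab.4 follow the standard no-reference QNR protocol, in which QNR = (1 - D_λ)^α (1 - D_s)^β with α = β = 1. The NumPy sketch below (function names are ours) shows the Q-index building block used for the distortion indices and how the table's QNR values follow from D_λ and D_s.

```python
import numpy as np

def uiqi(a, b, eps=1e-12):
    # Universal Image Quality Index between two single-band images,
    # computed globally for simplicity (commonly computed on sliding windows)
    a, b = a.ravel(), b.ravel()
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - ma) * (b - mb)).mean()
    return 4 * cov * ma * mb / ((va + vb) * (ma**2 + mb**2) + eps)

def d_lambda(fused, lr, p=1):
    # Spectral distortion: change in inter-band Q-index relationships
    # between the fused cube and the LR-HSI cube (both (H, W, B))
    B = fused.shape[-1]
    vals = [abs(uiqi(fused[..., i], fused[..., j])
                - uiqi(lr[..., i], lr[..., j])) ** p
            for i in range(B) for j in range(B) if i != j]
    return np.mean(vals) ** (1 / p)

def qnr(d_lam, d_s, alpha=1, beta=1):
    # Quality with No Reference: 1.0 is ideal (no distortion)
    return (1 - d_lam) ** alpha * (1 - d_s) ** beta
```

For example, the proposed method's D_λ = 0.0138 and D_s = 0.0151 combine to (1 - 0.0138)(1 - 0.0151) ≈ 0.9713, the QNR reported in Tab.4.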
N    CC/%   SAM   RMSE/%  ERGAS  PSNR
4 97.28 4.96 1.47 2.31 40.98
8 98.31 4.35 1.19 2.10 42.40
12 98.92 3.79 0.88 2.07 42.82
16 98.40 3.95 1.13 2.12 42.52
20 97.56 4.37 1.40 2.27 41.33
Tab.5  Ablation study on the number of attention heads (N) in the Transformer module (Pavia Center)
Method               CC/%   SAM   RMSE/%  ERGAS  PSNR
B/L                  92.07  6.85  3.07    5.76   32.54
B/L+FEM              96.46  5.34  2.65    4.18   35.12
B/L+Transformer      97.96  4.51  1.72    3.09   38.90
B/L+FEM+Transformer  98.92  3.79  0.88    2.07   42.82
Tab.6  Ablation study of the FEM and Transformer modules (Pavia Center)
[1] Patil P M R, Shirashyad P V V, Pandhare P P S, et al. An overview of sustainable development applications using artificial intelligence and remote sensing in urban planning[J]. International Journal of Scientific Research in Engineering and Management, 2023, 7(10):1-11.
[2] Zhao S H, Wang Q, Li Y, et al. An overview of satellite remote sensing technology used in China’s environmental protection[J]. Earth Science Informatics, 2017, 10(2):137-148.
[3] Hamad H N, Mushref Z J. Digital processing of satellite imagery to determine the spatial distribution of vegetation cover in the neighborhoods of Ramadi city[J]. Al-Anbar University Journal for Humanities, 2024, 21(1):317-338.
[4] Martadinata I, Gibran M N, Kaunang E S, et al. Axiological review of the philosophy of defense science in efforts to improve national defense capabilities[J]. Jurnal Multidisiplin Madani, 2024, 4(3):473-480.
[5] Xie B, Zhang H K, Huang B. Revealing implicit assumptions of the component substitution pansharpening methods[J]. Remote Sensing, 2017, 9(5):443.
[6] Wang P, Yao H Y, Huang B, et al. Multiresolution analysis pansharpening based on variation factor for multispectral and panchromatic images from different times[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023,61: 1-17.
[7] Wu Z C, Huang T Z, Deng L J, et al. VO net:An adaptive approach using variational optimization and deep learning for panchromatic sharpening[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021,60:5401016.
[8] Wu Z C, Huang T Z, Deng L J, et al. LRTCFPan:Low-rank tensor completion based framework for pansharpening[J]. IEEE Transactions on Image Processing, 2023,32:1640-1655.
[9] Lin S Z, Han Z, Li D W, et al. Integrating model- and data-driven methods for synchronous adaptive multi-band image fusion[J]. Information Fusion, 2020,54:145-160.
[10] Cao X H, Lian Y S, Wang K X, et al. Unsupervised hybrid network of transformer and CNN for blind hyperspectral and multispectral image fusion[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024,62:5507615.
[11] Chen X, Pan J S, Lu J Y, et al. Hybrid CNN-transformer feature fusion for single image deraining[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2023, 37(1):378-386.
[12] He L, Zhu J W, Li J, et al. HyperPNN:Hyperspectral pansharpening via spectrally predictive convolutional neural networks[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2019, 12(8):3092-3100.
[13] Yang J F, Fu X Y, Hu Y W, et al. PanNet:A deep network architecture for pan-sharpening[C]// 2017 IEEE International Conference on Computer Vision (ICCV).October 22-29,2017,Venice,Italy.IEEE, 2017:1753-1761.
[14] Deng L J, Vivone G, Jin C, et al. Detail injection-based deep convolutional neural networks for pansharpening[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(8):6995-7010.
[15] Wang D, Li Y, Ma L, et al. Going deeper with densely connected convolutional neural networks for multispectral pansharpening[J]. Remote Sensing, 2019, 11(22):2608.
[16] Guo P H, Qiu J L, Zhao S N. High frequency domain multi-depth learning network for pansharpening[J]. Remote Sensing for Natural Resources, 2024, 36(3):146-153.
[17] Lu K T, Fei R R, Zhang X D. Remote sensing image pansharpening by convolutional neural network[J]. Journal of Computer Applications, 2023, 43(9):2963-2969.
[18] Ma J Y, Yu W, Chen C, et al. Pan-GAN:An unsupervised pan-sharpening method for remote sensing image fusion[J]. Information Fusion, 2020,62:110-120.
[19] Zhou H Y, Liu Q J, Wang Y H. PGMAN:An unsupervised generative multiadversarial network for pansharpening[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021,14:6316-6327.
[20] Qu J H, Dong W Q, Li Y S, et al. An interpretable unsupervised unrolling network for hyperspectral pansharpening[J]. IEEE Transactions on Cybernetics, 2023, 53(12):7943-7956.
[21] Rui X Y, Cao X Y, Pang L, et al. Unsupervised hyperspectral pansharpening via low-rank diffusion model[J]. Information Fusion, 2024,107:102325.
[22] Bandara W G C, Patel V M. HyperTransformer:A textural and spectral feature fusion transformer for pansharpening[C]// 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).June 18-24,2022,New Orleans,LA,USA.IEEE, 2022:1757-1767.
[23] Plaza A, Benediktsson J A, Boardman J W, et al. Recent advances in techniques for hyperspectral image processing[J]. Remote Sensing of Environment, 2009,113:S110-S122.
[24] Ungar S G, Pearlman J S, Mendenhall J A, et al. Overview of the Earth Observing One (EO-1) mission[J]. IEEE Transactions on Geoscience and Remote Sensing, 2003, 41(6):1149-1159.
[25] Yokoya N, Iwasaki A. Airborne hyperspectral data over Chikusei[R]. Space Application Laboratory, University of Tokyo, Tokyo, Japan, Tech. Rep. SAL-2016-05-27, 2016.
[26] Bandara W G C, Valanarasu J M J, Patel V M. Hyperspectral pansharpening based on improved deep image prior and residual reconstruction[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021,60:1-16.
[27] Zheng Y X, Li J J, Li Y S, et al. Hyperspectral pansharpening using deep prior and dual attention residual network[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 58(11):8059-8076.
[28] Wald L. Quality of high resolution synthesised images:Is there a simple criterion?[C]// Third Conference "Fusion of Earth Data:Merging Point Measurements,Raster Maps and Remotely Sensed Images". 2000:99-103.
[29] Zeng Y N, Huang W, Liu M G, et al. Fusion of satellite images in urban area:Assessing the quality of resulting images[C]// 2010 18th International Conference on Geoinformatics. June 18-20,2010,Beijing,China. IEEE, 2010:1-4.
[30] Loncan L, De Almeida L B, Bioucas-Dias J M, et al. Hyperspectral pansharpening:A review[J]. IEEE Geoscience and Remote Sensing Magazine, 2015, 3(3):27-46.
[31] Yokoya N, Yairi T, Iwasaki A. Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion[J]. IEEE Transactions on Geoscience and Remote Sensing, 2012, 50(2):528-537.
[32] Chavez P S, Kwarteng A Y. Extracting spectral contrast in Landsat thematic mapper image data using selective principal component analysis[J]. Photogrammetric Engineering and Remote Sensing, 1989, 55(1): 339-348.
[33] Wei Q, Dobigeon N, Tourneret J Y. Bayesian fusion of multi-band images[J]. IEEE Journal of Selected Topics in Signal Processing, 2015, 9(6):1117-1127.
[34] Aiazzi B, Baronti S, Selva M. Improving component substitution pansharpening through multivariate regression of MS+Pan data[J]. IEEE Transactions on Geoscience and Remote Sensing, 2007, 45(10):3230-3239.
[35] Aiazzi B, Alparone L, Baronti S, et al. Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis[J]. IEEE Transactions on Geoscience and Remote Sensing, 2002, 40(10):2300-2312.
[36] Xu S, Zhang J S, Zhao Z X, et al. Deep gradient projection networks for pan-sharpening[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).June 20-25,2021,Nashville,TN,USA.IEEE, 2021:1366-1375.
[37] Zhuo Y W, Zhang T J, Hu J F, et al. A deep-shallow fusion network with multidetail extractor and spectral attention for hyperspectral pansharpening[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2022,15:7539-7555.