Remote Sensing for Natural Resources    2024, Vol. 36 Issue (1) : 58-66     DOI: 10.6046/gtzyyg.2022362
A two-stage remote sensing image inpainting network combined with spatial semantic attention
LIU Yujia1, XIE Shizhe2, DU Yang3, YAN Jin4,5, NAN Yanyun4, WEN Zhongkai3,6
1. School of Geomatics, Liaoning Technical University, Fuxin 123000, China
2. School of Information Engineering, China University of Geosciences (Beijing), Beijing 100083, China
3. College of Astronautics, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
4. National Earthquake Response Support Service, Beijing 100049, China
5. College of Resource Environment and Tourism, Capital Normal University, Beijing 100048, China
6. Institute of Remote Sensing Satellite, CAST, Beijing 100094, China
Abstract  

In high-resolution remote sensing images, missing areas contain intricate surface features and pronounced spatial heterogeneity, so inpainting results often suffer from texture blurring and structural distortion, particularly along boundaries and in areas with complex textures. This study proposed a two-stage remote sensing image inpainting network combined with spatial semantic attention (SSA). The network comprises two sub-networks in series: a coarse inpainting network and a fine-scale inpainting network. The prior information produced by the coarse network guides the fine-scale network in restoring the missing areas. In the coarse network, a multi-level loss structure was constructed to improve the stability of network training. In the fine-scale network, a novel SSA mechanism was proposed, and SSA was embedded differently in the encoder and decoder according to the distribution of network features, ensuring both the continuity of local features and the correlation of global semantic information. The experimental results show that, compared with existing algorithms, the proposed network further improves the inpainting quality.
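The two-stage design can be pictured as a coarse encoder-decoder whose output, pasted back into the missing region, serves as the prior for a second encoder-decoder that applies attention at its bottleneck. The sketch below is only an illustration of this coarse-to-fine idea under assumed layer sizes; the class names (CoarseNet, FineNet, SpatialSemanticAttention), the non-local formulation of the attention block, and all hyperparameters are assumptions, not the paper's exact architecture or training setup.

```python
# Illustrative sketch only: module names, layer sizes, and the non-local attention
# formulation are assumptions; the paper's exact SSA design is not reproduced here.
import torch
import torch.nn as nn


class SpatialSemanticAttention(nn.Module):
    """Non-local style attention: re-weights each position by its similarity to
    every other position, so distant but semantically related pixels can
    contribute to filling the hole."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c/8)
        k = self.key(x).flatten(2)                      # (b, c/8, hw)
        attn = torch.softmax(q @ k, dim=-1)             # (b, hw, hw)
        v = self.value(x).flatten(2)                    # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x


class CoarseNet(nn.Module):
    """Small encoder-decoder producing a rough completion (4 input channels:
    masked RGB image plus the binary mask)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 32, 3, 1, 1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(32, 3, 3, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)


class FineNet(nn.Module):
    """Refinement encoder-decoder with the attention block at the bottleneck."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(4, 64, 3, 2, 1), nn.ReLU(),
                                 nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU())
        self.attn = SpatialSemanticAttention(128)
        self.dec = nn.Sequential(nn.Upsample(scale_factor=2),
                                 nn.Conv2d(128, 64, 3, 1, 1), nn.ReLU(),
                                 nn.Upsample(scale_factor=2),
                                 nn.Conv2d(64, 3, 3, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.attn(self.enc(x)))


def inpaint(image, mask, coarse, fine):
    """image: (b,3,h,w) in [0,1]; mask: (b,1,h,w) with 1 marking missing pixels."""
    masked = image * (1 - mask)
    coarse_out = coarse(torch.cat([masked, mask], dim=1))
    merged = masked + coarse_out * mask            # coarse prediction fills the hole
    fine_out = fine(torch.cat([merged, mask], dim=1))
    return masked + fine_out * mask                # keep known pixels untouched


if __name__ == "__main__":
    img = torch.rand(1, 3, 128, 128)
    msk = torch.zeros(1, 1, 128, 128)
    msk[..., 48:80, 48:80] = 1.0
    print(inpaint(img, msk, CoarseNet(), FineNet()).shape)  # torch.Size([1, 3, 128, 128])
```

In a full training setup, the coarse output would additionally be supervised by the multi-level losses mentioned in the abstract, and the fine output by reconstruction and adversarial losses; those details are omitted from this sketch.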

Keywords: two-stage network; remote sensing image inpainting; spatial semantic attention; continuity of local features; correlation of global semantic information
CLC number: TP751
Issue Date: 13 March 2024
Cite this article:   
Yujia LIU, Shizhe XIE, Yang DU, et al. A two-stage remote sensing image inpainting network combined with spatial semantic attention[J]. Remote Sensing for Natural Resources, 2024, 36(1): 58-66.
URL:  
https://www.gtzyyg.com/EN/10.6046/gtzyyg.2022362     OR     https://www.gtzyyg.com/EN/Y2024/V36/I1/58
Fig.1  Typical cases of missing information from remote sensing data
Fig.2  A two-stage deep adversarial network with spatial semantic attention
Fig.3  SSA calculation process
Fig.4  Various masks used in this paper
Tab.1  Experimental results of GF-2 data (image comparison; columns: No., Ground truth, CSA, LBP, AOT, Proposed method; rows a-d)
SSIM
Image              CSA       LBP       AOT       Proposed method
a                  0.9066    0.9038    0.9133    0.9153
b                  0.9350    0.9287    0.9248    0.9355
c                  0.9498    0.9253    0.9330    0.9585
d                  0.9392    0.9269    0.9292    0.9498
Test-set average   0.9326    0.9212    0.9251    0.9398

PSNR/dB
Image              CSA       LBP       AOT       Proposed method
a                  26.0330   26.2861   26.9584   28.0256
b                  27.7850   26.6944   27.2567   27.8017
c                  29.4397   28.1066   28.9211   29.6274
d                  28.7015   28.0299   28.7954   28.8498
Test-set average   27.9898   27.2793   27.9829   28.5761

Tab.2  Quality evaluation indexes of different methods on GF-2 data
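The SSIM and PSNR values in Tab.2, Tab.4, and Tab.6 are standard full-reference quality metrics, and the test-set average rows are the means over all test images. The snippet below is a minimal sketch of how such figures could be reproduced; the scikit-image metric functions are real, but the evaluation script itself (and any data handling around it) is an assumption, not the authors' code.

```python
# Sketch of an SSIM / PSNR evaluation loop; assumes scikit-image >= 0.19.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio


def evaluate_pair(reference, restored):
    """Both inputs: uint8 RGB arrays of identical shape."""
    ssim = structural_similarity(reference, restored,
                                 channel_axis=-1, data_range=255)
    psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
    return ssim, psnr


def evaluate_test_set(pairs):
    """pairs: iterable of (reference, restored) arrays.
    Returns per-image scores and the test-set averages, as reported in the tables."""
    scores = np.array([evaluate_pair(ref, res) for ref, res in pairs])
    return scores, scores.mean(axis=0)  # averages over images: (SSIM, PSNR)
```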
Tab.3  Experimental results of aerial image data (image comparison; columns: No., Ground truth, CSA, LBP, AOT, Proposed method; rows a-d)
SSIM
Image              CSA       LBP       AOT       Proposed method
a                  0.9042    0.9088    0.8981    0.9088
b                  0.8271    0.8065    0.8187    0.8193
c                  0.8395    0.8355    0.8389    0.8509
d                  0.8141    0.7969    0.8184    0.8323
Test-set average   0.8462    0.8369    0.8435    0.8528

PSNR/dB
Image              CSA       LBP       AOT       Proposed method
a                  30.7894   31.3487   30.5167   31.1910
b                  20.8822   21.4730   21.1642   21.2338
c                  23.6707   23.5524   24.2757   23.7989
d                  21.2561   20.5451   21.2947   21.8717
Test-set average   24.1496   24.2298   24.3128   24.5239

Tab.4  Quality evaluation indexes of different methods on aerial image data
Tab.5  Experimental results on natural image data (image comparison; columns: No., Ground truth, CSA, LBP, AOT, Proposed method; rows a-d)
SSIM
Image              CSA       LBP       AOT       Proposed method
a                  0.9128    0.9053    0.8895    0.9154
b                  0.6976    0.6889    0.6959    0.7018
c                  0.7236    0.7104    0.7226    0.7297
d                  0.8112    0.7970    0.8116    0.8178
Test-set average   0.7863    0.7754    0.7799    0.7912

PSNR/dB
Image              CSA       LBP       AOT       Proposed method
a                  21.2431   30.8241   29.3379   31.2629
b                  21.2017   19.8987   20.8620   21.2393
c                  22.0830   21.6548   21.6631   22.5979
d                  24.0348   23.6738   24.1365   24.1610
Test-set average   22.1407   24.0129   23.9999   24.8153

Tab.6  Quality evaluation indexes of different methods on natural image data