Remote Sensing for Land & Resources    2020, Vol. 32 Issue (1) : 27-34     DOI: 10.6046/gtzyyg.2020.01.05
Feasibility of conditional generative adversarial network in remote sensing image restoration
Lijing BU, Xiuwei LI, Zhengpeng ZHANG, Haonan JIANG
School of Surveying and Mapping and Geographical Sciences, Liaoning University of Engineering and Technology, Fuxin 123000, China
Abstract  

Remote sensing images often suffer from degradation and blurring, and classical image restoration methods perform poorly because the blur function is difficult to estimate. To avoid estimating the blur function, the authors studied a deep-learning image restoration method based on Conditional Generative Adversarial Nets (CGAN). First, a training database is built and the initial network parameters are set. The generator and discriminator models are then trained alternately in an adversarial manner: by continuously learning the difference between degraded and sharp images, and by combining the adversarial loss with a perceptual loss, the network reduces this difference and realizes image restoration. A hybrid blur training set built from the GOPRO dataset is used to train the network, and the method is compared with other approaches. The results show that the proposed method restores image details better and achieves higher scores on the evaluation indexes, preserving the detail and texture information of the restored images, which demonstrates that the conditional generative adversarial network approach is applicable to remote sensing image restoration.
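As a rough illustration of how an adversarial loss can be combined with a perceptual loss in the way the abstract describes, the sketch below uses PyTorch. The choice of truncated VGG-19 features, the critic-style adversarial term, and the weight lambda_adv are illustrative assumptions, not details confirmed by the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    """Feature-space MSE computed on a frozen, pretrained VGG-19 (illustrative choice).
    Input normalization to ImageNet statistics is omitted for brevity."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features[:16]
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg.eval()
        self.mse = nn.MSELoss()

    def forward(self, restored, sharp):
        return self.mse(self.vgg(restored), self.vgg(sharp))

def generator_loss(disc_fake, restored, sharp, perceptual, lambda_adv=0.01):
    """Total generator objective: perceptual (content) loss plus a weighted
    adversarial term that pushes restored images toward the 'real' decision."""
    adv = -disc_fake.mean()                 # critic-style adversarial term (assumption)
    content = perceptual(restored, sharp)   # perceptual loss against the sharp image
    return content + lambda_adv * adv
```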

Keywords: degraded blur; image restoration; conditional generative adversarial nets; deep learning
CLC number: TP79
Corresponding Author: Xiuwei LI     E-mail: 1259756850@qq.com
Issue Date: 14 March 2020
Cite this article:   
Lijing BU, Xiuwei LI, Zhengpeng ZHANG, et al. Feasibility of conditional generative adversarial network in remote sensing image restoration[J]. Remote Sensing for Land & Resources, 2020, 32(1): 27-34.
URL:  
https://www.gtzyyg.com/EN/10.6046/gtzyyg.2020.01.05     OR     https://www.gtzyyg.com/EN/Y2020/V32/I1/27
Network structure                          Number of convolution kernels
7×7 convolution layer                      64
3×3 convolution layer, stride 2            128
3×3 convolution layer, stride 2            256
Residual block                             256
Residual block                             256
Transposed convolution layer, stride 2     128
Transposed convolution layer, stride 2     128
7×7 convolution layer                      64
tanh activation function                   —
Tab.1  Structure of the generator
Fig.1  Residual block structure
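A hedged PyTorch reconstruction of the generator in Tab.1 and the residual block in Fig.1. Kernel counts and strides follow Tab.1; padding, normalization, activation choices, and the 3-channel output of the final layer (Tab.1 lists 64 kernels there) are assumptions made so the sketch runs as an image-to-image network.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual block as in Fig.1: two 3x3 convolutions with an identity skip
    connection (normalization and activation choices are assumptions)."""
    def __init__(self, channels=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """Encoder - residual blocks - decoder generator following Tab.1."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 7, padding=3), nn.ReLU(inplace=True),                # 7x7 conv, 64
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),    # 3x3 conv, stride 2, 128
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),   # 3x3 conv, stride 2, 256
            ResidualBlock(256), ResidualBlock(256),                               # residual blocks, 256
            nn.ConvTranspose2d(256, 128, 3, stride=2, padding=1, output_padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 128, 3, stride=2, padding=1, output_padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 3, 7, padding=3),   # Tab.1 lists 64 kernels; 3 is assumed here for an RGB output
            nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)
```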
Network structure                          Number of convolution kernels
4×4 convolution layer, stride 2            64
4×4 convolution layer, stride 2            128
4×4 convolution layer, stride 2            256
4×4 convolution layer, stride 1            512
4×4 convolution layer, stride 1            1
Tab.2  Structure of the discriminator
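Tab.2 describes a PatchGAN-style fully convolutional discriminator; the minimal PyTorch sketch below assumes LeakyReLU activations and padding of 1, details not given in the table.

```python
import torch.nn as nn

class Discriminator(nn.Module):
    """PatchGAN-style discriminator following Tab.2; each output value scores
    one image patch rather than the whole image."""
    def __init__(self, in_channels=3):
        super().__init__()

        def block(cin, cout, stride):
            return [nn.Conv2d(cin, cout, 4, stride=stride, padding=1),
                    nn.LeakyReLU(0.2, inplace=True)]

        self.net = nn.Sequential(
            *block(in_channels, 64, 2),                  # 4x4 conv, stride 2, 64
            *block(64, 128, 2),                          # 4x4 conv, stride 2, 128
            *block(128, 256, 2),                         # 4x4 conv, stride 2, 256
            *block(256, 512, 1),                         # 4x4 conv, stride 1, 512
            nn.Conv2d(512, 1, 4, stride=1, padding=1),   # 4x4 conv, stride 1, 1
        )

    def forward(self, x):
        return self.net(x)
```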
Fig.2  Degradation blur CGAN training process
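A condensed sketch of the alternating training procedure summarized in Fig.2, assuming a critic-style (WGAN-like) discriminator objective with the gradient penalty omitted for brevity; the function names and the loss weighting are illustrative assumptions, not the paper's exact settings.

```python
def train_step(gen, disc, opt_g, opt_d, blurred, sharp, perc_loss, lambda_adv=0.01):
    """One alternating update: discriminator first, then generator."""
    # Discriminator step: learn to separate sharp images from restored ones.
    restored = gen(blurred).detach()
    d_loss = disc(restored).mean() - disc(sharp).mean()   # critic-style loss (assumption)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the discriminator while matching sharp-image features.
    restored = gen(blurred)
    g_loss = perc_loss(restored, sharp) - lambda_adv * disc(restored).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```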
[Image comparison, Groups 1-5: original image, degraded image, and images restored by the proposed method, the blind deconvolution algorithm, and the SRN algorithm]
Tab.3  Simulation experiment
Image      PSNR/dB   SSIM
Group 1    22.31     0.68
Group 2    34.30     0.86
Group 3    12.90     0.19
Group 4    26.17     0.66
Group 5    21.43     0.43
Tab.4  PSNR and SSIM of degraded images in simulation experiments
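The PSNR and SSIM values reported in Tab.4 and the following tables can be reproduced with standard implementations; the sketch below uses scikit-image and assumes 8-bit RGB inputs (scikit-image ≥ 0.19 for the channel_axis argument). It is not the authors' own evaluation code.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(reference, restored):
    """PSNR (dB) and SSIM between a reference image and a restored image,
    both given as uint8 RGB arrays of the same size."""
    psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
    ssim = structural_similarity(reference, restored, data_range=255, channel_axis=-1)
    return psnr, ssim
```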
Experiment   Proposed method       Blind deconvolution     SRN algorithm
             PSNR/dB    SSIM       PSNR/dB    SSIM         PSNR/dB    SSIM
Group 1      26.62      0.68       26.57      0.35         19.10      0.51
Group 2      35.52      0.85       33.32      0.71         35.29      0.50
Group 3      28.85      0.18       27.79      0.19         12.35      0.20
Group 4      26.02      0.74       20.35      0.37         25.31      0.70
Group 5      20.12      0.44       19.97      0.40         20.10      0.46
Tab.5  PSNR and SSIM of restored images
[Image comparison, Groups 6-8: degraded image and images restored by the proposed method, the blind deconvolution algorithm, and the SRN algorithm]
Tab.6  Real experiment results
Experiment   Proposed method       Blind deconvolution     SRN algorithm
             PSNR/dB    SSIM       PSNR/dB    SSIM         PSNR/dB    SSIM
Group 6      35.41      0.95       28.83      0.37         32.14      0.51
Group 7      36.07      0.95       27.98      0.23         33.15      0.98
Group 8      39.79      0.96       29.79      0.41         36.32      0.97
Tab.7  PSNR and SSIM of real experiment results
[Image comparison, Groups 9-11: original image, motion-blurred image, and image restored by the proposed method]
Tab.8  Simulation experiment results of the remote sensing training library
Image       PSNR/dB   SSIM
Group 9     25.14     0.74
Group 10    29.09     0.71
Group 11    13.35     0.24
Tab.9  PSNR and SSIM of simulation experiment results of remote sensing training library