Remote Sensing for Natural Resources, 2023, Vol. 35, Issue 1: 27-34. DOI: 10.6046/zrzyyg.2021421
Application of multi-scale and lightweight CNN in SAR image-based surface feature classification
SUN Sheng1, MENG Zhimin1, HU Zhongwen2, YU Xu3
1. School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
2. Key Laboratory for Geo-Environmental Monitoring of Great Bay Area, Ministry of Natural Resources, Shenzhen University, Shenzhen 518000, China
3. School of Civil and Transportation Engineering, Guangdong University of Technology, Guangzhou 510006, China
Abstract  

Targeting the subtropical climate characteristics of the Guangdong-Hong Kong-Macao Greater Bay Area, this study acquired images of the experimental area from the TerraSAR-X radar remote sensing satellite. Given the varying scales of surface feature targets in radar satellite observation scenes, this study proposed an ENet convolution spatial pyramid pooling (ENet-CSPP) model for surface feature classification. Because ordinary convolution preserves neighborhood information more effectively than atrous convolution, a multi-scale feature fusion module based on convolution spatial pyramid pooling was proposed. Because the SAR remote sensing image dataset contained relatively few training samples, the multi-scale feature fusion module was combined with a lightweight convolutional neural network. The encoder of the ENet-CSPP network consists of an improved ENet network and the convolution spatial pyramid pooling module; the decoder fuses deep and shallow features and outputs the surface feature classification map. Quantitative comparison experiments were conducted on the GDUT-Nansha dataset. The ENet-CSPP model outperformed the other models in three performance indices, namely pixel accuracy (PA), mean pixel accuracy (mPA), and mean intersection over union (mIoU), indicating that the multi-scale lightweight model effectively improves the accuracy of surface feature classification.
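To make the multi-scale fusion idea concrete, the following is a minimal PyTorch sketch of a convolution spatial pyramid pooling (CSPP) block that replaces the atrous branches of ASPP with ordinary convolutions, as described in the abstract. The branch kernel sizes (1, 3, 5, 7), the global-pooling branch, and the batch-normalization layers are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of a CSPP block: ordinary convolutions of several kernel sizes
# in parallel, plus an image-level pooling branch, fused by a 1x1 projection.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CSPP(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # Parallel branches with increasing ordinary-convolution kernel sizes (assumed).
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=k, padding=k // 2, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
            for k in (1, 3, 5, 7)
        ])
        # Image-level context branch (global average pooling, assumed).
        self.pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
        )
        # 1x1 projection after concatenating all five branches.
        self.project = nn.Sequential(
            nn.Conv2d(out_channels * 5, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.pool(x), size=x.shape[2:], mode='bilinear', align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))
```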

Keywords: synthetic aperture radar (SAR); surface feature classification; convolutional neural network; lightweight network
CLC number: P236
Issue Date: 20 March 2023
Cite this article:   
Sheng SUN, Zhimin MENG, Zhongwen HU, et al. Application of multi-scale and lightweight CNN in SAR image-based surface feature classification[J]. Remote Sensing for Natural Resources, 2023, 35(1): 27-34.
URL:  
https://www.gtzyyg.com/EN/10.6046/zrzyyg.2021421     OR     https://www.gtzyyg.com/EN/Y2023/V35/I1/27
ENet layer | Sampling type | Output size/pixel | ENet-CSPP layer | Sampling type | Output size/pixel
initial | — | 16×256×256 |  |  |
bottleneck1.0 | downsampling | 64×128×128 | bottleneck1.0 | downsampling | 64×256×256
4×bottleneck1.x | — | 64×128×128 | 4×bottleneck1.x | — | 64×256×256
bottleneck2.0 | downsampling | 128×64×64 | bottleneck2.0 | downsampling | 128×128×128
8×bottleneck2.x | — | 128×64×64 | 8×bottleneck2.x | — | 128×128×128
repeat section 2, without bottleneck2.0 | — | 128×64×64 | CSPP | — | N2×128×128
bottleneck4.0 | upsampling | 64×128×128 | upsample1.0 | upsampling | N2×256×256
2×bottleneck4.x | — | 64×128×128 | concat | — | (N1+N2)×256×256
bottleneck5.0 | upsampling | 16×256×256 | lastconv | — | C×256×256
2×bottleneck5.x | — | 16×256×256 | upsample2.0 | upsampling | C×512×512
fullconv | — | C×512×512 |  |  |
Tab.1  Comparison of ENet and ENet-CSPP structures
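Tab.1 implies the decoder fuses an N1-channel shallow feature with the upsampled N2-channel CSPP output, applies a final convolution to C class channels, and upsamples back to the 512×512 input resolution. Below is a hedged PyTorch sketch of that fusion path; the 1×1 reduction of the shallow feature to N1 channels and the bilinear upsampling mode are assumptions, since the table does not specify them.

```python
# Hedged sketch of the decoder-side shallow/deep feature fusion suggested by Tab.1.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionDecoder(nn.Module):
    def __init__(self, shallow_in: int = 64, n1: int = 8, n2: int = 16, num_classes: int = 4):
        super().__init__()
        # Reduce the shallow encoder feature (e.g. 64x256x256) to N1 channels (assumed 1x1 conv).
        self.reduce = nn.Conv2d(shallow_in, n1, kernel_size=1, bias=False)
        # "lastconv": map the fused (N1+N2)-channel feature to C class scores.
        self.lastconv = nn.Conv2d(n1 + n2, num_classes, kernel_size=3, padding=1)

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # upsample1.0: bring the N2-channel CSPP output up to the shallow feature resolution.
        deep = F.interpolate(deep, size=shallow.shape[2:], mode='bilinear', align_corners=False)
        fused = torch.cat([self.reduce(shallow), deep], dim=1)   # (N1+N2)x256x256
        logits = self.lastconv(fused)                            # Cx256x256
        # upsample2.0: restore the original 512x512 resolution.
        return F.interpolate(logits, scale_factor=2, mode='bilinear', align_corners=False)
```

With the best setting in Tab.3 (N2+N1 = 16+8), the fused feature would have 24 channels before the final classification convolution.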
Fig.1  Overall architecture
Fig.2  Partial training set data
Fig.3  Testing set area
Weight parameters | PA | mPA | mIoU
No weights | 88.8 | 78.8 | 70.1
0.64, 0.73, 0.65, 0.96 | 88.6 | 81.5 | 71.0
Tab.2  Effect of cross-entropy loss class weights on the model (%)
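Tab.2 indicates that per-class weights in the cross-entropy loss improve mPA and mIoU at a negligible cost in PA. A minimal PyTorch illustration follows; the mapping of the four weights to the vegetation, building, water, and road classes in that order is an assumption.

```python
# Illustrative weighted cross-entropy loss with the class weights from Tab.2.
import torch
import torch.nn as nn

class_weights = torch.tensor([0.64, 0.73, 0.65, 0.96])   # assumed class order
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(2, 4, 512, 512)           # NxCxHxW class scores
labels = torch.randint(0, 4, (2, 512, 512))    # NxHxW ground-truth class indices
loss = criterion(logits, labels)
```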
N2+N1 | PA | mPA | mIoU
64+0 | 88.6 | 76.9 | 68.6
64+8 | 88.4 | 79.2 | 69.9
64+16 | 88.4 | 78.7 | 69.6
64+32 | 88.2 | 79.0 | 69.6
64+64 | 88.3 | 78.9 | 69.6
16+8 | 88.6 | 81.5 | 71.0
32+8 | 88.6 | 77.6 | 69.2
128+8 | 88.7 | 79.3 | 70.2
Tab.3  Effect of shallow and deep feature channel numbers on the model (%)
Fig.4  Classification images of the different models on the test image
Model | PA | mPA | IoU (vegetation) | IoU (building) | IoU (water) | IoU (road) | mIoU
FCN | 87.5 | 78.3 | 72.8 | 80.1 | 89.7 | 30.3 | 68.2
SegNet | 87.1 | 79.8 | 72.9 | 79.5 | 88.8 | 32.9 | 68.5
U-Net | 88.0 | 80.0 | 73.3 | 80.5 | 90.5 | 34.1 | 69.6
DeepLabV3+ | 87.1 | 80.2 | 70.5 | 79.7 | 89.1 | 37.0 | 69.1
ENet | 85.7 | 77.7 | 69.3 | 76.7 | 88.8 | 29.4 | 66.1
ENet-CSPP | 88.6 | 81.5 | 74.4 | 81.8 | 90.3 | 37.3 | 71.0
Tab.4  Comparison of different models on the GDUT-Nansha dataset (%)
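For reference, PA, mPA, IoU, and mIoU in Tab.4 follow the standard confusion-matrix definitions; the sketch below (NumPy, not code from the paper) shows how they can be computed from integer class maps.

```python
# Standard segmentation metrics (PA, mPA, per-class IoU, mIoU) from a confusion matrix.
import numpy as np

def segmentation_metrics(pred: np.ndarray, label: np.ndarray, num_classes: int):
    # pred and label are integer class maps of the same shape.
    cm = np.bincount(num_classes * label.flatten() + pred.flatten(),
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    pa = np.diag(cm).sum() / cm.sum()                              # pixel accuracy
    per_class_acc = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)    # per-class accuracy
    mpa = per_class_acc.mean()                                     # mean pixel accuracy
    iou = np.diag(cm) / np.maximum(cm.sum(axis=1) + cm.sum(axis=0) - np.diag(cm), 1)
    return pa, mpa, iou, iou.mean()                                # mIoU = mean of per-class IoU
```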