Remote Sensing for Land & Resources    2020, Vol. 32 Issue (2) : 120-129     DOI: 10.6046/gtzyyg.2020.02.16
Urban green space extraction from GF-2 remote sensing image based on DeepLabv3+ semantic segmentation model
Wenya LIU1,2,3, Anzhi YUE2,3, Jue JI4, Weihua SHI4(), Ruru DENG1,5,6, Yeheng LIANG1, Longhai XIONG1
1. School of Geography and Planning, Sun Yat-sen University, Guangzhou 510275, China
2. National Engineering Laboratory for Integrated Air-Space-Ground-Ocean Big Data Application Technology, Beijing 100101, China
3. Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100101, China
4. Ministry of Housing and Urban-Rural Development of the People’s Republic of China, Beijing 100101, China
5. Guangdong Engineering Research Center of Water Environment Remote Sensing Monitoring, Guangzhou 510275, China
6. Guangdong Provincial Key Laboratory of Urbanization and Geo-Simulation, Guangzhou 510275, China
Abstract  

The efficient and accurate extraction of urban green space (UGS) is of great significance to land planning and construction. Applying deep learning semantic segmentation algorithms to remote sensing image classification is a new line of exploration in recent years. This paper describes a multilevel architecture for UGS extraction from GF-2 imagery based on the DeepLabv3+ semantic segmentation network. High-level features are extracted through the network's Atrous Spatial Pyramid Pooling (ASPP) and other modules, and dataset creation, model training, UGS extraction, and accuracy evaluation are all carried out within this architecture. The accuracy evaluation shows that DeepLabv3+ outperforms traditional machine learning methods such as maximum likelihood (ML), support vector machine (SVM), and random forest (RF), as well as four other semantic segmentation networks (PspNet, SegNet, U-Net, and DeepLabv2), extracting UGS more reliably and, in particular, better excluding interference from farmland. The proposed architecture reaches an acceptable accuracy, with an overall accuracy of 91.02% and an F-score of 0.86. Furthermore, the authors also explored the portability of the method by applying the model to another city. Overall, the automatic architecture in this paper is capable of excluding the interference of farmland pixels and extracting UGS accurately from RGB high-spatial-resolution remote sensing images, providing a reference for urban planning and management.
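The extraction step of the pipeline can be made concrete with a short sketch. Below is a minimal PyTorch illustration of sliding-window inference with a trained two-class segmentation model; the paper publishes no code, so the `model` and `extract_green_space` names, the 512-pixel tile size, the [0, 1] normalization, and the two-channel logits layout are illustrative assumptions rather than the authors' actual configuration.

```python
import numpy as np
import torch

def extract_green_space(model, image, tile=512):
    """Tile an RGB scene (H, W, 3, uint8 numpy array) and return a binary
    mask (1 = urban green space, 0 = background) predicted per pixel."""
    model.eval()
    h, w, _ = image.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    with torch.no_grad():
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                # Edge tiles may be smaller; a fully convolutional model
                # accepts variable input sizes.
                patch = image[y:y + tile, x:x + tile]
                ph, pw, _ = patch.shape
                # HWC uint8 -> NCHW float in [0, 1]
                t = (torch.from_numpy(patch.copy())
                     .permute(2, 0, 1).float().div(255.0).unsqueeze(0))
                logits = model(t)                      # (1, 2, ph, pw) class scores
                pred = logits.argmax(dim=1)[0]         # per-pixel class id
                mask[y:y + ph, x:x + pw] = pred.numpy().astype(np.uint8)
    return mask
```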

Keywords: urban green space; DeepLab; semantic segmentation; deep learning; GF-2
CLC number: TP79
Corresponding Authors: Weihua SHI     E-mail: 20143262@qq.com
Issue Date: 18 June 2020
Cite this article:   
Wenya LIU, Anzhi YUE, Jue JI, et al. Urban green space extraction from GF-2 remote sensing image based on DeepLabv3+ semantic segmentation model[J]. Remote Sensing for Land & Resources, 2020, 32(2): 120-129.
URL:  
https://www.gtzyyg.com/EN/10.6046/gtzyyg.2020.02.16     OR     https://www.gtzyyg.com/EN/Y2020/V32/I2/120
Fig.1  Distribution of sample data sources
Fig.2  Automatic urban green space extraction process for GF-2 imagery
Fig.3  Variation of overall accuracy with sample size
Fig.4  Architecture of DeepLabv3+
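The ASPP head in Fig.4 can be summarized in a few lines of PyTorch. This is a condensed sketch following the cited DeepLabv3+ design (a 1×1 branch, three 3×3 atrous branches at rates 6, 12, 18 for output stride 16, plus image-level pooling); the 256-channel width comes from the cited DeepLab papers rather than this article, and batch normalization and ReLU are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel branches at several dilation
    rates capture multi-scale context from a single feature map."""
    def __init__(self, in_ch, out_ch=256, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1, bias=False)] +            # 1x1 branch
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
             for r in rates])                                      # atrous branches
        self.pool = nn.Sequential(nn.AdaptiveAvgPool2d(1),         # image-level branch
                                  nn.Conv2d(in_ch, out_ch, 1, bias=False))
        self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1, bias=False)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.pool(x), size=x.shape[-2:],
                               mode='bilinear', align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))
```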
Fig.5  Schematic of atrous convolution
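Atrous (dilated) convolution, shown schematically in Fig.5, spreads a small kernel over a dilated sampling grid, enlarging the receptive field without extra weights or downsampling. A minimal PyTorch check of both properties:

```python
import torch
import torch.nn as nn

# A 3x3 kernel with dilation r covers a (2r+1)x(2r+1) window; setting
# padding = r keeps the output the same size as the input.
dense  = nn.Conv2d(1, 1, kernel_size=3, padding=1, dilation=1, bias=False)
atrous = nn.Conv2d(1, 1, kernel_size=3, padding=2, dilation=2, bias=False)

x = torch.randn(1, 1, 64, 64)
assert dense(x).shape == atrous(x).shape                 # same spatial size
assert sum(p.numel() for p in dense.parameters()) == \
       sum(p.numel() for p in atrous.parameters())       # same 9 weights
```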
Fig.6  Schematic of depthwise separable convolution
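A depthwise separable convolution (Fig.6) factorizes a standard convolution into a per-channel spatial filter followed by a 1×1 pointwise mix, cutting k·k·Cin·Cout weights down to k·k·Cin + Cin·Cout; in the cited DeepLabv3+ design this is combined with dilation to give the atrous separable convolution. A minimal sketch (the helper name is illustrative):

```python
import torch.nn as nn

def separable_conv(in_ch, out_ch, k=3, dilation=1):
    """Depthwise kxk filter per input channel, then 1x1 pointwise mixing.
    With dilation > 1 this becomes an atrous separable convolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, k, padding=dilation * (k // 2),
                  dilation=dilation, groups=in_ch, bias=False),  # depthwise
        nn.Conv2d(in_ch, out_ch, 1, bias=False))                 # pointwise

# For in_ch = out_ch = 256, k = 3: 3*3*256 + 256*256 = 67 840 weights,
# versus 3*3*256*256 = 589 824 for a standard convolution.
```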
Fig.7  Six validation slices of the test images and their ground-truth (GT) labels
Extraction results comparison of traditional methods

Metric       Method       Slice 1  Slice 2  Slice 3  Slice 4  Slice 5  Slice 6   Mean
Precision/%  ML             52.28    41.18    44.77    75.87    50.40    63.70  54.70
             SVM            68.42    54.87    69.74    76.37    43.71    58.39  61.92
             RF             56.64    59.30    59.97    69.14    41.69    60.55  57.88
             DeepLabv3+     88.63    86.54    88.20    86.49    81.23    86.99  86.35
Recall/%     ML             60.13    61.61    59.26    78.12    67.67    63.08  64.98
             SVM            79.67    70.57    67.88    80.99    67.94    64.72  71.96
             RF             59.37    68.95    65.17    75.68    69.27    64.09  67.09
             DeepLabv3+     83.42    87.11    86.96    78.73    88.08    86.26  85.09
F            ML              0.56     0.50     0.51     0.77     0.56     0.63   0.59
             SVM             0.74     0.62     0.69     0.79     0.53     0.61   0.66
             RF              0.58     0.64     0.62     0.72     0.52     0.62   0.62
             DeepLabv3+      0.86     0.87     0.88     0.82     0.85     0.87   0.86
OA/%         ML             73.99    65.29    64.12    81.11    74.94    72.42  71.98
             SVM            84.32    75.98    80.60    82.18    71.41    69.19  77.68
             RF             76.37    78.48    75.32    76.51    69.48    70.61  74.46
             DeepLabv3+     92.51    92.74    92.23    87.24    92.28    90.68  91.02
Tab.2  Accuracy comparison of traditional methods and DeepLabv3+
Tab.3  Extraction results of semantic segmentation networks

Metric       Method       Slice 1  Slice 2  Slice 3  Slice 4  Slice 5  Slice 6   Mean
Precision/%  PspNet         74.13    72.81    82.62    83.78    73.85    72.36  76.59
             SegNet         76.34    70.09    81.94    85.79    78.66    78.69  78.59
             U-Net          85.83    77.58    78.42    81.42    75.62    78.67  79.59
             DeepLabv2      74.09    81.41    81.03    86.19    72.91    74.30  78.32
             DeepLabv3+     88.63    86.54    88.20    86.49    81.23    86.99  86.35
Recall/%     PspNet         81.62    71.18    83.63    75.21    72.14    75.52  76.55
             SegNet         80.83    85.19    78.70    73.68    74.20    85.67  79.71
             U-Net          80.31    86.99    78.10    73.30    72.55    85.41  79.45
             DeepLabv2      78.31    81.52    88.82    81.49    74.80    79.81  80.79
             DeepLabv3+     83.42    87.11    86.96    78.73    88.08    86.26  85.09
F            PspNet          0.77     0.72     0.83     0.79     0.73     0.74   0.77
             SegNet          0.79     0.77     0.80     0.79     0.76     0.82   0.79
             U-Net           0.83     0.82     0.78     0.77     0.75     0.80   0.79
             DeepLabv2       0.76     0.81     0.85     0.84     0.75     0.76   0.79
             DeepLabv3+      0.86     0.87     0.88     0.82     0.85     0.87   0.86
OA/%         PspNet         87.13    84.79    89.30    84.09    79.79    87.25  85.39
             SegNet         87.86    85.95    87.82    84.42    85.80    89.02  86.81
             U-Net          90.96    89.53    86.32    82.44    84.05    88.73  87.01
             DeepLabv2      86.53    89.81    89.92    87.24    81.13    87.79  87.07
             DeepLabv3+     92.51    92.74    92.23    86.43    89.92    92.28  91.02
Tab.4  Accuracy comparison of semantic segmentation networks
Fig.8  Extraction results for the entire areas E and F based on the proposed architecture
                       Extraction result
Ground truth     Green space   Non-green    Total
Green space            6 598         325    6 923
Non-green                253       2 824    3 077
Total                  6 851       3 149   10 000
Recall/%      95.30
Precision/%   96.30
F              0.95
OA/%          94.22
Tab.5  Accuracy results for the entire area E
                       Extraction result
Ground truth     Green space   Non-green    Total
Green space            6 811         523    7 334
Non-green                169       2 497    2 666
Total                  6 980       3 020   10 000
Recall/%      92.87
Precision/%   97.58
F              0.95
OA/%          93.08
Tab.6  Accuracy results for area F
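All scores in Tab.2 through Tab.6 derive from the same four confusion-matrix counts. As a consistency check, the Tab.6 metrics can be recomputed from its counts with a few lines of Python:

```python
# Counts from Tab.6 (rows: ground truth, columns: extraction result).
tp, fn = 6811, 523   # green-space pixels correctly extracted / missed
fp, tn = 169, 2497   # non-green pixels wrongly extracted / correctly rejected

precision = tp / (tp + fp)                                 # 0.9758
recall    = tp / (tp + fn)                                 # 0.9287
f_score   = 2 * precision * recall / (precision + recall)  # 0.95
oa        = (tp + tn) / (tp + fn + fp + tn)                # 0.9308
print(f"P={precision:.2%}  R={recall:.2%}  F={f_score:.2f}  OA={oa:.2%}")
```

This reproduces the reported 97.58% precision, 92.87% recall, F = 0.95, and 93.08% overall accuracy.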
[1] Liu X P, Deng R R, Peng X J. An automatic extraction model of urban greenbelt remote sensing image: A study on Guangzhou City[J]. Areal Research and Development, 2005,24(5):110-113. (in Chinese)
[2] Cheng C R. Information extraction and landscape pattern of urban greenbelt based on GF-1 satellite image: A case study of Lanzhou[D]. Lanzhou: Lanzhou Jiaotong University, 2017. (in Chinese)
[3] Liu C, Guo Z, Fu N. Applying a new integrated classification method to monitor shifting mangrove wetlands [C]// International Conference on Multimedia Technology. 2010.
[4] Zylshal, Sulma S, Yulianto F, et al. A support vector machine object-based image analysis approach on urban green space extraction using Pleiades-1A imagery[J]. Modeling Earth Systems and Environment, 2016,2(2):54.
[5] Xu L, Ming D, Zhou W, et al. Farmland extraction from high spatial resolution remote sensing images based on stratified scale pre-estimation[J]. Remote Sensing, 2019,11:108.
[6] Zujovic J, Pappas T N, Neuhoff D L. Structural texture similarity metrics for image analysis and retrieval[J]. IEEE Transactions on Image Processing, 2013,22(7):2545-2558.
[7] Durduran S S. Automatic classification of high resolution land cover using a new data weighting procedure:The combination of K-means clustering algorithm and central tendency measures(KMC-CTM)[J]. Applied Soft Computing, 2015,35(1):136-150.
[8] Gidaris S, Komodakis N. Object detection via a multi-region and semantic segmentation-aware CNN model[C]// IEEE International Conference on Computer Vision. IEEE, 2015.
[9] Marmanis D, Wegner J D, Galliani S, et al. Semantic segmentation of aerial images with an ensemble of CNNs[C]// ISPRS Congress. 2016.
[10] Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017,39(4):640-651.
[11] Zheng S, Jayasumana S, Romera-Paredes B, et al. Conditional random fields as recurrent neural networks[C]// IEEE International Conference on Computer Vision (ICCV). IEEE, 2015.
[12] Badrinarayanan V, Kendall A, Cipolla R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017,39(12):2481-2495.
[13] Zhao H S, Shi J P, Qi X J, et al. Pyramid scene parsing network[EB/OL]. (2017-04-27)[2019-07-19]. https://arxiv.org/abs/1612.01105.
[14] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation[C]// International Conference on Medical Image Computing and Computer-Assisted Intervention. 2015:234-241.
[15] Chen L C, Papandreou G, Kokkinos I, et al. Semantic image segmentation with deep convolutional nets and fully connected CRFs[J]. Computer Science, 2014(4):357-361.
[16] Chen T H, Zheng S Q, Yu J C. Remote sensing image segmentation based on improved DeepLab network[J]. Measurement and Control Technology, 2018,37(11):34-39. (in Chinese)
[17] Wang Y, Sun S, Yu J, et al. Skin lesion segmentation using atrous convolution via DeepLabv3[EB/OL]. (2018-06-24)[2019-07-19]. https://arxiv.org/abs/1807.08891.
[18] King A, Bhandarkar S M, Hopkinson B M. A comparison of deep learning methods for semantic segmentation of coral reef survey images[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2018.
[19] Herranz-Perdiguero C, Redondo-Cabrera C, López-Sastre R J. In pixels we trust: From pixel labeling to object localization and scene categorization[EB/OL]. (2018-06-19)[2019-07-19]. https://arxiv.org/abs/1807.07284.
[20] Chen Y K, Meng G F, Zhang Q, et al. Reinforced evolutionary neural architecture search[EB/OL]. (2019-04-10)[2019-07-19]. https://arxiv.org/abs/1808.00193.
[21] Chen L C, Papandreou G, Kokkinos I, et al. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018,40(4):834-848.
[22] Chen L C, Papandreou G, Schroff F, et al. Rethinking atrous convolution for semantic image segmentation[EB/OL]. (2017-05-12)[2019-07-19]. https://arxiv.org/abs/1706.05587.
[23] Chen L C, Zhu Y, Papandreou G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation[EB/OL]. (2018-08-22)[2019-07-19]. https://arxiv.org/abs/1802.02611v1.
[24] Pan T, Guan H, He W. GF-2 satellite remote sensing technology[J]. Spacecraft Recovery and Remote Sensing, 2015,36(4):16-24. (in Chinese)
[25] Yang J Y, Zhou Z X, Du Z R, et al. Rural construction land extraction from high spatial resolution remote sensing image based on SegNet semantic segmentation model[J]. Transactions of the Chinese Society of Agricultural Engineering, 2019,35(5):251-258. (in Chinese)
[26] Chen J, Chen T Q, Mei X M, et al. Hilly farmland extraction from high resolution remote sensing imagery based on optimal scale selection[J]. Transactions of the Chinese Society of Agricultural Engineering, 2014,30(5):99-107. (in Chinese)