Remote Sensing for Land & Resources, 2020, Vol. 32, Issue (2): 120-129    DOI: 10.6046/gtzyyg.2020.02.16
Technology and Methodology
Urban green space extraction from GF-2 remote sensing image based on DeepLabv3+ semantic segmentation model
Wenya LIU1,2,3, Anzhi YUE2,3, Jue JI4, Weihua SHI4(), Ruru DENG1,5,6, Yeheng LIANG1, Longhai XIONG1
1. School of Geography and Planning, Sun Yat-sen University, Guangzhou 510275, China
2. National Engineering Laboratory for Integrated Air-Space-Ground-Ocean Big Data Application Technology, Beijing 100101, China
3. Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100101, China
4. Urban and Rural Planning Management Center, Ministry of Housing and Urban-Rural Development of the People’s Republic of China, Beijing 100101, China
5. Guangdong Provincial Key Laboratory of Urbanization and Geo-Simulation, Guangzhou 510275, China
6. Guangdong Engineering Research Center of Water Environment Remote Sensing Monitoring, Guangzhou 510275, China

Abstract

The efficient and accurate extraction of urban green space (UGS) is of great significance to land planning and construction, and applying deep learning semantic segmentation algorithms to remote sensing image classification is a recent line of research. This paper describes an automated architecture for UGS extraction from GF-2 imagery based on the DeepLabv3+ semantic segmentation network. High-level features are extracted through the network’s atrous spatial pyramid pooling (ASPP) and other modules, and the architecture covers data set creation, model training, UGS extraction, and accuracy evaluation. Accuracy evaluation shows that the proposed architecture reaches an overall accuracy of 91.02% and an F score of 0.86, outperforming three traditional machine learning methods (maximum likelihood (ML), support vector machine (SVM), and random forest (RF)) as well as four other semantic segmentation networks (PspNet, SegNet, U-Net, and DeepLabv2); in particular, it extracts UGS accurately while excluding the interference of farmland pixels. The authors also explored the portability of the method by applying the model to another city, which confirmed a degree of transferability. Overall, the proposed automatic architecture can extract UGS accurately and efficiently from high-spatial-resolution RGB remote sensing images, and provides a reference for urban planning and management.
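The ASPP module mentioned above can be illustrated with a small numerical sketch. The following NumPy code is an illustrative toy, not the authors' implementation: the single-channel feature map, the fixed averaging kernel, and the pass-through 1×1 branch are all simplifying assumptions. It stacks DeepLabv3+-style parallel branches: a 1×1 stand-in, 3×3 atrous branches at rates 6, 12, and 18, and image-level pooling.

```python
import numpy as np

def atrous_conv(x, kernel, rate):
    """'Same'-padded 2-D convolution with a zero-dilated k x k kernel."""
    k = kernel.shape[0]
    span = (k - 1) * rate          # pixels spanned beyond the first tap
    xp = np.pad(x, span // 2)      # zero padding keeps the output size
    out = np.zeros(x.shape)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # Sample k taps per axis, `rate` pixels apart.
            patch = xp[i:i + span + 1:rate, j:j + span + 1:rate]
            out[i, j] = float(np.sum(patch * kernel))
    return out

def aspp(x, rates=(6, 12, 18)):
    """Stack parallel atrous branches plus global average pooling.

    Returns shape (len(rates) + 2, H, W): a 1x1-conv stand-in, one
    3x3 atrous branch per rate, and an image-level pooling branch.
    """
    k = np.full((3, 3), 1.0 / 9.0)   # toy smoothing kernel (assumed)
    branches = [x.astype(float)]      # stand-in for the 1x1 branch
    branches += [atrous_conv(x, k, r) for r in rates]
    branches.append(np.full_like(x, x.mean(), dtype=float))
    return np.stack(branches)
```

In the real network each branch is a learned convolution over many channels and the stacked output is fused by a further 1×1 convolution; the sketch only shows how the parallel rates sample context at several scales.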

Key words: urban green space; DeepLab; semantic segmentation; deep learning; GF-2
Received: 2019-07-19; Published online: 2020-06-18
CLC number: TP79
Funds: National Science and Technology Major Project “Rapid Identification of Forest/Non-forest Types from GF-6 Wide-Field Imagery” (21-Y20A06-9001-17/18-3); National Key Research and Development Program of China “Key Technologies for High-Resolution Remote Sensing and Ground Collaborative Monitoring of Urban Ecological Resources” (2017YFB0503903); National Science and Technology Major Project “High-Resolution Earth Observation System” (03-Y20A04-9001-17/18); Guangdong Provincial Science and Technology Program “High-Resolution Remote Sensing Monitoring and Early Warning of Air Pollution in the Pearl River Delta” (2017B020216001); Fundamental Research Funds for the Central Universities “Remote Sensing Retrieval of Shallow Groundwater Depth in Bare Soil Areas” (19lgpy45)
Corresponding author: SHI Weihua
About the first author: LIU Wenya (1995-), female, master’s student, mainly engaged in deep learning image processing research. Email: liuwy28@mail2.sysu.edu.cn.
Cite this article:
Wenya LIU, Anzhi YUE, Jue JI, Weihua SHI, Ruru DENG, Yeheng LIANG, Longhai XIONG. Urban green space extraction from GF-2 remote sensing image based on DeepLabv3+ semantic segmentation model. Remote Sensing for Land & Resources, 2020, 32(2): 120-129.
Link to this article:
https://www.gtzyyg.com/CN/10.6046/gtzyyg.2020.02.16      或      https://www.gtzyyg.com/CN/Y2020/V32/I2/120
Fig.1  Distribution of sample data sources
Fig.2  Workflow of automatic UGS extraction from GF-2 imagery
Fig.3  Overall classification accuracy versus training sample scale
Fig.4  DeepLabv3+ network architecture
Fig.5  Schematic of atrous convolution
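As a companion to the schematic in Fig.5, a minimal NumPy sketch (illustrative only, not the authors' code) shows the core idea of atrous convolution: dilating a k×k kernel at rate r by inserting r-1 zeros between taps enlarges the receptive field to (k-1)·r+1 pixels per side without adding any weights.

```python
import numpy as np

def dilate_kernel(k, rate):
    """Insert (rate - 1) zeros between the taps of a 2-D kernel.

    A k x k kernel dilated at `rate` spans (k - 1) * rate + 1 pixels
    per side, enlarging the receptive field with no extra weights.
    """
    kh, kw = k.shape
    out = np.zeros(((kh - 1) * rate + 1, (kw - 1) * rate + 1), dtype=k.dtype)
    out[::rate, ::rate] = k
    return out

def atrous_conv2d(x, k, rate):
    """Valid-mode 2-D atrous convolution via the zero-dilated kernel."""
    dk = dilate_kernel(k, rate)
    h, w = dk.shape
    out = np.empty((x.shape[0] - h + 1, x.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * dk)
    return out
```

For example, a 3×3 kernel at rate 2 covers a 5×5 window while still holding only nine weights, which is why DeepLab can widen its field of view cheaply.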
Fig.6  Schematic of depthwise separable convolution
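The depthwise separable convolution of Fig.6 factors a standard convolution into a per-channel spatial filter followed by a 1×1 pointwise mix. A back-of-the-envelope sketch (parameter counts only, bias terms omitted) shows the saving:

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in a standard k x k convolution layer (no bias)."""
    return k * k * c_in * c_out

def separable_params(k: int, c_in: int, c_out: int) -> int:
    """Depthwise k x k filter per input channel, then a 1 x 1
    pointwise convolution that mixes channels."""
    return k * k * c_in + c_in * c_out

# Example: a 3 x 3 layer with 256 input and 256 output channels.
standard = conv_params(3, 256, 256)        # 589824 weights
separable = separable_params(3, 256, 256)  # 67840 weights, ~8.7x fewer
```

This factorization is what lets backbones built from separable convolutions keep accuracy while cutting parameters and computation.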
Fig.7  Six test image slices and corresponding ground truth
Comparison of UGS extraction results between DeepLabv3+ and three traditional methods

Metric       Method      Slice 1  Slice 2  Slice 3  Slice 4  Slice 5  Slice 6  Mean
Precision/%  ML            52.28    41.18    44.77    75.87    50.40    63.70   54.70
             SVM           68.42    54.87    69.74    76.37    43.71    58.39   61.92
             RF            56.64    59.30    59.97    69.14    41.69    60.55   57.88
             DeepLabv3+    88.63    86.54    88.20    86.49    81.23    86.99   86.35
Recall/%     ML            60.13    61.61    59.26    78.12    67.67    63.08   64.98
             SVM           79.67    70.57    67.88    80.99    67.94    64.72   71.96
             RF            59.37    68.95    65.17    75.68    69.27    64.09   67.09
             DeepLabv3+    83.42    87.11    86.96    78.73    88.08    86.26   85.09
F            ML             0.56     0.50     0.51     0.77     0.56     0.63    0.59
             SVM            0.74     0.62     0.69     0.79     0.53     0.61    0.66
             RF             0.58     0.64     0.62     0.72     0.52     0.62    0.62
             DeepLabv3+     0.86     0.87     0.88     0.82     0.85     0.87    0.86
OA/%         ML            73.99    65.29    64.12    81.11    74.94    72.42   71.98
             SVM           84.32    75.98    80.60    82.18    71.41    69.19   77.68
             RF            76.37    78.48    75.32    76.51    69.48    70.61   74.46
             DeepLabv3+    92.51    92.74    92.23    87.24    92.28    90.68   91.02
Tab.2  Accuracy comparison with traditional methods
Tab.3  Comparison of UGS extraction results between semantic segmentation networks

Metric       Method      Slice 1  Slice 2  Slice 3  Slice 4  Slice 5  Slice 6  Mean
Precision/%  PspNet        74.13    72.81    82.62    83.78    73.85    72.36   76.59
             SegNet        76.34    70.09    81.94    85.79    78.66    78.69   78.59
             U-Net         85.83    77.58    78.42    81.42    75.62    78.67   79.59
             DeepLabv2     74.09    81.41    81.03    86.19    72.91    74.30   78.32
             DeepLabv3+    88.63    86.54    88.20    86.49    81.23    86.99   86.35
Recall/%     PspNet        81.62    71.18    83.63    75.21    72.14    75.52   76.55
             SegNet        80.83    85.19    78.70    73.68    74.20    85.67   79.71
             U-Net         80.31    86.99    78.10    73.30    72.55    85.41   79.45
             DeepLabv2     78.31    81.52    88.82    81.49    74.80    79.81   80.79
             DeepLabv3+    83.42    87.11    86.96    78.73    88.08    86.26   85.09
F            PspNet         0.77     0.72     0.83     0.79     0.73     0.74    0.77
             SegNet         0.79     0.77     0.80     0.79     0.76     0.82    0.79
             U-Net          0.83     0.82     0.78     0.77     0.75     0.80    0.79
             DeepLabv2      0.76     0.81     0.85     0.84     0.75     0.76    0.79
             DeepLabv3+     0.86     0.87     0.88     0.82     0.85     0.87    0.86
OA/%         PspNet        87.13    84.79    89.30    84.09    79.79    87.25   85.39
             SegNet        87.86    85.95    87.82    84.42    85.80    89.02   86.81
             U-Net         90.96    89.53    86.32    82.44    84.05    88.73   87.01
             DeepLabv2     86.53    89.81    89.92    87.24    81.13    87.79   87.07
             DeepLabv3+    92.51    92.74    92.23    86.43    89.92    92.28   91.02
Tab.4  Accuracy comparison of semantic segmentation networks
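The four indices reported in Tab.2 through Tab.4 follow the standard binary-classification definitions over pixel counts. A small sketch (a hypothetical helper, not from the paper) makes the arithmetic explicit and reproduces the area-E figures of Tab.5 up to rounding:

```python
def ugs_metrics(tp: int, fp: int, fn: int, tn: int):
    """Precision, recall, F score and overall accuracy (OA) for a
    binary urban-green-space / non-green-space pixel classification."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f = 2 * precision * recall / (precision + recall)
    oa = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f, oa

# Area-E pixel counts from Tab.5: 6598 UGS pixels correctly extracted,
# 325 UGS pixels missed, 253 false alarms, 2824 true negatives.
p, r, f, oa = ugs_metrics(tp=6598, fp=253, fn=325, tn=2824)
# p ~ 0.963, r ~ 0.953, oa = 0.9422, agreeing with Tab.5 up to rounding.
```

The same function applied to the area-F counts of Tab.6 reproduces those figures as well.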
Fig.8  UGS extraction results for areas E and F based on the proposed architecture
Truth        Extracted: UGS   Extracted: non-green    Total
UGS                   6 598                    325    6 923
Non-green               253                  2 824    3 077
Total                 6 851                  3 149   10 000
Recall/%      95.30
Precision/%   96.30
F              0.95
OA/%          94.22
Tab.5  Extraction accuracy for area E
Truth        Extracted: UGS   Extracted: non-green    Total
UGS                   6 811                    523    7 334
Non-green               169                  2 497    2 666
Total                 6 980                  3 020   10 000
Recall/%      92.87
Precision/%   97.58
F              0.95
OA/%          93.08
Tab.6  Extraction accuracy for area F