 
Remote Sensing for Natural Resources, 2024, Vol. 36, Issue (3): 216-224    DOI: 10.6046/zrzyyg.2023094
Technical Applications
Identifying discolored trees infected with pine wilt disease using DSSN-based UAV remote sensing
ZHANG Ruirui1,2,3(), XIA Lang1,2,3, CHEN Liping1,2,3(), DING Chenchen1,2,3, ZHENG Aichun4, HU Xinmiao4, YI Tongchuan1,2,3, CHEN Meixiang1,2,3, CHEN Tianen1,2,3,5
1. Beijing Research Center of Intelligent Equipment for Agriculture, Beijing Academy of Agricultural and Forestry Sciences, Beijing 100097, China
2. National Research Center of Intelligent Equipment for Agriculture, Beijing Academy of Agricultural and Forestry Sciences, Beijing 100097, China
3. National Center for International Research on Agricultural Aerial Application Technology, Beijing Academy of Agricultural and Forestry Sciences, Beijing 100097, China
4. Nanjing Pukou District Forestry Station, Nanjing 211899, China
5. Nongxin (Nanjing) Intelligent Agricultural Research Institute Co., Ltd., Nanjing 211899, China

Abstract

Pine wilt disease (PWD) is a major disease endangering forest resources in China. Investigating deep semantic segmentation network (DSSN)-based unmanned aerial vehicle (UAV) remote sensing can improve the identification accuracy of discolored trees infected with PWD and provide technical support for enhancing and protecting the quality of forest resources. Focusing on the pine forest of Laoshan Mountain in Qingdao, this study obtained images of suspected discolored trees through aerial photography using a fixed-wing UAV. Four deep semantic segmentation models, namely the fully convolutional network (FCN), U-Net, DeepLabV3+, and the object context network (OCNet), were examined, and their segmentation accuracy was assessed using recall, precision, intersection over union (IoU), and F1 score. From the 2 688 images acquired, 28 800 training samples were generated through manual labeling and sample amplification. The results indicate that all four models can effectively identify the discolored trees infected with PWD, with no significant false alarms. Furthermore, the models reliably distinguished surface features with similar colors, such as rocks and yellow bare soil. Overall, DeepLabV3+ outperformed the remaining three models, with an IoU of 0.711 and an F1 score of 0.829. In contrast, the FCN model exhibited the lowest segmentation accuracy, with an IoU of 0.699 and an F1 score of 0.812. DeepLabV3+ was the least time-consuming in training, requiring merely 27.2 ms per image, while FCN was the least time-consuming in prediction, with only 7.2 ms per image; however, FCN exhibited the lowest edge segmentation accuracy for discolored trees. Three DeepLabV3+ models constructed using ResNet50, ResNet101, and ResNet152 as front-end feature extraction networks exhibited IoU values of 0.711, 0.702, and 0.702 and F1 scores of 0.829, 0.822, and 0.820, respectively.
DeepLabV3+ surpassed DeepLabV3 in the identification accuracy of discolored trees, with the latter showing an IoU of 0.701 and an F1 score of 0.812. The test data revealed that DeepLabV3+ exhibited the highest identification accuracy for discolored trees, while the depth of the ResNet feature extraction network had only a minor impact on accuracy. The encoder-decoder structure introduced by DeepLabV3+ significantly improves the segmentation accuracy of DeepLabV3 and yields more detailed edges. Therefore, DeepLabV3+ is more favorable for identifying discolored trees infected with PWD.
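The four evaluation metrics named above can be sketched for a binary segmentation mask. A minimal pure-Python illustration on toy 4×4 masks (illustrative only, not the paper's data or implementation):

```python
# Hedged sketch: pixel-level Precision, Recall, IoU, and F1 for a binary
# segmentation mask (1 = discolored tree, 0 = background). Toy masks only.

def confusion(pred, truth):
    """Count true positives, false positives, and false negatives."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    return tp, fp, fn

def metrics(pred, truth):
    tp, fp, fn = confusion(pred, truth)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    iou = tp / (tp + fp + fn)                      # intersection over union
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, iou, f1

truth = [1, 1, 0, 0,
         1, 1, 0, 0,
         0, 0, 0, 0,
         0, 0, 0, 0]
pred  = [1, 1, 0, 0,
         1, 0, 0, 0,
         0, 1, 0, 0,
         0, 0, 0, 0]

p, r, iou, f1 = metrics(pred, truth)
print(p, r, iou, f1)   # 0.75 0.75 0.6 0.75
```

Note that IoU penalizes errors more heavily than F1 (0.6 vs. 0.75 here), which is why the IoU values reported in the abstract sit below the corresponding F1 scores.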

Key words: UAV remote sensing; discolored tree; deep learning
Received: 2023-04-18      Published online: 2024-09-03
CLC number: TP79
Funding: National Key Research and Development Program of China "Mechanism and Sustainable Control Technology of Pine Wilt Disease" (2021YFD1400900); Nanjing Enterprise Academician Workstation key core technology project "Development and Application of an Intelligent Prevention and Control System for Pine Wilt Disease in Forestry"; Beijing Academy of Agriculture and Forestry Sciences innovation capacity building project "Research and Development of an Intelligent Platform for Monitoring and Early Warning of Major Agricultural and Forest Pests" (KJCX20230205)
Corresponding author: CHEN Liping (1973-), female, Ph.D., research professor; her research focuses on intelligent agricultural equipment technology and its applications. Email: chenlp@nercita.org.cn
First author: ZHANG Ruirui (1983-), male, Ph.D., research professor; his research focuses on agricultural and forestry aerial application technology. Email: zhangrr@nercita.org.cn
Cite this article:
ZHANG Ruirui, XIA Lang, CHEN Liping, DING Chenchen, ZHENG Aichun, HU Xinmiao, YI Tongchuan, CHEN Meixiang, CHEN Tianen. Identifying discolored trees infected with pine wilt disease using DSSN-based UAV remote sensing. Remote Sensing for Natural Resources, 2024, 36(3): 216-224.
Link to this article:
https://www.gtzyyg.com/CN/10.6046/zrzyyg.2023094      或      https://www.gtzyyg.com/CN/Y2024/V36/I3/216
Fig.1  Geographic location of the study area
Fig.2  Examples of UAV images and model training samples
Model        Source                        Parameters   Backbone       Key feature
FCN          Long et al., 2015[15]         15 305 667   Xception[25]   milestone segmentation network
U-Net        Ronneberger et al., 2015[18]  26 355 169   ResNet[26]     symmetric segmentation network
DeepLabV3+   Chen et al., 2018[21]         74 982 817   ResNet         atrous convolution, ASPP
OCNet        Yuan et al., 2018[22]         36 040 105   ResNet         attention mechanism
Tab.1  Model parameters
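Tab.1 lists atrous convolution as a key feature of DeepLabV3+. A minimal 1-D pure-Python sketch (illustrative only, not the paper's implementation) shows how dilation enlarges the receptive field without adding parameters:

```python
# Hedged sketch: 1-D atrous (dilated) convolution, the operation behind
# DeepLabV3+'s ASPP module. A kernel of size k with dilation d covers a
# receptive field of (k - 1) * d + 1 input samples.

def dilated_conv1d(x, kernel, dilation):
    """Valid-mode 1-D convolution with `dilation - 1` gaps between taps."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    out = []
    for i in range(len(x) - span + 1):
        out.append(sum(kernel[j] * x[i + j * dilation] for j in range(k)))
    return out

x = [1, 2, 3, 4, 5, 6, 7, 8]
kernel = [1, 1, 1]

# dilation=1 is an ordinary convolution (receptive field 3);
# dilation=2 samples every other pixel (receptive field 5), same 3 weights.
print(dilated_conv1d(x, kernel, 1))   # [6, 9, 12, 15, 18, 21]
print(dilated_conv1d(x, kernel, 2))   # [9, 12, 15, 18]
```

ASPP applies several such convolutions with different dilation rates in parallel, letting the network aggregate context at multiple scales.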
Fig.3  Model training loss and F1 score
[Image grid: UAV image, ground truth, and FCN, U-Net, DeepLabV3+, and OCNet segmentation results for sample scenes a-e]
Tab.2  Model segmentation results
Model        IoU     F1      Precision   Recall   Training/(ms·image-1)   Prediction/(ms·image-1)
FCN          0.699   0.812   0.821       0.804    53.1                    7.2
U-Net        0.710   0.825   0.821       0.828    44.5                    20.5
DeepLabV3+   0.711   0.829   0.826       0.833    27.2                    14.8
OCNet        0.706   0.820   0.824       0.817    47.6                    16.9
Tab.3  Model accuracy and performance
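As a sanity check, the F1 column of Tab.3 is the harmonic mean of the Precision and Recall columns, F1 = 2PR/(P + R); the identity holds to within the rounding of the published three-decimal values:

```python
# Hedged check: recompute F1 from the Precision and Recall columns of Tab.3
# and compare with the reported F1. Exact agreement is not expected, since
# precision and recall are themselves rounded to three decimals.

rows = {                      # model: (precision, recall, reported F1)
    "FCN":        (0.821, 0.804, 0.812),
    "U-Net":      (0.821, 0.828, 0.825),
    "DeepLabV3+": (0.826, 0.833, 0.829),
    "OCNet":      (0.824, 0.817, 0.820),
}

for name, (p, r, f1) in rows.items():
    harmonic = 2 * p * r / (p + r)
    assert abs(harmonic - f1) < 1e-3, (name, harmonic)
    print(f"{name}: recomputed F1 = {harmonic:.4f}, reported {f1}")
```

All four rows agree to within one unit in the third decimal, consistent with rounding of the published precision and recall values.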
[Image grid: UAV image, ground truth, and segmentation results of DeepLabV3+ (ResNet50), DeepLabV3+ (ResNet101), DeepLabV3+ (ResNet152), and DeepLabV3 (ResNet50) for sample scenes a-e]
Tab.4  Segmentation results of DeepLabV3+ models based on different ResNet backbones
Model                     IoU     F1      Precision   Recall
DeepLabV3+ (ResNet50)     0.711   0.829   0.826       0.833
DeepLabV3+ (ResNet101)    0.702   0.822   0.847       0.798
DeepLabV3+ (ResNet152)    0.702   0.820   0.846       0.796
DeepLabV3 (ResNet50)      0.701   0.812   0.813       0.811
Tab.5  Model accuracy
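For a single confusion matrix, IoU (Jaccard) and F1 (Dice) are linked exactly by IoU = F1/(2 - F1). Tab.5's scores follow this relation only approximately, presumably because the published values are aggregated over many test images, where the per-matrix identity no longer holds exactly:

```python
# Hedged sketch: the exact Jaccard-Dice relation IoU = F1 / (2 - F1) on one
# confusion matrix, and its approximate behavior on Tab.5's aggregate scores.

def iou_from_f1(f1):
    return f1 / (2 - f1)

# Exact on a single confusion matrix (toy counts: tp=3, fp=1, fn=1).
tp, fp, fn = 3, 1, 1
f1 = 2 * tp / (2 * tp + fp + fn)        # 0.75
iou = tp / (tp + fp + fn)               # 0.60
assert abs(iou_from_f1(f1) - iou) < 1e-12

# Approximate on Tab.5's DeepLabV3+ (ResNet50) row: predicted ~0.708
# versus the reported 0.711, a plausible gap for image-averaged scores.
print(round(iou_from_f1(0.829), 3))
```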
[1] Proença D N, Grass G, Morais P V. Understanding pine wilt disease:Roles of the pine endophytic bacteria and of the bacteria carried by the disease-causing pinewood nematode[J]. MicrobiologyOpen, 2017, 6(2):e00415.
[2] Zhang R R, Xia L, Chen L P, et al. Recognition of wilt wood caused by pine wilt nematode based on U-Net network and unmanned aerial vehicle images[J]. Transactions of the Chinese Society of Agricultural Engineering, 2020, 36(12): 61-68. (in Chinese)
[3] Xu X L, Tao H, Li C J, et al. Detection and location of pine wilt disease induced dead pine trees based on Faster R-CNN[J]. Transactions of the Chinese Society for Agricultural Machinery, 2020, 51(7): 228-236. (in Chinese)
[4] Ye J R. Epidemic status of pine wilt disease in China and its prevention and control techniques and counter measures[J]. Scientia Silvae Sinicae, 2019, 55(9): 1-10. (in Chinese)
[5] State Forestry and Grassland Administration. Announcement of the National Forestry and Grassland Administration (No.4 of 2020) (Pine wood nematode disease epidemic area in 2020)[EB/OL]. [2020-03-16]. http://www.forestry.gov.cn/main/3457/20200326/145712092854308.html. (in Chinese)
[6] Xu Q Y, Li Y, Tan J, et al. Information extraction method of mangrove forests based on GF-6 data[J]. Remote Sensing for Natural Resources, 2023, 35(1): 41-48. doi:10.6046/zrzyyg.2022048. (in Chinese)
[7] Xia L, Zhang R, Chen L, et al. Evaluation of deep learning segmentation models for detection of pine wilt disease in unmanned aerial vehicle images[J]. Remote Sensing, 2021, 13(18):3594.
[8] Wu Q. Research on Bursaphelenchus xylophilus area detection based on remote sensing image[D]. Hefei: Anhui University, 2013. (in Chinese)
[9] Zeng Q, Sun H F, Yang Y L, et al. Precision comparison for pine wood nematode disease monitoring by UAV[J]. Journal of Sichuan Forestry Science and Technology, 2019, 40(3): 92-95, 114. (in Chinese)
[10] Iordache M D, Mantas V, Baltazar E, et al. A machine learning approach to detecting pine wilt disease using airborne spectral imagery[J]. Remote Sensing, 2020, 12(14):2280.
[11] Syifa M, Park S J, Lee C W. Detection of the pine wilt disease tree candidates for drone remote sensing using artificial intelligence techniques[J]. Engineering, 2020, 6(8):919-926.
[12] Xia L, Zhao F, Chen J, et al. A full resolution deep learning network for paddy rice mapping using Landsat data[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2022, 194:91-107.
[13] Hu J W, Wang Z P, Hu P. A review of pansharpening methods based on deep learning[J]. Remote Sensing for Natural Resources, 2023, 35(1): 1-14. doi:10.6046/zrzyyg.2021433. (in Chinese)
[14] Jin Y H, Xu M L, Zheng J Y. A dead tree detection algorithm based on improved YOLOv4-tiny for UAV images[J]. Remote Sensing for Natural Resources, 2023, 35(1): 90-98. doi:10.6046/zrzyyg.2022018. (in Chinese)
[15] Shelhamer E, Long J, Darrell T. Fully convolutional networks for semantic segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(4): 640-651. doi:10.1109/TPAMI.2016.2572683.
[16] Zhao H, Shi J, Qi X, et al. Pyramid scene parsing network[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA: IEEE, 2017: 6230-6239.
[17] Badrinarayanan V, Kendall A, Cipolla R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(12): 2481-2495. doi:10.1109/TPAMI.2016.2644615.
[18] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation[C]// International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer International Publishing, 2015: 234-241.
[19] Yang M, Yu K, Zhang C, et al. DenseASPP for semantic segmentation in street scenes[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018: 3684-3692.
[20] Fu J, Liu J, Tian H, et al. Dual attention network for scene segmentation[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USA: IEEE, 2019: 3141-3149.
[21] Chen L C, Zhu Y, Papandreou G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation[C]// Proceedings of the European Conference on Computer Vision (ECCV), Munich. ACM, 2018: 833-851.
[22] Yuan Y, Huang L, Guo J, et al. OCNet: Object context network for scene parsing[EB/OL]. arXiv:1809.00916, 2018. https://arxiv.org/abs/1809.00916.pdf.
[23] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[EB/OL]. arXiv:1409.1556. https://arxiv.org/abs/1409.1556.pdf.
[24] Chen L C, Papandreou G, Schroff F, et al. Rethinking atrous convolution for semantic image segmentation[EB/OL]. arXiv:1706.05587. https://arxiv.org/abs/1706.05587.pdf.
[25] Chollet F. Xception: Deep learning with depthwise separable convolutions[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA: IEEE, 2017: 1800-1807.
[26] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE, 2016: 770-778.
[27] Huang Q, Sun J, Ding H, et al. Robust liver vessel extraction using 3D U-Net with variant dice loss function[J]. Computers in Biology and Medicine, 2018, 101: 153-162. pmid:30144657.
[28] Lin T Y, Goyal P, Girshick R, et al. Focal loss for dense object detection[C]// 2017 IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017: 2999-3007.