 
Remote Sensing for Natural Resources    2024, Vol. 36 Issue (3) : 216-224     DOI: 10.6046/zrzyyg.2023094
Identifying discolored trees infected with pine wilt disease using DSSN-based UAV remote sensing
ZHANG Ruirui1,2,3(), XIA Lang1,2,3, CHEN Liping1,2,3(), DING Chenchen1,2,3, ZHENG Aichun4, HU Xinmiao4, YI Tongchuan1,2,3, CHEN Meixiang1,2,3, CHEN Tianen1,2,3,5
1. Beijing Research Center of Intelligent Equipment for Agriculture, Beijing Academy of Agricultural and Forestry Sciences, Beijing 100097, China
2. National Research Center of Intelligent Equipment for Agriculture, Beijing Academy of Agricultural and Forestry Sciences, Beijing 100097, China
3. National Center for International Research on Agricultural Aerial Application Technology, Beijing Academy of Agricultural and Forestry Sciences, Beijing 100097, China
4. Nanjing Pukou District Forestry Station, Nanjing 211899, China
5. Nongxin (Nanjing) Intelligent Agricultural Research Institute Co., Ltd., Nanjing 211899, China
Abstract  

Pine wilt disease (PWD) is a major disease endangering forest resources in China. Investigating deep semantic segmentation network (DSSN)-based unmanned aerial vehicle (UAV) remote sensing identification can improve the identification accuracy of discolored trees infected with PWD and provide technical support for protecting and enhancing forest resource quality. Focusing on the pine forest of Laoshan Mountain in Qingdao, this study obtained images of suspected discolored trees through aerial photography with a fixed-wing UAV. From the 2 688 images acquired, 28 800 training samples were obtained through manual labeling and sample amplification. Four deep semantic segmentation models, namely the fully convolutional network (FCN), U-Net, DeepLabV3+, and the object context network (OCNet), were assessed using recall, precision, IoU, and F1 score. The results indicate that all four models can effectively identify discolored trees infected with PWD, with no significant false alarms, and can efficiently distinguish surface features with similar colors, such as rocks and yellow bare soil. Overall, DeepLabV3+ outperformed the other three models, with an IoU of 0.711 and an F1 score of 0.829, whereas FCN exhibited the lowest segmentation accuracy, with an IoU of 0.699 and an F1 score of 0.812. DeepLabV3+ was the least time-consuming in training, requiring merely 27.2 ms per image, while FCN was the least time-consuming in prediction, needing only 7.2 ms per image; however, FCN exhibited the lowest edge segmentation accuracy for discolored trees. Three DeepLabV3+ models built with ResNet50, ResNet101, and ResNet152 as front-end feature extraction networks achieved IoUs of 0.711, 0.702, and 0.702 and F1 scores of 0.829, 0.822, and 0.820, respectively.
DeepLabV3+ surpassed DeepLabV3 in the identification accuracy of discolored trees, with the latter showing an IoU of 0.701 and an F1 score of 0.812. The training results revealed that DeepLabV3+ exhibited the highest identification accuracy for discolored trees, while the choice of ResNet feature extraction network had only a minor impact on accuracy. The encoder-decoder structure introduced by DeepLabV3+ significantly improves the segmentation accuracy of DeepLabV3, yielding more detailed edges. Therefore, DeepLabV3+ is more suitable for identifying discolored trees infected with PWD.
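The accuracy metrics reported above (recall, precision, IoU, F1) are computed per pixel from binary masks. A minimal sketch in Python/NumPy; the toy masks and the function name are illustrative, not taken from the paper's code:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise recall, precision, IoU, and F1 for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # correctly detected pixels
    fp = np.logical_and(pred, ~truth).sum()   # false alarms
    fn = np.logical_and(~pred, truth).sum()   # missed pixels
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    iou = tp / (tp + fp + fn)                 # intersection over union
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, iou, f1

# Toy 4x4 masks: prediction covers 2 of 4 true pixels plus 1 false alarm
truth = np.zeros((4, 4), int); truth[1:3, 1:3] = 1
pred = np.zeros((4, 4), int);  pred[1:3, 1:2] = 1; pred[0, 0] = 1
print(segmentation_metrics(pred, truth))  # (recall, precision, IoU, F1)
```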

Keywords UAV remote sensing      discolored tree      deep learning     
CLC number: TP79
Issue Date: 03 September 2024
Cite this article:   
Ruirui ZHANG, Lang XIA, Liping CHEN, et al. Identifying discolored trees infected with pine wilt disease using DSSN-based UAV remote sensing[J]. Remote Sensing for Natural Resources, 2024, 36(3): 216-224.
URL:  
https://www.gtzyyg.com/EN/10.6046/zrzyyg.2023094     OR     https://www.gtzyyg.com/EN/Y2024/V36/I3/216
Fig.1  Location of the study area
Fig.2  UAV image and training samples
Model | Source | Parameters | Front-end network | Key features
FCN | Long et al., 2015[15] | 15 305 667 | Xception[25] | Milestone segmentation network
U-Net | Ronneberger et al., 2015[18] | 26 355 169 | ResNet[26] | Symmetric segmentation network
DeepLabV3+ | Chen et al., 2018[21] | 74 982 817 | ResNet | Atrous convolution, ASPP
OCNet | Yuan et al., 2018[22] | 36 040 105 | ResNet | Attention mechanism
Tab.1  Parameters of deep learning models
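The atrous (dilated) convolution listed for DeepLabV3+ in Tab.1 spaces the kernel taps `rate` samples apart, enlarging the receptive field without adding weights. A 1-D sketch for illustration only (function name and shapes are assumptions, not the models' actual implementation):

```python
import numpy as np

def atrous_conv1d(x, kernel, rate):
    """1-D atrous (dilated) convolution, 'valid' mode: the k taps cover
    a span of (k-1)*rate+1 input samples instead of k."""
    k = len(kernel)
    span = (k - 1) * rate + 1              # receptive field per output sample
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * rate] for j in range(k))
    return out

x = np.arange(10, dtype=float)
print(atrous_conv1d(x, [1.0, 1.0, 1.0], rate=1))  # ordinary conv, span 3
print(atrous_conv1d(x, [1.0, 1.0, 1.0], rate=2))  # same 3 weights, span 5
```

With `rate=2` the same three weights see a five-sample window, which is how DeepLab-family models grow context without extra parameters or downsampling.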
Fig.3  Training loss and F1-score of each model
No. | UAV image | Ground truth | FCN result | U-Net result | DeepLabV3+ result | OCNet result
(Rows a to e: image panels, not reproduced in the text version)
Tab.2  Segmentation results of models
Model | IoU | F1 | Precision | Recall | Training/(ms·image-1) | Prediction/(ms·image-1)
FCN | 0.699 | 0.812 | 0.821 | 0.804 | 53.1 | 7.2
U-Net | 0.710 | 0.825 | 0.821 | 0.828 | 44.5 | 20.5
DeepLabV3+ | 0.711 | 0.829 | 0.826 | 0.833 | 27.2 | 14.8
OCNet | 0.706 | 0.820 | 0.824 | 0.817 | 47.6 | 16.9
Tab.3  Accuracies and time usages of models
No. | UAV image | Ground truth | DeepLabV3+ (ResNet50) result | DeepLabV3+ (ResNet101) result | DeepLabV3+ (ResNet152) result | DeepLabV3 (ResNet50) result
(Rows a to e: image panels, not reproduced in the text version)
Tab.4  Segmentation results of DeepLabV3+ with different backbones
Model | IoU | F1 | Precision | Recall
DeepLabV3+ (ResNet50) | 0.711 | 0.829 | 0.826 | 0.833
DeepLabV3+ (ResNet101) | 0.702 | 0.822 | 0.847 | 0.798
DeepLabV3+ (ResNet152) | 0.702 | 0.820 | 0.846 | 0.796
DeepLabV3 (ResNet50) | 0.701 | 0.812 | 0.813 | 0.811
Tab.5  Accuracy of models
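As a quick sanity check on Tab.5, each F1 value is the harmonic mean of its precision and recall columns, so the rows can be verified directly:

```python
# F1 = 2PR/(P+R); values taken from Tab.5, rounded to three decimals.
rows = {
    "DeepLabV3+/ResNet50":  (0.826, 0.833, 0.829),
    "DeepLabV3+/ResNet101": (0.847, 0.798, 0.822),
    "DeepLabV3+/ResNet152": (0.846, 0.796, 0.820),
    "DeepLabV3/ResNet50":   (0.813, 0.811, 0.812),
}
for name, (p, r, f1) in rows.items():
    assert round(2 * p * r / (p + r), 3) == f1, name
print("all F1 values consistent with precision/recall")
```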
[1] Proença D N, Grass G, Morais P V. Understanding pine wilt disease: Roles of the pine endophytic bacteria and of the bacteria carried by the disease-causing pinewood nematode[J]. MicrobiologyOpen, 2017, 6(2):e00415.
[2] Zhang R R, Xia L, Chen L P, et al. Recognition of wilt wood caused by pine wilt nematode based on U-Net network and unmanned aerial vehicle images[J]. Transactions of the Chinese Society of Agricultural Engineering, 2020, 36(12):61-68. (in Chinese)
[3] Xu X L, Tao H, Li C J, et al. Detection and location of pine wilt disease induced dead pine trees based on Faster R-CNN[J]. Transactions of the Chinese Society for Agricultural Machinery, 2020, 51(7):228-236. (in Chinese)
[4] Ye J R. Epidemic status of pine wilt disease in China and its prevention and control techniques and counter measures[J]. Scientia Silvae Sinicae, 2019, 55(9):1-10. (in Chinese)
[5] State Forestry and Grassland Administration. Announcement of the National Forestry and Grassland Administration (No.4 of 2020) (Pine wood nematode disease epidemic areas in 2020)[EB/OL]. [2020-03-16]. http://www.forestry.gov.cn/main/3457/20200326/145712092854308.html.
[6] Xu Q Y, Li Y, Tan J, et al. Information extraction method of mangrove forests based on GF-6 data[J]. Remote Sensing for Natural Resources, 2023, 35(1):41-48. doi:10.6046/zrzyyg.2022048. (in Chinese)
[7] Xia L, Zhang R, Chen L, et al. Evaluation of deep learning segmentation models for detection of pine wilt disease in unmanned aerial vehicle images[J]. Remote Sensing, 2021, 13(18):3594.
[8] Wu Q. Research on Bursaphelenchus xylophilus area detection based on remote sensing image[D]. Hefei: Anhui University, 2013. (in Chinese)
[9] Zeng Q, Sun H F, Yang Y L, et al. Precision comparison for pine wood nematode disease monitoring by UAV[J]. Journal of Sichuan Forestry Science and Technology, 2019, 40(3):92-95,114. (in Chinese)
[10] Iordache M D, Mantas V, Baltazar E, et al. A machine learning approach to detecting pine wilt disease using airborne spectral imagery[J]. Remote Sensing, 2020, 12(14):2280.
[11] Syifa M, Park S J, Lee C W. Detection of the pine wilt disease tree candidates for drone remote sensing using artificial intelligence techniques[J]. Engineering, 2020, 6(8):919-926.
[12] Xia L, Zhao F, Chen J, et al. A full resolution deep learning network for paddy rice mapping using Landsat data[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2022, 194:91-107.
[13] Hu J W, Wang Z P, Hu P. A review of pansharpening methods based on deep learning[J]. Remote Sensing for Natural Resources, 2023, 35(1):1-14. doi:10.6046/zrzyyg.2021433. (in Chinese)
[14] Jin Y H, Xu M L, Zheng J Y. A dead tree detection algorithm based on improved YOLOv4-tiny for UAV images[J]. Remote Sensing for Natural Resources, 2023, 35(1):90-98. doi:10.6046/zrzyyg.2022018. (in Chinese)
[15] Shelhamer E, Long J, Darrell T. Fully convolutional networks for semantic segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(4):640-651. doi:10.1109/TPAMI.2016.2572683.
[16] Zhao H, Shi J, Qi X, et al. Pyramid scene parsing network[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA: IEEE, 2017:6230-6239.
[17] Badrinarayanan V, Kendall A, Cipolla R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(12):2481-2495. doi:10.1109/TPAMI.2016.2644615.
[18] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation[C]// International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer International Publishing, 2015:234-241.
[19] Yang M, Yu K, Zhang C, et al. DenseASPP for semantic segmentation in street scenes[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE, 2018:3684-3692.
[20] Fu J, Liu J, Tian H, et al. Dual attention network for scene segmentation[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USA: IEEE, 2019:3141-3149.
[21] Chen L C, Zhu Y, Papandreou G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation[C]// Proceedings of the European Conference on Computer Vision (ECCV), Munich. ACM, 2018:833-851.
[22] Yuan Y, Huang L, Guo J, et al. OCNet: Object context network for scene parsing[EB/OL]. arXiv:1809.00916, 2018. https://arxiv.org/abs/1809.00916.pdf.
[23] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[EB/OL]. arXiv:1409.1556, 2014. https://arxiv.org/abs/1409.1556.pdf.
[24] Chen L C, Papandreou G, Schroff F, et al. Rethinking atrous convolution for semantic image segmentation[EB/OL]. arXiv:1706.05587, 2017. https://arxiv.org/abs/1706.05587.pdf.
[25] Chollet F. Xception: Deep learning with depthwise separable convolutions[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA: IEEE, 2017:1800-1807.
[26] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE, 2016:770-778.
[27] Huang Q, Sun J, Ding H, et al. Robust liver vessel extraction using 3D U-Net with variant dice loss function[J]. Computers in Biology and Medicine, 2018, 101:153-162. pmid:30144657.
[28] Lin T Y, Goyal P, Girshick R, et al. Focal loss for dense object detection[C]// 2017 IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017:2999-3007.