Remote Sensing for Natural Resources, 2024, Vol. 36, Issue 4: 210-217. DOI: 10.6046/zrzyyg.2023150
A comparative study on semantic segmentation-orientated deep convolutional networks for remote sensing image-based farmland classification: A case study of the Hetao irrigation district
SU Tengfei
College of Water Conservancy and Civil Engineering, Inner Mongolia Agricultural University, Hohhot 010018, China
Abstract  

In modern agricultural management, the spatial distribution of crop types is key information about agricultural conditions, and identifying crop types from satellite remote sensing imagery is a fundamental means of acquiring it. Although many algorithms exist for identifying surface features in remote sensing imagery, reliable farmland classification remains challenging. This study selected three representative semantic segmentation-orientated deep convolutional models, namely UNet, ResUNet, and SegNext, and compared their performance in crop classification using Gaofen-2 satellite images of the Hetao irrigation district. Based on the three algorithms, nine network models of varying complexity were built to analyze how different network structures perform in classifying crops in farmland from remote sensing imagery, thereby providing optimization insights and an experimental basis for future research on related models. The experimental results indicate that the six-layer UNet achieved the highest identification accuracy (88.74%), while the six-layer SegNext yielded the lowest (84.33%). ResUNet had the highest complexity but suffered from serious overfitting on the dataset used in this study, and it was also markedly less computationally efficient than the other two model types.
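The comparison rests on two simple quantities: the overall accuracy of each classified map and the size of each network. The sketch below is a minimal illustration under stated assumptions, not the paper's evaluation code; the function names and the ignore-label convention are hypothetical.

```python
# Minimal sketch of the two quantities the comparison relies on: overall
# accuracy of a predicted crop map against a reference map, and the trainable
# parameter count used as a simple complexity measure. Arrays and models here
# are hypothetical placeholders.
import numpy as np
import torch.nn as nn

def overall_accuracy(pred: np.ndarray, ref: np.ndarray, ignore_label: int = 255) -> float:
    """Fraction of labelled pixels whose predicted class matches the reference."""
    valid = ref != ignore_label            # mask out unlabelled pixels
    return float((pred[valid] == ref[valid]).mean())

def parameter_count(model: nn.Module) -> int:
    """Total number of trainable parameters in a network."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
```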

Keywords: deep convolution; semantic segmentation; crop field classification; Hetao irrigation district
CLC number: TP79
Issue Date: 23 December 2024
Cite this article:   
Tengfei SU. A comparative study on semantic segmentation-orientated deep convolutional networks for remote sensing image-based farmland classification: A case study of the Hetao irrigation district[J]. Remote Sensing for Natural Resources, 2024, 36(4): 210-217.
URL:  
https://www.gtzyyg.com/EN/10.6046/zrzyyg.2023150     OR     https://www.gtzyyg.com/EN/Y2024/V36/I4/210
Fig.1  Backbone of a UNet structure with four layers
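For readers unfamiliar with the encoder-decoder layout sketched in Fig.1, the PyTorch snippet below outlines a four-layer UNet backbone. The channel widths, the number of input bands (four, for Gaofen-2 multispectral data), and the number of crop classes are illustrative assumptions, not the configuration reported in the paper.

```python
# A minimal sketch of a four-layer UNet backbone in the spirit of Fig.1.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch normalization and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class UNet4(nn.Module):
    def __init__(self, in_ch=4, n_classes=6, base=32):   # band/class counts are assumptions
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8]        # four encoder levels
        self.encoders = nn.ModuleList()
        prev = in_ch
        for c in chs:
            self.encoders.append(conv_block(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)
        self.upsamples, self.decoders = nn.ModuleList(), nn.ModuleList()
        for c_skip, c_deep in zip(chs[-2::-1], chs[:0:-1]):   # decode from deep to shallow
            self.upsamples.append(nn.ConvTranspose2d(c_deep, c_skip, 2, stride=2))
            self.decoders.append(conv_block(c_skip * 2, c_skip))
        self.head = nn.Conv2d(chs[0], n_classes, 1)           # per-pixel class scores

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.encoders):
            x = enc(x)
            if i < len(self.encoders) - 1:                    # deepest level has no skip
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(self.upsamples, self.decoders, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))          # skip connection by concatenation
        return self.head(x)
```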
Fig.2  Multi-scale residual connection module in ResUNet
Layers            d
Layers 1-2        {1, 3, 15, 31}
Layers 3-4        {1, 3, 15}
Layers 5-7        {1}
Tab.1  Definition of the dilation parameter d of the atrous convolution in ResUNet
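The module in Fig.2 combines residual shortcuts with parallel atrous convolutions whose dilation rates follow Tab.1. The snippet below is a minimal sketch of such a block; the branch structure and normalization choices are assumptions, and only the dilation sets are taken from Tab.1.

```python
# Sketch of a multi-scale residual block: one atrous (dilated) 3x3 convolution
# branch per dilation rate, summed with the identity shortcut.
import torch.nn as nn

class MultiScaleResBlock(nn.Module):
    def __init__(self, channels: int, dilations=(1, 3, 15, 31)):
        super().__init__()
        self.branches = nn.ModuleList()
        for d in dilations:
            # padding = d keeps the spatial size unchanged for a 3x3 dilated kernel
            self.branches.append(nn.Sequential(
                nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
            ))

    def forward(self, x):
        out = x                                   # identity shortcut
        for branch in self.branches:
            out = out + branch(x)                 # sum the multi-scale responses
        return out

# Following Tab.1, shallower layers would use the full set {1, 3, 15, 31},
# mid-depth layers {1, 3, 15}, and the deepest layers only {1}, e.g.:
# block = MultiScaleResBlock(64, dilations=(1, 3, 15))
```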
Fig.3  Multi-scale convolutional self-attention module in SegNext
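Fig.3 refers to the multi-scale convolutional attention used in SegNext, which aggregates context with depthwise strip convolutions of several kernel sizes and uses the result to reweight the input features. The sketch below follows the published SegNeXt design; the exact configuration of the model evaluated in this study may differ.

```python
# Sketch of a multi-scale convolutional attention module: depthwise strip
# convolutions gather multi-scale context, and a 1x1 convolution produces an
# attention map that multiplicatively reweights the input features.
import torch.nn as nn

def strip_conv(channels, k):
    """A pair of depthwise 1xk and kx1 convolutions approximating a kxk kernel."""
    return nn.Sequential(
        nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2), groups=channels),
        nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0), groups=channels),
    )

class MSCAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        self.branches = nn.ModuleList(strip_conv(channels, k) for k in (7, 11, 21))
        self.mix = nn.Conv2d(channels, channels, 1)   # channel mixing for the attention map

    def forward(self, x):
        attn = self.local(x)                          # local depthwise context
        attn = attn + sum(branch(attn) for branch in self.branches)
        return self.mix(attn) * x                     # attention-weighted features
```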
Fig.4  Study area and the remote sensing image dataset
Fig.5  Accuracy comparison for different deep convolutional semantic segmentation networks
Fig.6-1  The optimal classification results for the 3 types of algorithms
Fig.6-2  The optimal classification results for the 3 types of algorithms
Fig.7  The variation of overall accuracy for the 9 deep convolutional semantic segmentation networks during training
Fig.8  Comparison of the parameter sizes and computational loads of the 9 deep convolutional semantic segmentation networks