A comparative study on semantic segmentation-orientated deep convolutional networks for remote sensing image-based farmland classification: A case study of the Hetao irrigation district
SU Tengfei
College of Water Conservancy and Civil Engineering, Inner Mongolia Agricultural University, Hohhot 010018, China
In modern agricultural production management, the spatial distribution of crop types is important information about agricultural conditions, and identifying crop types from satellite remote sensing imagery is a fundamental means of acquiring it. Although various algorithms exist for identifying surface features from remote sensing imagery, reliable farmland classification remains challenging. This study selected three representative semantic segmentation-orientated deep convolutional models, i.e., UNet, ResUNet, and SegNext, and compared their performance in crop classification using Gaofen-2 satellite images of the Hetao irrigation district. From the three architectures, nine network models of varying complexity were built to analyze how different network structures perform in classifying farmland crops from remote sensing imagery, thereby providing optimization insights and an experimental basis for future research on related models. The experimental results indicate that the six-layer UNet achieved the highest identification accuracy (88.74%), whereas the six-layer SegNext yielded the lowest (84.33%). ResUNet had the highest complexity but exhibited serious over-fitting on the dataset used in this study. In terms of computational efficiency, ResUNet was markedly less efficient than the other two model types.
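The abstract reports a six-layer UNet as the best-performing model. As a rough illustration of what a depth-configurable U-Net of this kind looks like, the minimal PyTorch sketch below builds a plain encoder-decoder whose depth can be varied (e.g., to compare four-, five-, and six-layer variants); the channel widths, the four-band Gaofen-2 input, and the number of crop classes are assumptions for illustration, not details taken from the paper.

# Minimal sketch of a depth-configurable U-Net in PyTorch, for illustration only.
# Layer counts, channel widths, input bands, and class count are assumptions,
# not the exact configuration used in the study.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class UNet(nn.Module):
    def __init__(self, in_channels=4, num_classes=5, depth=6, base_width=32):
        super().__init__()
        # Encoder widths double at each level; depth must be >= 2.
        widths = [base_width * 2 ** i for i in range(depth)]
        self.encoders = nn.ModuleList()
        ch = in_channels
        for w in widths:
            self.encoders.append(conv_block(ch, w))
            ch = w
        self.pool = nn.MaxPool2d(2)
        # Decoder: transposed convolution for upsampling, then a conv block
        # that fuses the upsampled feature with the matching encoder skip.
        self.upsamples = nn.ModuleList()
        self.decoders = nn.ModuleList()
        for i in range(depth - 1, 0, -1):
            self.upsamples.append(nn.ConvTranspose2d(widths[i], widths[i - 1], 2, stride=2))
            self.decoders.append(conv_block(widths[i - 1] * 2, widths[i - 1]))
        self.head = nn.Conv2d(widths[0], num_classes, 1)

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.encoders):
            x = enc(x)
            if i < len(self.encoders) - 1:  # the last block is the bottleneck
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(self.upsamples, self.decoders, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))
        return self.head(x)  # per-pixel class logits


if __name__ == "__main__":
    # Gaofen-2 multispectral imagery has 4 bands; the crop-class count is assumed.
    model = UNet(in_channels=4, num_classes=5, depth=6)
    logits = model(torch.randn(1, 4, 256, 256))
    print(logits.shape)  # torch.Size([1, 5, 256, 256])

Varying the depth argument (and, analogously, swapping the plain conv blocks for residual or attention-based ones) is one simple way to generate a family of models of differing complexity, in the spirit of the nine-model comparison described above.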
SU Tengfei. A comparative study on semantic segmentation-orientated deep convolutional networks for remote sensing image-based farmland classification: A case study of the Hetao irrigation district. Remote Sensing for Natural Resources, 2024, 36(4): 210-217.