Abstract In the management of modern agricultural production, the spatial distribution of crop types is important information about agricultural conditions, and identifying crop types from satellite remote sensing imagery is a fundamental way of acquiring it. Although various algorithms exist for identifying surface features in remote sensing imagery, reliable farmland classification remains challenging. This study selected three representative deep convolutional models for semantic segmentation, i.e., UNet, ResUNet, and SegNext, and compared their performance in crop classification using Gaofen-2 satellite imagery of the Hetao irrigation district. From these three architectures, nine network models of varying complexity were built to analyze how different network structures perform when classifying farmland crops from remote sensing imagery, thereby providing optimization insights and an experimental basis for future research on related models. Experimental results indicate that the six-layer UNet achieved the highest identification accuracy (88.74%), while the six-layer SegNext yielded the lowest (84.33%). ResUNet had the highest complexity but over-fitted severely on the dataset used in this study, and it was also markedly less computationally efficient than the other two model types.
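The comparison above relies on encoder-decoder networks whose depth (number of encoder stages) is varied to obtain models of different complexity. As a rough illustration of that idea only, the following is a minimal sketch of a depth-configurable UNet, assuming PyTorch; the band count, class count, channel widths, and depth are illustrative placeholders, not the configurations evaluated in the study.

```python
# Minimal sketch of a depth-configurable UNet, assuming PyTorch.
# All hyperparameters below (bands, classes, channel widths, depth) are
# illustrative placeholders, not the configurations used in the study.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 conv + BatchNorm + ReLU layers, the basic UNet building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class UNet(nn.Module):
    """Encoder-decoder with skip connections; `depth` controls model complexity."""

    def __init__(self, in_ch: int = 4, n_classes: int = 5,
                 depth: int = 4, base_ch: int = 32):
        super().__init__()
        chs = [base_ch * 2 ** i for i in range(depth)]  # e.g. [32, 64, 128, 256]
        self.pool = nn.MaxPool2d(2)
        # Encoder stages.
        self.downs = nn.ModuleList()
        prev = in_ch
        for c in chs:
            self.downs.append(conv_block(prev, c))
            prev = c
        self.bottleneck = conv_block(chs[-1], chs[-1] * 2)
        # Decoder stages: upsample, concatenate the matching skip, convolve.
        self.ups, self.decs = nn.ModuleList(), nn.ModuleList()
        prev = chs[-1] * 2
        for c in reversed(chs):
            self.ups.append(nn.ConvTranspose2d(prev, c, kernel_size=2, stride=2))
            self.decs.append(conv_block(c * 2, c))
            prev = c
        self.head = nn.Conv2d(chs[0], n_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skips = []
        for down in self.downs:
            x = down(x)
            skips.append(x)
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.ups, self.decs, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return self.head(x)  # per-pixel class logits


if __name__ == "__main__":
    # A hypothetical 4-band multispectral tile; side length must be divisible by 2**depth.
    model = UNet(in_ch=4, n_classes=5, depth=4)
    logits = model(torch.randn(1, 4, 256, 256))
    print(logits.shape)  # torch.Size([1, 5, 256, 256])
```

Varying `depth`, and swapping the plain convolution blocks for residual or attention-based blocks, is in spirit how UNet-, ResUNet-, and SegNext-style variants of differing complexity can be instantiated and compared; the exact layer counts and training setup used in the study may differ.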
Keywords
deep convolution
semantic segmentation
crop field classification
Hetao irrigation district
Issue Date: 23 December 2024