Remote Sensing for Natural Resources    2024, Vol. 36 Issue (3) : 248-258     DOI: 10.6046/zrzyyg.2023093
Identification and yield prediction of sugarcane in the south-central part of Guangxi Zhuang Autonomous Region, China based on multi-source satellite-based remote sensing images
LUO Wei1(), LI Xiuhua1,2(), QIN Huojuan1, ZHANG Muqing2, WANG Zeping3, JIANG Zhuhui4
1. School of Electrical Engineering, Guangxi University, Nanning 530004, China
2. Guangxi Key Laboratory of Sugarcane Biology, Guangxi University, Nanning 530004, China
3. Sugarcane Research Institute, Guangxi Academy of Agricultural Sciences, Nanning 530007, China
4. Guangxi Sugar Industry Group, Nanning 530022, China
Abstract  

This study aims to address the challenges in predicting sugarcane yield in Guangxi, including diverse crop types, the complexity of surveying sugarcane planting areas, and the difficulty of acquiring remote sensing images under changeable weather. To this end, an improved semantic segmentation algorithm based on Sentinel-2 images was proposed to automatically identify sugarcane planting areas, and an extraction method for representative spectral features was developed to build a sugarcane yield prediction model based on multi-temporal Sentinel-2 and Landsat8 images. First, an ECA-BiseNetV2 identification model for sugarcane planting areas was constructed by introducing an efficient channel attention (ECA) module into the lightweight BiseNetV2 network. The overall pixel classification accuracy reached 91.54%, and the precision of sugarcane pixel identification reached 95.57%. Then, multiple vegetation indices were extracted for different growth periods over the identified sugarcane planting areas, and the Landsat8-derived vegetation indices were converted into Sentinel-2-equivalent values using linear regression models to reduce the differences between the indices derived from the two satellites. Subsequently, the time series of each vegetation index was fitted with a cubic curve, and the fitted maxima were taken as the representative spectral features. Finally, yield prediction models were built using multiple machine learning algorithms. The results indicate that, on the test set, the decision tree model built with the fitted maximum values of the vegetation indices achieved an R² of 0.792, 4.3% higher than that (0.759) of the model built with the available actual maximum values. This method can therefore effectively alleviate the difficulty of building an accurate sugarcane yield prediction model when changeable weather leaves key growth periods without usable remote sensing images.

Keywords: semantic segmentation; vegetation index; sugarcane yield prediction; satellite remote sensing; time series
CLC number (ZTFLH): S127; TP751; TP79
Issue Date: 03 September 2024
Cite this article:   
Wei LUO,Xiuhua LI,Huojuan QIN, et al. Identification and yield prediction of sugarcane in the south-central part of Guangxi Zhuang Autonomous Region, China based on multi-source satellite-based remote sensing images[J]. Remote Sensing for Natural Resources, 2024, 36(3): 248-258.
URL:  
https://www.gtzyyg.com/EN/10.6046/zrzyyg.2023093     OR     https://www.gtzyyg.com/EN/Y2024/V36/I3/248
Fig.1  Distribution of the selected sugarcane regions
Number of samples   Maximum/(t·hm⁻²)   Minimum/(t·hm⁻²)   Mean/(t·hm⁻²)   Variance/(t·hm⁻²)²
64                  161.60             49.06              85.94           602.71
Tab.1  Statistical information of sugarcane yields
Sensitive band   Landsat8 wavelength/nm   Landsat8 resolution/m   Sentinel-2 wavelength/nm   Sentinel-2 resolution/m
Blue             450~510                  30                      492                        10
Green            530~590                  30                      560                        10
Red              640~670                  30                      665                        10
Near-infrared    850~880                  30                      833                        10
Tab.2  Parameters of the selected bands from satellite images
Fig.2  Satellite images of a sugarcane planting region in different growth periods
Image ID   Location             Imaging date   Cloud cover/%   Pixel resolution/m
T48QZL     Sugarcane region 1   2018-10-03     <5              10
T48QYL     Sugarcane region 1   2019-09-28     <5              10
T49QBF     Sugarcane region 2   2019-09-28     <5              10
T48QYK     Sugarcane region 3   2020-10-22     <5              10
Tab.3  Basic information of the satellite remote sensing images
Fig.3  Central coordinate distribution of the sugarcane fields in Liangqi farm
Fig.4  Architecture of the efficient channel attention module
Fig.5  Network architecture of ECA-BiseNetV2
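To make the attention mechanism in Fig.4 and Fig.5 concrete, below is a minimal PyTorch sketch of an efficient channel attention (ECA) block: a global average pool summarizes each channel, a 1D convolution models local cross-channel interaction, and a sigmoid gate reweights the feature map. The class name, the adaptive kernel-size parameters, and how the block would be wired into BiseNetV2 are illustrative assumptions, not the authors' exact implementation.

```python
# A minimal sketch of an ECA block (assumed layer names and parameters).
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Adaptive 1D kernel size derived from the channel count (forced odd).
        t = int(abs(math.log2(channels) / gamma + b / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from a backbone branch.
        y = self.pool(x)                       # (B, C, 1, 1) channel descriptor
        y = y.squeeze(-1).transpose(-1, -2)    # (B, 1, C) for 1D conv over channels
        y = self.conv(y)                       # local cross-channel interaction
        y = y.transpose(-1, -2).unsqueeze(-1)  # back to (B, C, 1, 1)
        return x * self.sigmoid(y)             # channel-wise reweighting
```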
Fig.6  Diagram of converting classification results to a mask
Vegetation index   Calculation formula
RVI     $\mathrm{RVI} = NIR / RED$
EVI     $\mathrm{EVI} = 2.5(NIR - RED) / (NIR + 6\,RED - 7.5\,BLUE + 1)$
ARVI    $\mathrm{ARVI} = [NIR - (2\,RED - BLUE)] / [NIR + (2\,RED - BLUE)]$
NDVI    $\mathrm{NDVI} = (NIR - RED) / (NIR + RED)$
MSAVI   $\mathrm{MSAVI} = [2\,NIR + 1 - \sqrt{(2\,NIR + 1)^2 - 8(NIR - RED)}] / 2$
OSAVI   $\mathrm{OSAVI} = (NIR - RED) / (NIR + RED + 0.16)$
Tab.4  Vegetation indices and calculation formulas
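As a quick illustration of how the indices in Tab.4 can be computed from band reflectances, here is a short numpy sketch; the function name and the assumption that the inputs are co-registered surface-reflectance arrays are illustrative rather than taken from the paper.

```python
# A sketch of computing the six vegetation indices in Tab.4 from reflectance arrays.
import numpy as np

def vegetation_indices(blue: np.ndarray, red: np.ndarray, nir: np.ndarray) -> dict:
    """Return the six indices used in this study as a dict of arrays."""
    rvi   = nir / red
    ndvi  = (nir - red) / (nir + red)
    evi   = 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)
    arvi  = (nir - (2 * red - blue)) / (nir + (2 * red - blue))
    msavi = (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2
    osavi = (nir - red) / (nir + red + 0.16)
    return {"RVI": rvi, "NDVI": ndvi, "EVI": evi,
            "ARVI": arvi, "MSAVI": msavi, "OSAVI": osavi}
```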
Group     Satellite    Imaging time (GMT)            Solar azimuth/(°)
Group 1   Sentinel-2   2019-09-23 03:31:36.497072    142.28436898
Group 1   Landsat8     2019-09-23 03:17:09.454461    134.87668723
Group 2   Sentinel-2   2019-09-25 03:21:28.373447    142.04909788
Group 2   Landsat8     2019-09-25 03:04:48.402126    136.09629800
Group 3   Sentinel-2   2019-11-10 03:41:53.132702    160.93562144
Group 3   Landsat8     2019-11-10 03:17:36.267949    152.18860114
Group 4   Sentinel-2   2020-04-27 03:21:37.616403    112.80234064
Group 4   Landsat8     2020-04-27 03:10:09.707394    108.81624432
Tab.5  Imaging information of four sets
Fig.7  Curves of loss function during training and overall accuracy in the test set
Model           Kappa    Overall accuracy/%   Precision/%   Recall/%   Inference time per image/ms
BiseNetV2       0.7986   90.43                94.62         89.87      48
ECA-BiseNetV2   0.8069   91.54                95.57         90.78      50
U-Net           0.6815   83.67                94.61         77.13      101
SegNet          0.6427   81.95                95.69         73.84      95
DeepLabV3+      0.8017   91.09                94.90         90.36      105
Fast-SCNN       0.6752   84.19                92.32         81.52      39
BiseNetV1       0.7448   87.68                94.27         85.51      55
Tab.6  Evaluating indicators of the best models
Fig.8  Output results of BiseNetV2 and ECA-BiseNetV2
Fig.9  ECA-BiseNetV2 identification diagram
Band            R²      MRE/%
Blue            0.885   12.26
Green           0.950   4.96
Red             0.977   5.37
Near-infrared   0.925   2.80
Tab.7  Difference analysis of the values of Landsat8 and Sentinel-2 bands
Vegetation index   Linear transformation model   R²      MRE before conversion/%   MRE after conversion/%
RVI                y = 0.9769x - 0.2009          0.994   6.71                      2.82
NDVI               y = 1.0447x - 0.0469          0.996   2.87                      1.03
EVI                y = 0.9617x + 0.0324          0.945   4.66                      3.84
MSAVI              y = 0.9689x + 0.0266          0.985   1.09                      0.99
ARVI               y = 1.0293x - 0.0146          0.999   2.53                      2.43
OSAVI              y = 1.0148x - 0.0134          0.993   1.90                      1.16
Tab.8  The linear transformation models of different vegetation indices and their transformation results in test sets
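The conversion behind Tab.8 can be sketched as an ordinary least-squares fit per index: Landsat8-derived values from co-dated image pairs are regressed onto the corresponding Sentinel-2 values, and the fitted line is then applied whenever only a Landsat8 observation is available. The sketch below assumes scikit-learn and uses the published NDVI coefficients purely as an example; variable and function names are illustrative.

```python
# A minimal sketch of fitting and applying the per-index linear conversion.
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_conversion(landsat_vals: np.ndarray, sentinel_vals: np.ndarray):
    """Fit y = ax + b mapping Landsat8 index values onto Sentinel-2 values."""
    model = LinearRegression().fit(landsat_vals.reshape(-1, 1), sentinel_vals)
    return model.coef_[0], model.intercept_

def landsat_to_sentinel_ndvi(ndvi_l8: np.ndarray) -> np.ndarray:
    """Example using the fitted NDVI coefficients from Tab.8 (y = 1.0447x - 0.0469)."""
    return 1.0447 * ndvi_l8 - 0.0469
```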
Fig.10  Fitting results of NDVI in 2014 for sugarcane regions 1—4
Vegetation index   R² range        MRE range/%
RVI                0.706~0.934     0.69~3.57
NDVI               0.612~0.849     0.43~3.29
EVI                0.657~0.897     1.67~2.98
MSAVI              0.732~0.953     0.35~1.66
ARVI               0.682~0.861     0.89~2.72
OSAVI              0.714~0.908     0.25~1.96
Tab.9  Error analysis of curve equations fitted values and time series data
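A possible implementation of the cubic-curve fitting evaluated in Tab.9 is sketched below: each field's vegetation-index time series is fitted with a third-order polynomial over day of year, and the maximum of the fitted curve within the observation window is taken as the representative spectral feature. The use of numpy.polyfit and the dense evaluation grid are assumptions for illustration.

```python
# A sketch of the cubic fitting step that yields the "fitted maximum" feature.
import numpy as np

def fitted_maximum(days: np.ndarray, index_values: np.ndarray) -> float:
    """Fit a cubic curve to one field's index time series and return its peak."""
    coeffs = np.polyfit(days, index_values, deg=3)        # third-order fit
    dense_days = np.linspace(days.min(), days.max(), 1000)
    fitted = np.polyval(coeffs, dense_days)
    return float(fitted.max())                            # fitted peak value
```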
Vegetation index   Pearson correlation coefficient
RVI                0.725
NDVI               0.718
EVI                0.424
MSAVI              0.657
ARVI               0.504
OSAVI              0.756
Tab.10  Correlation analysis between actual maximum values of vegetation indices and yields
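The screening step in Tab.10 and Tab.12 is a plain Pearson correlation between each index's maximum and the measured yields; a minimal sketch, assuming per-field aligned arrays, is:

```python
import numpy as np

def pearson_r(index_maxima: np.ndarray, yields: np.ndarray) -> float:
    """Pearson correlation between per-field index maxima and yields (t·hm⁻²)."""
    return float(np.corrcoef(index_maxima, yields)[0, 1])
```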
Model                    Training MRE/%   Training R²   Test MRE/%   Test R²
Logistic regression      8.23             0.622         10.34        0.621
Support vector machine   8.87             0.586         5.75         0.561
Decision tree            3.74             0.777         9.03         0.759
Random forest            3.79             0.769         10.80        0.758
Tab.11  Training results of each model using the actual maximum values
Vegetation index   Pearson correlation coefficient
RVI                0.773
NDVI               0.831
EVI                0.444
MSAVI              0.632
ARVI               0.535
OSAVI              0.757
Tab.12  Correlation analysis between the maximum fitting values of vegetation index and yields
Model                    Training MRE/%   Training R²   Test MRE/%   Test R²
Logistic regression      9.10             0.716         6.14         0.697
Support vector machine   9.30             0.561         8.84         0.559
Decision tree            3.65             0.820         8.59         0.792
Random forest            3.56             0.798         7.34         0.774
Tab.13  Training results of each model using the fitted maximum values
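To make the modelling step concrete, here is a minimal scikit-learn sketch of the two best-performing learners in Tab.11 and Tab.13 (decision tree and random forest), trained on the fitted index maxima and evaluated with R² and mean relative error; the train/test split ratio and hyperparameters are assumptions, not the authors' settings.

```python
# A sketch of training yield-prediction models on the fitted index maxima.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_absolute_percentage_error

def train_yield_models(X: np.ndarray, y: np.ndarray, seed: int = 0) -> dict:
    """X: (n_fields, 6) fitted index maxima; y: (n_fields,) yield in t·hm⁻²."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=seed)
    models = {
        "decision tree": DecisionTreeRegressor(max_depth=5, random_state=seed),
        "random forest": RandomForestRegressor(n_estimators=200, random_state=seed),
    }
    scores = {}
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        scores[name] = {"R2": r2_score(y_te, pred),
                        "MRE%": 100 * mean_absolute_percentage_error(y_te, pred)}
    return scores
```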