Remote Sensing for Natural Resources, 2022, Vol. 34, Issue 2: 242-250. DOI: 10.6046/zrzyyg.2021135
Recognition of cotton distribution based on GF-2 images and Unet model
ERPAN Anwar1, MAMAT Sawut1,2,3, MAIHEMUTI Balati1,2,3
1. College of Geography and Remote Sensing Sciences, Xinjiang University, Urumqi 830046, China
2. Xinjiang Key Laboratory of Oasis Ecology, Xinjiang University, Urumqi 830046, China
3. Key Laboratory for Smart City and Environment Modelling of Higher Education Institute, Xinjiang University, Urumqi 830046, China
Abstract  

Cotton, a typical crop of the Ugan-Kuqa River Delta Oasis, was taken as the research object to examine the applicability of deep learning to identifying the cotton distribution in arid areas and how the procedure can be optimized. Based on domestic GF-2 images and field survey data, a Unet deep learning model was adopted; the repeated convolution operations of the Unet network were fully exploited to mine the deep-level features of cotton in the remote sensing images and thereby improve the accuracy of cotton extraction. The results show that the Unet model extracts cotton, corn, and pepper in the study area better than the object-oriented method and the traditional machine learning algorithms. Its overall accuracy is 84.22% and its Kappa coefficient is 0.8047. Compared with the object-oriented method and the traditional machine learning algorithms SVM and RF, the overall accuracy increased by 7.94, 11.93, and 11.73 percentage points, respectively, and the Kappa coefficient increased by 10.13%, 14.72%, and 14.60%, respectively. In the classification results of the Unet model, both the producer's accuracy and the user's accuracy of cotton are higher than those of the other three methods, reaching 94.95% and 89.07%, respectively. Therefore, using the Unet model on GF-2 high-resolution remote sensing images is a feasible and reliable way to extract high-precision spatial distribution information of cotton in arid areas.
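As a rough illustration of the network described in the abstract (and in Fig.2 below), the following is a minimal U-Net-style encoder-decoder sketch in PyTorch. The 4-band GF-2 input (blue, green, red, near-infrared), the six output classes of Tab.1, and the layer widths are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal U-Net sketch in PyTorch. Band count, class count, depth, and
# channel widths are assumptions for illustration only.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU: the repeated unit of the U-Net."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, in_channels=4, n_classes=6):
        super().__init__()
        self.enc1 = conv_block(in_channels, 64)
        self.enc2 = conv_block(64, 128)
        self.enc3 = conv_block(128, 256)
        self.bottleneck = conv_block(256, 512)
        self.pool = nn.MaxPool2d(2)
        self.up3, self.dec3 = nn.ConvTranspose2d(512, 256, 2, stride=2), conv_block(512, 256)
        self.up2, self.dec2 = nn.ConvTranspose2d(256, 128, 2, stride=2), conv_block(256, 128)
        self.up1, self.dec1 = nn.ConvTranspose2d(128, 64, 2, stride=2), conv_block(128, 64)
        self.head = nn.Conv2d(64, n_classes, kernel_size=1)   # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)                          # skip connection 1
        e2 = self.enc2(self.pool(e1))              # skip connection 2
        e3 = self.enc3(self.pool(e2))              # skip connection 3
        b = self.bottleneck(self.pool(e3))
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))   # upsample, then concat skip
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Example: one hypothetical 256x256 GF-2 patch with 4 bands -> (1, 6, 256, 256) logits
logits = UNet()(torch.randn(1, 4, 256, 256))
```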

Keywords: deep learning; cotton recognition; Unet model; GF-2 images
CLC number: TP79
Corresponding author: MAMAT Sawut     E-mail: erpan_edu@163.com; korxat@xju.edu.cn
Issue Date: 20 June 2022
Cite this article:   
Anwar ERPAN, Sawut MAMAT, Balati MAIHEMUTI. Recognition of cotton distribution based on GF-2 images and Unet model[J]. Remote Sensing for Natural Resources, 2022, 34(2): 242-250.
URL:  
https://www.gtzyyg.com/EN/10.6046/zrzyyg.2021135     OR     https://www.gtzyyg.com/EN/Y2022/V34/I2/242
Fig.1  Schematic diagram of the geographic location, interpretation points, sample distribution and target categories of the study area
Fig.2  Unet model structure diagram
Fig.3  Variation of the training loss with the number of iterations for different optimizers
Fig.4  Loss function and accuracy change curves during model training
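Fig.3 and Fig.4 track how the training loss (and accuracy) evolve with the number of iterations under different optimizers. The sketch below shows the kind of training loop that produces such curves, assuming the UNet class above, a cross-entropy loss, and a user-supplied data loader; the optimizer settings and epoch count are hypothetical, not the paper's configuration.

```python
# Hedged sketch of a training loop that logs per-iteration loss and pixel
# accuracy, as plotted in Fig.3 and Fig.4. "loader" is assumed to yield
# (patches, labels) with shapes (B, 4, H, W) and (B, H, W) int64.
import torch
import torch.nn as nn

def train(model, loader, optimizer, epochs=50):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    criterion = nn.CrossEntropyLoss()          # per-pixel multi-class loss
    history = {"loss": [], "acc": []}
    model.to(device).train()
    for _ in range(epochs):
        for patches, labels in loader:
            patches, labels = patches.to(device), labels.to(device)
            optimizer.zero_grad()
            logits = model(patches)
            loss = criterion(logits, labels)
            loss.backward()
            optimizer.step()
            acc = (logits.argmax(dim=1) == labels).float().mean()
            history["loss"].append(loss.item())
            history["acc"].append(acc.item())
    return history

# Comparing optimizers as in Fig.3 (learning rates are hypothetical):
# model = UNet(); hist_adam = train(model, loader, torch.optim.Adam(model.parameters(), lr=1e-3))
# model = UNet(); hist_sgd  = train(model, loader, torch.optim.SGD(model.parameters(), lr=1e-2))
```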
Land type            Unet            Object-oriented    SVM             RF
                     PA/%    UA/%    PA/%    UA/%       PA/%    UA/%    PA/%    UA/%
Cotton               94.95   89.07   69.31   79.86      64.36   79.08   69.48   72.86
Corn                 94.81   91.38   88.88   94.81      68.75   65.90   68.92   67.00
Pepper               81.69   91.55   47.06   57.54      69.69   59.16   58.30   45.44
Orchard              68.30   79.70   74.05   56.51      59.50   64.24   57.33   67.33
Woodland             61.25   68.30   63.82   87.39      62.21   63.01   64.91   72.24
Other                82.36   72.06   98.23   77.43      97.76   88.45   98.73   88.46
Overall accuracy/%   84.22           76.28              72.29           72.49
Kappa coefficient    0.8047          0.7034             0.6575          0.6587
Tab.1  Accuracy evaluation of the classification results (PA: producer's accuracy; UA: user's accuracy)
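For reference, the quantities reported in Tab.1 (producer's accuracy, user's accuracy, overall accuracy, and the Kappa coefficient) follow from a class-by-class confusion matrix. The sketch below shows one common way to compute them; the NumPy usage and the rows-as-reference convention are assumptions, and no values from the paper are embedded in the code.

```python
# Confusion-matrix metrics behind Tab.1: PA, UA, overall accuracy, Kappa.
# Convention assumed here: cm[i, j] = pixels of true class i predicted as class j.
import numpy as np

def accuracy_metrics(cm):
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    diag = np.diag(cm)
    pa = diag / cm.sum(axis=1)       # producer's accuracy per class
    ua = diag / cm.sum(axis=0)       # user's accuracy per class
    oa = diag.sum() / n              # overall accuracy
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2   # expected chance agreement
    kappa = (oa - pe) / (1 - pe)
    return pa, ua, oa, kappa
```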
Fig.5  Classification results in the study area
Crop       Unet     Object-oriented   SVM      RF
Cotton     0.8957   0.5478            0.7172   0.6755
Corn       0.8479   0.8195            0.5610   0.5770
Pepper     0.8269   0.5020            0.3599   0.4091
Orchard    0.7441   0.4143            0.5110   0.5041
Woodland   0.5458   0.1947            0.2808   0.3038
Other      0.5201   0.2007            0.2702   0.2941
Mean       0.7301   0.4465            0.4500   0.4606
Tab.2  IoU statistics of each class in the local areas
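The IoU values in Tab.2 measure, class by class, the overlap between a method's predicted mask and the reference mask. A minimal sketch of that computation, assuming integer label maps with the class order of Tab.2, is given below.

```python
# Per-class intersection-over-union (IoU) as summarized in Tab.2.
# pred and ref are assumed to be integer label maps of identical shape.
import numpy as np

def per_class_iou(pred, ref, n_classes=6):
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, ref == c).sum()
        union = np.logical_or(pred == c, ref == c).sum()
        ious.append(inter / union if union else float("nan"))
    return ious   # the mean of these values gives the "Mean" row of Tab.2
```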
Region | False-color image | Unet result | Object-oriented result | SVM result | RF result
Tab.3  Visual comparison of classification results in local areas (image panels not reproduced here)