Remote Sensing for Natural Resources    2025, Vol. 37 Issue (3) : 95-103     DOI: 10.6046/zrzyyg.2024031
A MobileNet-based lightweight cloud detection model
YE Wujian1, XIE Linfeng2, LIU Yijun1, WEN Xiaozhuo1, LI Yang1
1. School of Integrated Circuits, Guangdong University of Technology, Guangzhou 510006, China
2. School of Information Engineering, Guangdong University of Technology, Guangzhou 510006, China
Abstract  

The high computational complexity and large model sizes of existing cloud detection algorithms make them nearly infeasible to deploy on edge devices. To address this challenge, this study proposed a MobileNet-based lightweight cloud detection model. In the downsampling stage, a residual module based on the attention mechanism was employed to reduce model parameters through group convolution, and a channel shuffle mechanism and squeeze-and-excitation (SE) channel attention were integrated to enhance information exchange between channels. These designs reduced the number of parameters and the computational complexity while preserving the ability to extract salient features. In the upsampling stage, a RepConv module and an improved atrous spatial pyramid pooling (ASPP) module were used to strengthen the network's learning capability and its ability to capture image details and spatial information. Experimental results demonstrate that the proposed model achieves higher cloud detection accuracy with fewer parameters and lower model complexity, substantiating its practicality and feasibility.
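As a rough illustration of the downsampling design described above, the PyTorch sketch below combines a grouped 3×3 convolution, a channel shuffle step, and SE channel attention inside a residual block. The module names, channel count, group number, and reduction ratio are assumptions for illustration, not the authors' exact configuration; stride/downsampling handling is omitted.

```python
import torch
import torch.nn as nn


def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels so information can flow across convolution groups."""
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w).transpose(1, 2).contiguous()
    return x.view(n, c, h, w)


class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # squeeze: global average pooling
            nn.Conv2d(channels, channels // reduction, 1),  # reduce
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # expand
            nn.Sigmoid(),                                   # per-channel attention weights
        )

    def forward(self, x):
        return x * self.fc(x)


class LightweightResidualBlock(nn.Module):
    """Grouped 3x3 conv -> channel shuffle -> SE attention, plus a skip path."""

    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        self.groups = groups
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=groups, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.se = SEBlock(channels)

    def forward(self, x):
        out = self.conv(x)
        out = channel_shuffle(out, self.groups)  # exchange information between groups
        out = self.se(out)                       # re-weight channels by importance
        return out + x                           # residual connection


if __name__ == "__main__":
    y = LightweightResidualBlock(32)(torch.randn(1, 32, 56, 56))
    print(y.shape)  # torch.Size([1, 32, 56, 56])
```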

Keywords: cloud detection; MobileNet; attention mechanism; multi-scale feature; atrous spatial pyramid pooling (ASPP) module
CLC number: TP79
Issue Date: 01 July 2025
Cite this article:   
Wujian YE, Linfeng XIE, Yijun LIU, et al. A MobileNet-based lightweight cloud detection model[J]. Remote Sensing for Natural Resources, 2025, 37(3): 95-103.
URL:  
https://www.gtzyyg.com/EN/10.6046/zrzyyg.2024031 or https://www.gtzyyg.com/EN/Y2025/V37/I3/95
Input shape | Operator | Expansion factor t | Output channels c | Repeats n | Stride s
224²×3 | Conv2d 3×3 | - | 32 | 1 | 2
112²×32 | IRB | 1 | 16 | 1 | 1
112²×16 | IRB | 6 | 24 | 2 | 2
56²×24 | IRB | 6 | 32 | 3 | 2
28²×32 | IRB | 6 | 64 | 4 | 2
14²×64 | IRB | 6 | 96 | 3 | 1
14²×96 | IRB | 6 | 160 | 3 | 2
7²×160 | IRB | 6 | 320 | 1 | 1
7²×320 | Conv2d 1×1 | - | 1280 | 1 | 1
7²×1280 | AvgPool 7×7 | - | - | 1 | -
1×1×1280 | Linear | - | - | - | -
Tab.1  MobileNetV2 model structure
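For reference, a minimal PyTorch sketch of the inverted residual bottleneck (IRB) listed in Tab.1: a 1×1 expansion by factor t, a 3×3 depthwise convolution with the given stride, and a linear 1×1 projection, with a shortcut when the stride is 1 and the input and output channels match. Variable names and the usage example are illustrative.

```python
import torch
import torch.nn as nn


class InvertedResidual(nn.Module):
    """MobileNetV2 inverted residual bottleneck (the IRB rows in Tab.1)."""

    def __init__(self, in_ch: int, out_ch: int, stride: int, expand: int):
        super().__init__()
        hidden = in_ch * expand
        self.use_skip = stride == 1 and in_ch == out_ch
        layers = []
        if expand != 1:
            layers += [nn.Conv2d(in_ch, hidden, 1, bias=False),     # 1x1 expansion
                       nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True)]
        layers += [
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),  # 3x3 depthwise
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),                # linear 1x1 projection
            nn.BatchNorm2d(out_ch),
        ]
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_skip else out


# e.g. the third row of Tab.1: 112x112x16 -> 56x56x24 with t = 6, stride = 2
block = InvertedResidual(16, 24, stride=2, expand=6)
print(block(torch.randn(1, 16, 112, 112)).shape)  # torch.Size([1, 24, 56, 56])
```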
Fig.1  Overall structure of the network model
Fig.2  Structure diagram of lightweight residual module
Fig.3  Structure diagram of atrous spatial pyramid pooling
Fig.4  Structure diagram of improved atrous spatial pyramid pooling
Fig.5  Reparameterization process
Fig.6  38-Cloud patch images
Fig.7  False-color image
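The improved ASPP module itself is only shown as a figure here (Fig.3 and Fig.4); as background, the sketch below implements a standard ASPP head with parallel dilated 3×3 convolutions and an image-level pooling branch. The dilation rates and channel counts are assumptions, and the paper's specific improvements are not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ASPP(nn.Module):
    """Standard atrous spatial pyramid pooling head (not the improved variant)."""

    def __init__(self, in_ch: int, out_ch: int, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1, bias=False)]                                      # 1x1 branch
            + [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False) for r in rates]
        )
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(in_ch, out_ch, 1, bias=False)               # global context
        )
        self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1, bias=False)         # fuse branches

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=(h, w), mode="bilinear",
                               align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))


print(ASPP(320, 256)(torch.randn(1, 320, 7, 7)).shape)  # torch.Size([1, 256, 7, 7])
```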
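The reparameterization process in Fig.5 follows the RepVGG idea of fusing parallel branches into a single convolution at inference time. The sketch below demonstrates only the fusion arithmetic for a 3×3 branch, a 1×1 branch, and an identity branch; batch-normalization fusion is omitted and the shapes are arbitrary examples, so this is not the paper's exact RepConv module.

```python
import torch
import torch.nn.functional as F

out_ch, in_ch = 8, 8
w3 = torch.randn(out_ch, in_ch, 3, 3)      # 3x3 branch weights
w1 = torch.randn(out_ch, in_ch, 1, 1)      # 1x1 branch weights
w_id = torch.zeros(out_ch, in_ch, 3, 3)    # identity branch written as a 3x3 kernel
for i in range(out_ch):
    w_id[i, i, 1, 1] = 1.0

# Fuse: pad the 1x1 kernel to 3x3 and sum all branches into one 3x3 kernel.
w_fused = w3 + F.pad(w1, [1, 1, 1, 1]) + w_id

x = torch.randn(1, in_ch, 16, 16)
y_multi = (F.conv2d(x, w3, padding=1)      # 3x3 branch
           + F.conv2d(x, w1)               # 1x1 branch (no padding needed)
           + x)                            # identity branch
y_fused = F.conv2d(x, w_fused, padding=1)  # single fused 3x3 convolution
print(torch.allclose(y_multi, y_fused, atol=1e-4))  # expected: True
```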
Item | Configuration
Operating system | Linux
Programming language | Python 3.6.13
Framework | PyTorch 1.10.0 (cu112)
CPU | 2.0 GHz Xeon processor
GPU | GeForce RTX 2080 Ti, 11 GB
Tab.2  Experimental environment
Fig.8  Accuracy curves with respect to the number of iterations
No. | False-color image | Ground truth | MobileNetV2 | ICNet | DFANet | LEDNet | Proposed model
(Rows 1-6: qualitative segmentation results shown as images)
Tab.3  Cloud detection segmentation results of different models
Model | Accuracy | Recall | Precision | F1-score | Jaccard
MobileNetV2 | 93.28 | 87.29 | 87.66 | 85.79 | 79.11
ICNet | 91.35 | 92.34 | 83.91 | 85.07 | 79.12
DFANet | 92.23 | 88.17 | 86.13 | 84.89 | 78.20
LEDNet | 91.62 | 88.76 | 85.68 | 84.39 | 77.96
Proposed model | 93.24 | 90.82 | 87.03 | 86.27 | 80.49
Tab.4  Comparison of performance indicators of different models (%)
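For clarity, the sketch below shows how the per-pixel metrics reported in Tab.4 are typically computed from a binary cloud mask (cloud = 1, clear = 0). The arrays are dummy data and the paper's exact evaluation protocol may differ.

```python
import numpy as np

pred = np.random.rand(384, 384) > 0.5   # predicted cloud mask (dummy data)
gt = np.random.rand(384, 384) > 0.5     # ground-truth cloud mask (dummy data)

tp = np.logical_and(pred, gt).sum()     # cloud pixels correctly detected
tn = np.logical_and(~pred, ~gt).sum()   # clear pixels correctly detected
fp = np.logical_and(pred, ~gt).sum()    # clear pixels labeled as cloud
fn = np.logical_and(~pred, gt).sum()    # cloud pixels missed

accuracy = (tp + tn) / (tp + tn + fp + fn)
recall = tp / (tp + fn)
precision = tp / (tp + fp)
f1 = 2 * precision * recall / (precision + recall)
jaccard = tp / (tp + fp + fn)           # intersection over union for the cloud class

print(f"Acc {accuracy:.4f}  Rec {recall:.4f}  Prec {precision:.4f}  "
      f"F1 {f1:.4f}  Jaccard {jaccard:.4f}")
```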
Model | Parameters/10⁶ | GFLOPs
MobileNetV2 | 4.68 | 1.93
ICNet | 26.24 | 5.70
DFANet | 2.15 | 1.00
LEDNet | 0.92 | 3.54
Proposed model | 1.43 | 1.04
Tab.5  Comparison of parameter quantity and complexity of different models
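As a note on how figures like those in Tab.5 are usually obtained, the sketch below counts parameters directly from a model, while GFLOPs are normally estimated with a profiling tool. The torchvision MobileNetV2 used here is only a stand-in, not the paper's network.

```python
import torch
import torchvision

model = torchvision.models.mobilenet_v2()            # stand-in model for illustration
n_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {n_params / 1e6:.2f} M")         # roughly 3.5 M for this stand-in
# GFLOPs are usually estimated with a profiling tool such as thop or ptflops,
# e.g. thop.profile(model, inputs=(torch.randn(1, 3, 224, 224),)).
```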
Model | A | B | C | Accuracy | Recall | Precision | F1-score | Jaccard
MobileNetV2 |  |  |  | 93.28 | 87.29 | 87.66 | 85.79 | 79.11
Model A |  |  |  | 92.77 | 85.51 | 89.53 | 84.95 | 78.37
Model B |  |  |  | 92.93 | 88.09 | 88.50 | 85.42 | 79.33
Proposed model |  |  |  | 93.24 | 90.82 | 87.03 | 86.27 | 80.49
Tab.6  Comparison of ablation experiment results(%)
[1] Lu C L, Bai Z G. Characteristics and typical applications of GF-1 satellite[C]// 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS). IEEE, 2015:1246-1249.
[2] Xu X, Shi Z W, Pan B. ℓ0-based sparse hyperspectral unmixing using spectral information and a multi-objectives formulation[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2018,141:46-58.
[3] Wei J, Huang W, Li Z Q, et al. Estimating 1-km-resolution PM2.5 concentrations across China using the space-time random forest approach[J]. Remote Sensing of Environment, 2019,231:111221.
[4] Wang Z G, Kang Q, Xun Y J, et al. Military reconnaissance application of high-resolution optical satellite remote sensing[C]// International Symposium on Optoelectronic Technology and Application 2014: Optical Remote Sensing Technology and Applications. SPIE, 2014,9299:301-305.
[5] Magney T S, Vierling L A, Eitel J U H, et al. Response of high frequency Photochemical Reflectance Index (PRI) measurements to environmental conditions in wheat[J]. Remote Sensing of Environment, 2016,173:84-97.
[6] Zhang Y C, Rossow W B, Lacis A A, et al. Calculation of radiative fluxes from the surface to top of atmosphere based on ISCCP and other global data sets:Refinements of the radiative transfer model and the input data[J]. Journal of Geophysical Research:Atmospheres, 2004,109:D19105.
[7] Deng D Z. Deep learning-based cloud detection method for multi-source satellite remote sensing images[J]. Remote Sensing for Natural Resources, 2023, 35(4):9-16. doi:10.6046/zrzyyg.2022317.
[8] Liu Z H, Wu Y L. A review of cloud detection methods in remote sensing images[J]. Remote Sensing for Land and Resources, 2017, 29(4):6-12. doi:10.6046/gtzyyg.2017.04.02.
[9] Liu X B, Liu P, Cai Z H, et al. Research progress of optical remote sensing image object detection based on deep learning[J]. Acta Automatica Sinica, 2021, 47(9):2078-2089.
[10] He Q B, Sun X, Yan Z Y, et al. DABNet: Deformable contextual and boundary-weighted network for cloud detection in remote sensing images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021,60:5601216.
[11] Wu X, Shi Z W. Scene aggregation network for cloud detection on remote sensing imagery[J]. IEEE Geoscience and Remote Sensing Letters, 2022,19:1000505.
[12] Liu G J, Wang G H, Bi W H, et al. Cloud detection algorithm of remote sensing image based on DenseNet and attention mechanism[J]. Remote Sensing for Natural Resources, 2022, 34(2):88-96. doi:10.6046/zrzyyg.2021128.
[13] Sun S, Meng Z M, Hu Z W, et al. Application of multi-scale and lightweight CNN in SAR image-based surface feature classification[J]. Remote Sensing for Natural Resources, 2023, 35(1):27-34. doi:10.6046/zrzyyg.2021421.
[14] Yin Y F, Cheng X, Shi F, et al. High-order spatial interactions enhanced lightweight model for optical remote sensing image-based small ship detection[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024,62:4201416.
[15] Howard A G, Zhu M L, Chen B, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications[J/OL]. arXiv, 2017(2017-04-17). https://arxiv.org/abs/1704.04861.
[16] Sandler M, Howard A, Zhu M L, et al. MobileNetV2:Inverted residuals and linear bottlenecks[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, 2018:4510-4520.
[17] Chen L C, Papandreou G, Schroff F, et al. Rethinking atrous convolution for semantic image segmentation[J/OL]. arXiv, 2017(2017-06-17). https://arxiv.org/abs/1706.05587.
[18] Ding X H, Zhang X Y, Ma N N, et al. RepVGG: Making VGG-style ConvNets great again[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021:13728-13737.
[19] Mohajerani S, Saeedi P. Cloud-Net: An end-to-end cloud detection algorithm for Landsat 8 imagery[C]// IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2019:1029-1032.
[20] Zhao H S, Qi X J, Shen X Y, et al. ICNet for real-time semantic segmentation on high-resolution images[J/OL]. arXiv, 2017(2017-04-27). http://arxiv.org/abs/1704.08545v2.
[21] Li H C, Xiong P F, Fan H Q, et al. DFANet: Deep feature aggregation for real-time semantic segmentation[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019:9514-9523.
[22] Wang Y, Zhou Q, Liu J, et al. LEDNet: A lightweight encoder-decoder network for real-time semantic segmentation[C]// 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019:1860-1864.