Remote Sensing for Natural Resources, 2023, 35(4): 9-16. doi: 10.6046/zrzyyg.2022317

Technical Methods

Deep learning-based cloud detection method for multi-source satellite remote sensing images

DENG Dingzhu

Inner Mongolia Autonomous Region Surveying, Mapping and Geographic Information Center, Hohhot 010051, China

Editor in charge: CHEN Li

Received: 2022-08-01; Revised: 2022-09-08

About the author: DENG Dingzhu (1976-), male, senior engineer, mainly engaged in geographic information data processing, system construction, and remote sensing applications. Email: nmchddz@126.com


Abstract

Cloud detection, as a crucial step in preprocessing optical satellite images, plays a significant role in the subsequent application analysis. The increasingly enriched optical satellite remote sensing images pose a challenge in achieving quick cloud detection of numerous multi-source satellite remote sensing images. Given that conventional cloud detection exhibits low accuracy and limited universality, this study proposed a multi-scale feature fusion neural network model, i.e., the multi-source remote sensing cloud detection network (MCDNet). The MCDNet comprises a U-shaped architecture and a lightweight backbone network, and its decoder integrates multi-scale feature fusion and a channel attention mechanism to enhance model performance. The MCDNet model was trained using tens of thousands of globally distributed multi-source satellite images, covering commonly used satellite data like Google and Landsat data and domestic satellite data like GF-1, GF-2, and GF-5 data. Several classic semantic segmentation models were used for comparison with the MCDNet model in the experiment. The experimental results indicate that the MCDNet model exhibited superior performance in cloud detection, achieving detection accuracy of over 90% for all types of satellite data. Additionally, the MCDNet model was tested on the Sentinel data that were not used in training, yielding satisfactory cloud detection effects. This demonstrates the MCDNet model’s robustness and potential for use as a general model for cloud detection of medium- to high-resolution satellite images.

Keywords: cloud detection; deep learning; multi-source remote sensing; domestic satellite; convolutional neural network; attention mechanism


How to cite this article:

DENG Dingzhu. Deep learning-based cloud detection method for multi-source satellite remote sensing images[J]. Remote Sensing for Land & Resources, 2023, 35(4): 9-16. doi: 10.6046/zrzyyg.2022317

0 Introduction

In the 21st century, satellite remote sensing has been widely used as an important means of Earth observation. However, the global annual mean cloud cover is about 66% [1], meaning that more than half of routinely acquired optical satellite images are obstructed by clouds and yield little usable information; generating mask files through cloud detection is therefore essential for screening valid data. In recent years, with the continued advance of satellite remote sensing, data volumes have grown rapidly: by some estimates, hundreds of optical satellite scenes are received every day. Traditional cloud detection can no longer keep pace with large-scale data production, so developing fast, efficient, and accurate cloud detection methods is crucial for guaranteeing the quality and service efficiency of downstream remote sensing products [2]. Existing cloud detection methods for remote sensing imagery fall roughly into three categories: threshold-based methods, traditional machine learning methods, and convolutional neural network (CNN) based methods. Threshold-based algorithms are commonly used for cloud detection in multispectral and hyperspectral images [3-5]. Their basic principle is to exploit the reflectance differences between clouds and other objects across the visible to shortwave-infrared range and to identify and segment clouds using hand-crafted feature extraction rules [6-8]. The Fmask algorithm [3] and its improved versions [9-10] are typical representatives and have been widely applied to cloud detection for specific sensors, including Landsat [8], Terra/Aqua [11], Sentinel [12], HJ-1 [13], and GF-5 multispectral images [14]. These methods nevertheless have several shortcomings. First, the algorithms are usually designed for specific payloads with many bands and do not transfer to mainstream high-resolution satellites that provide only four bands. Second, per-pixel computation readily produces the "salt-and-pepper" effect [15-16]. Third, threshold selection typically relies on expert knowledge, making such methods hard to generalize to complex scenes [17-18].
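As a minimal illustration of the threshold-based family of methods (the single-band test and the 0.35 cutoff below are hypothetical, not taken from Fmask or any operational algorithm), a per-pixel reflectance test can be sketched as:

```python
import numpy as np

def threshold_cloud_mask(reflectance, threshold=0.35):
    """Per-pixel threshold test: pixels brighter than the cutoff are
    flagged as cloud. Illustrative only; real algorithms such as Fmask
    combine many band tests rather than a single made-up cutoff."""
    return reflectance > threshold

# A 3x3 toy scene: one bright (cloudy) pixel surrounded by dark land.
scene = np.array([[0.10, 0.12, 0.11],
                  [0.09, 0.80, 0.13],
                  [0.12, 0.11, 0.10]])
mask = threshold_cloud_mask(scene)
print(mask.sum())  # prints 1
```

Because every pixel is tested independently of its neighbours, isolated misclassifications produce exactly the "salt-and-pepper" effect described above.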

With the rise of machine learning, traditional machine learning algorithms such as artificial neural networks [19], support vector machines [20], and random forests [21] have been proposed for cloud detection [22-23]. Their advantage is that extraction rules and threshold settings no longer depend entirely on expert knowledge [24-25]. However, like threshold-based algorithms, they still struggle to overcome the "salt-and-pepper" effect, and there remains room for improvement in fusing spatial and spectral information.

Deep learning has been progressively applied in remote sensing, achieving remarkable breakthroughs in image fusion, image registration, scene classification, object detection, land cover classification, geological body identification, and more [26-27]. Unlike traditional machine learning methods, deep learning uses convolution operations to extract target features more effectively [28] and can jointly extract spatial and spectral information. In recent years, researchers have applied deep learning to satellite image cloud detection, usefully exploring the effectiveness of networks with different structures [15,18,23-24,29-32] and developing cloud detection methods for different payload types [33-38]. However, general-purpose deep learning models applicable to multiple mainstream satellites with differing payload parameters remain to be studied. The main difficulties lie in three aspects: first, acquiring representative data from multiple satellite types and labeling them to form training sample sets is costly; second, inconsistent band combinations across payloads pose a challenge for building a universal deep learning model; third, most deep learning models were proposed to solve computer vision problems and require targeted optimization when applied to remote sensing data.

This paper proposes a cloud detection model applicable to multi-source satellite imagery, termed the multi-source remote sensing cloud detection network (MCDNet). Built on a U-shaped structure with a lightweight backbone, it improves recognition accuracy through multi-scale feature fusion and a gating mechanism. Multi-source satellite images were collected worldwide, including both commonly used foreign satellite data such as Google and Landsat and domestic satellite data such as GF-1, GF-2, and GF-5. To keep the model general, only the three true-color bands of each image were retained when constructing the cloud detection dataset. Several classic semantic segmentation networks were selected for comparative experiments, and the results show that the proposed method has strong application potential for cloud detection.

1 Methods

1.1 Overall architecture of MCDNet

MCDNet adopts a U-shaped architecture consisting of three parts: an encoder, a decoder, and a multi-scale fusion module (Fig.1). The encoder consists of six DCB (depthwise convolutional block) modules (a combination of depthwise convolution, normalization layers, and 1×1 convolutions) and contains three downsampling layers in total; more downsampling layers were not used so as to preserve as much shallow spatial information as possible and thereby improve the boundary accuracy of the recognition results. Each DCB module is composed of one 3×3 depthwise convolution, two 1×1 standard convolutions, and activation layers. Unlike the bottleneck design of conventional lightweight models, the middle 1×1 convolution here has a larger number of kernels, forming an inverted bottleneck that is "thin at both ends and wide in the middle"; experiments verified that this structure reduces the number of model parameters without degrading overall performance. The beginning of the decoder follows the design of UNet [39]: during bottom-up upsampling, skip connections superimpose features from the encoder.
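A back-of-the-envelope comparison makes the parameter savings of the inverted-bottleneck DCB concrete. The 4× expansion factor and 64-channel width below are assumptions for illustration, since the paper does not report exact channel counts:

```python
def standard_conv_params(c_in, c_out, k=3):
    # Weights of one standard k x k convolution (biases ignored).
    return k * k * c_in * c_out

def dcb_params(c, expand=4, k=3):
    """Inverted-bottleneck block ("thin-wide-thin"): a k x k depthwise
    conv over c channels, a 1x1 conv expanding to expand*c channels,
    and a 1x1 conv projecting back to c channels (biases ignored)."""
    depthwise = k * k * c                 # one k x k filter per channel
    pw_expand = c * (expand * c)          # 1x1 expansion
    pw_project = (expand * c) * c         # 1x1 projection
    return depthwise + pw_expand + pw_project

c = 64
print(standard_conv_params(c, c))  # prints 36864
print(dcb_params(c))               # prints 33344
```

Even with a 4× channel expansion in the middle, the DCB uses fewer weights than a single standard 3×3 convolution at the same width, and far fewer than the two stacked 3×3 convolutions of a classic UNet block.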

Fig.1 Architecture diagram of MCDNet


1.2 Multi-scale fusion module

Downsampling in a CNN reduces the number of model parameters on the one hand and, on the other, gives the model depth while retaining a sufficiently large receptive field. Its greatest drawback is that the sampling process discards useful information, which manifests as poor boundary accuracy in the recognition results, as in early semantic segmentation networks such as the fully convolutional network (FCN). Combining features from the multiple scales produced by downsampling has proven effective in computer vision tasks. This paper proposes a multi-scale fusion module suited to the U-shaped architecture, which allows the model, during training, to weight the relevant scale features according to the scene being recognized and thus improve recognition performance. First, a conventional CBR module (a combination of convolution, normalization, and activation layers) convolves the features output by the decoder at each scale; all features are then upsampled to the original input size; finally, the features are stacked, and a channel attention module weights the effective features.
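The fusion step can be sketched at the shape level as follows (nearest-neighbour upsampling and three toy scales are assumptions for illustration; the CBR convolutions and the attention weighting of the real module are omitted):

```python
import numpy as np

def upsample_nearest(feat, factor):
    """Nearest-neighbour upsampling of an (H, W, C) feature map."""
    return feat.repeat(factor, axis=0).repeat(factor, axis=1)

def fuse_scales(features, target_size):
    """Upsample decoder features from several scales to the input
    resolution and stack them along the channel axis."""
    ups = [upsample_nearest(f, target_size // f.shape[0]) for f in features]
    return np.concatenate(ups, axis=-1)

# Toy decoder outputs at 1/1, 1/2, and 1/4 of an 8x8 input, 2 channels each.
feats = [np.zeros((8, 8, 2)), np.zeros((4, 4, 2)), np.zeros((2, 2, 2))]
fused = fuse_scales(feats, 8)
print(fused.shape)  # prints (8, 8, 6)
```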

The features entering the channel attention module first pass through a global pooling layer and are abstracted into a vector with C units (C being the number of channels). A multilayer perceptron applies a nonlinear transformation to this vector, and a Sigmoid function then maps the transformed vector into the range [0,1]; each value in the vector can now be regarded as the weight of the corresponding channel, and multiplying the original data by the weight vector weights each channel. The formula is:

Fc = σ{MLP[GAP(F)]}·F

where F is the input feature; Fc is the weighted feature; GAP denotes global average pooling; MLP denotes the multilayer perceptron; and σ is the Sigmoid activation.
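A minimal NumPy sketch of this channel attention computation, assuming a two-layer perceptron with a ReLU hidden activation and omitting biases:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(F, W1, W2):
    """F_c = sigmoid(MLP(GAP(F))) * F for an (H, W, C) feature map.
    W1, W2 are the weights of a two-layer perceptron (biases omitted)."""
    gap = F.mean(axis=(0, 1))            # global average pooling -> (C,)
    hidden = np.maximum(0.0, gap @ W1)   # ReLU hidden layer
    weights = sigmoid(hidden @ W2)       # per-channel weights in (0, 1)
    return F * weights                   # broadcast over H and W

rng = np.random.default_rng(0)
F = rng.random((8, 8, 4))                # toy feature map, C = 4
W1 = rng.standard_normal((4, 2))
W2 = rng.standard_normal((2, 4))
out = channel_attention(F, W1, W2)
print(out.shape)  # prints (8, 8, 4)
```

Each channel is scaled by a single learned weight, so informative channels can be emphasized and uninformative ones suppressed without changing the spatial layout of the features.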

1.3 Evaluation methods

Precision, recall, overall accuracy, F1 score, and intersection over union (IoU) were used as indicators of cloud detection accuracy. They are computed as follows:

P = TP/(TP + FP),

R = TP/(TP + FN),

OA = (TP + TN)/(TP + TN + FP + FN),

F1 = 2/(1/P + 1/R),

IoU = TP/(TP + FP + FN),

where P is precision; R is recall; OA is overall accuracy; F1 is the F1 score; IoU is the intersection over union; TP is the number of positive samples classified as positive; FN is the number of positive samples classified as negative; FP is the number of negative samples classified as positive; and TN is the number of negative samples classified as negative.
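These definitions translate directly into code; the confusion-matrix counts below are made up for illustration:

```python
def cloud_metrics(tp, fn, fp, tn):
    """Precision, recall, overall accuracy, F1 (harmonic mean of P and R),
    and IoU from the entries of a binary confusion matrix."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    oa = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 / (1 / p + 1 / r)
    iou = tp / (tp + fp + fn)
    return p, r, oa, f1, iou

# Toy confusion matrix: 90 true positives, 5 missed cloud pixels,
# 10 false alarms, 895 correctly rejected clear pixels.
p, r, oa, f1, iou = cloud_metrics(tp=90, fn=5, fp=10, tn=895)
print(round(p, 3), round(r, 3), round(iou, 3))  # prints 0.9 0.947 0.857
```

Note that IoU penalizes both false alarms and misses in a single ratio, which is why it is often the most discriminating of the five indicators for segmentation tasks.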

2 Multi-source satellite imagery and data processing

Representative open-source cloud detection datasets were collected, covering Google imagery, the Landsat series, and GF-1, GF-2, and GF-5 satellite imagery, with spatial resolutions ranging from 0.5 to 30 m (Tab.1). Because the datasets are organized somewhat differently, the original images were re-cropped into 256×256-pixel tiles by sequential cropping at a fixed interval. Since the GF-1 data volume is large, the GF-1 tiles were screened to balance the amount of data across spatial resolutions as far as possible: cloud-free pure-background tiles were removed, and 8,400 tiles were randomly selected from the remainder as training data. The final sample set contains 20,000 tiles (256×256×3).

Tab.1 Dataset for cloud detection from multi-source remote sensing images

No. | Satellite | Spatial resolution/m | Image size/pixels | Images | Tiles | Reference
1 | Google | 0.5~1.5 | 1 280×720 | 150 | 600 | [25]
2 | Landsat5/7/8 | 30 | 384×384 | 8 400 | 8 400 | [40]
3 | GF-1 | 16 | 1 200×1 300 | 4 168 | 5 000 | [41]
4 | GF-2 | 4 | 7 300×6 908 | 34 | 1 000 | [42]
5 | GF-5 | 30 | 2 008×2 083 | 480 | 5 000 | [37]



The image tiles were preprocessed before model training. The data were first statistically analyzed by payload type; negative values and abnormally high values were removed, and all data were normalized to the range [0,1]. After random shuffling, the data were divided 6:2:2 into training, validation, and test sets; all accuracy evaluations in the comparative experiments are based on the test set.
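The fixed-interval cropping and the 6:2:2 split can be sketched as follows (the stride being equal to the tile size and the random seed are assumptions, since the paper does not specify either):

```python
import numpy as np

def tile_image(image, size=256, stride=256):
    """Cut an (H, W, 3) image into size x size tiles at a fixed
    interval (stride = size gives a non-overlapping grid)."""
    h, w, _ = image.shape
    return [image[i:i + size, j:j + size]
            for i in range(0, h - size + 1, stride)
            for j in range(0, w - size + 1, stride)]

def split_622(n_samples, seed=42):
    """Shuffle sample indices and split them 6:2:2 into
    train / validation / test index arrays."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    n_train, n_val = int(0.6 * n_samples), int(0.2 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

tiles = tile_image(np.zeros((1200, 1300, 3)))   # a GF-1-sized scene
print(len(tiles))                               # prints 20 (4 rows x 5 cols)
train, val, test = split_622(20000)
print(len(train), len(val), len(test))          # prints 12000 4000 4000
```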

3 Experiments and analysis of results

3.1 Experimental settings

Several classic semantic segmentation networks were introduced as comparison models, including SegNet [43], UNet [39], PSPNet (pyramid scene parsing network) [44], BiSeNet (bilateral segmentation network) [45], HRNetV2 (high-resolution representation network v2) [46], DeepLabV3+ [47], and MFGNet (multiscale fusion gated network) [36]. All models were trained on the same dataset with the same training parameter settings. Experiments were conducted with the TensorFlow (1.13.1) framework on an NVIDIA Tesla V100 GPU. The adaptive learning rate optimizer Adam was used with an initial learning rate of 0.0005, and categorical cross-entropy served as the loss function. All models were trained for 30 epochs, and the best checkpoint of each was taken for comparison.

3.2 Comparative analysis of model performance

Fig.2 shows the training and validation accuracy during training. As training proceeds, the training accuracy curve rises rapidly while the loss curve falls rapidly, gradually stabilizing after about 20 epochs; throughout this period the training and validation accuracy curves follow the same trend, indicating that the model did not overfit.

Fig.2 Training curves of MCDNet


Tab.2 presents the experimental results of each model on the cloud detection task. Overall, all of the CNNs in the experiment are effective for cloud detection, each achieving recognition accuracy above 90%. MCDNet obtained the best scores on all evaluation indicators, with an overall accuracy of 0.97. Notably, MCDNet's recall of 0.95 is among the highest, indicating a low false-negative rate. The F1 score combines the evaluation of precision and recall and better represents overall model performance.

Tab.2 Evaluation for cloud detection on multi-source remote sensing images

Model | P | R | F1 | OA | IoU
SegNet | 0.89 | 0.83 | 0.86 | 0.93 | 0.75
PSPNet | 0.87 | 0.86 | 0.86 | 0.93 | 0.76
HRNetV2 | 0.94 | 0.85 | 0.89 | 0.94 | 0.80
UNet | 0.85 | 0.94 | 0.89 | 0.95 | 0.80
BiSeNet | 0.84 | 0.96 | 0.89 | 0.95 | 0.81
DeepLabV3+ | 0.91 | 0.90 | 0.91 | 0.95 | 0.83
MFGNet | 0.91 | 0.91 | 0.91 | 0.96 | 0.84
MCDNet | 0.93 | 0.95 | 0.94 | 0.97 | 0.89



IoU measures the overlap between the predicted results and the ground truth and is more persuasive for segmentation tasks. MCDNet's F1 score of 0.94 and IoU of 0.89 are clearly better than those of the other methods. Overall, the proposed model not only performs better than the other methods but also behaves more robustly across data from multiple satellites.

3.3 Ablation experiments

To further verify the effectiveness of the key structures in the proposed model, three ablation experiments were designed with MCDNet as the baseline (Tab.3). MCDNet-withoutAT is MCDNet with the attention mechanism removed; MCDNet-Xcep replaces the encoder with Xception; MCDNet-withoutDC replaces the depthwise convolutions in the model with standard convolutions. The results show that removing any of these structures degrades model performance; the channel attention fusion module has the largest impact, its removal lowering recall and IoU by 3 to 4 percentage points.

Tab.3 Evaluation for ablation experiments of MCDNet

Model | P | R | F1 | OA | IoU
MCDNet-withoutAT | 0.92 | 0.92 | 0.92 | 0.96 | 0.85
MCDNet-Xcep | 0.91 | 0.96 | 0.93 | 0.97 | 0.87
MCDNet-withoutDC | 0.92 | 0.95 | 0.93 | 0.97 | 0.88
MCDNet | 0.93 | 0.95 | 0.94 | 0.97 | 0.89



3.4 Visual evaluation of cloud detection results

To better demonstrate MCDNet's advantages in the cloud detection task, a group of satellite images covering all payload types and varying degrees of cloud cover was randomly selected from the test data. As shown in Tab.4, the test images include deserts, snow, forests, farmland, rivers, bare land, and other scenes, with varying cloud coverage and cloud types. The cloud masks predicted by MCDNet show that the model accurately identifies clouds in all scenes. Notably, confusable targets in the test data, such as snow, are also correctly identified, and the model handles thin clouds appropriately, indicating that training on massive multi-source satellite samples has allowed the model to fully learn the semantic representation of clouds.

Tab.4 Multi-source remote sensing true color images and MCDNet cloud detection results under different cloud coverage conditions

(Paired true-color images and cloud detection results for Landsat, GF-5, GF-2, GF-1, and Google imagery at 0%, 20%, 45%, and 80% cloud coverage; images omitted.)



3.5 Model generality test

Two key prerequisites for building a general cloud detection model are a large number of samples and a universally effective model design. Commonly used optical satellites have different application emphases and inconsistent channel counts, so previous cloud detection models were mostly custom-developed for specific payload characteristics and transfer poorly. This paper uses RGB true-color images as the input source: on the one hand, this maximizes sample quantity while reducing production cost; on the other, it tests whether unifying data from different payloads into three channels benefits model design. The experimental data include both DN-value data and reflectance data; after outlier removal, all data were normalized to [0,1], so data acquired by different sensors at different times have a relatively similar visual appearance (Fig.3). A CNN can then learn a general semantic representation for cloud detection from the distribution patterns, texture features, and spatial relationships of clouds in true-color images. To further verify this conclusion, data from a payload type absent from the training samples were used as the test target. The previously trained MCDNet model, with its weights, was applied directly to five Sentinel-2 scenes for cloud detection. As shown in Fig.3, the results remain accurate, with an F1 score of 0.90 and an IoU of 0.82.
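A sketch of the kind of per-band outlier clipping and rescaling described above, assuming percentile cutoffs (the paper states only that negative values and abnormally high values were removed before normalizing to [0,1]):

```python
import numpy as np

def to_model_input(image, low=1, high=99):
    """Clip a 3-band image (DN or reflectance) to its per-band 1st/99th
    percentiles and rescale to [0, 1], so data from different sensors
    share a similar value range. The percentile cutoffs are an
    assumption, not the paper's exact outlier rule."""
    out = np.empty_like(image, dtype=np.float64)
    for b in range(image.shape[-1]):
        band = image[..., b].astype(np.float64)
        lo, hi = np.percentile(band, [low, high])
        out[..., b] = np.clip((band - lo) / (hi - lo), 0.0, 1.0)
    return out

rng = np.random.default_rng(1)
dn = rng.integers(0, 1024, size=(32, 32, 3))  # toy 10-bit DN tile
x = to_model_input(dn)
print(x.min() >= 0.0 and x.max() <= 1.0)  # prints True
```

Because the model only ever sees three normalized channels, a sensor it was never trained on (here, Sentinel-2) can be fed through the same pipeline without any architectural change.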

Fig.3 Cloud detection results of MCDNet on Sentinel-2 images


4 Conclusions

This study proposed a multi-scale feature fusion neural network model that achieves high-accuracy cloud detection on multiple types of satellite imagery through a lightweight backbone design combined with multi-scale feature fusion and a channel attention mechanism. Compared with several classic semantic segmentation networks, it achieved the best detection performance. The following conclusions are drawn:

1) True-color imagery combined with a semantic segmentation network enables efficient and accurate cloud detection for medium- and high-spatial-resolution satellite imagery.

2) Applying a lightweight backbone design, multi-scale feature fusion, and an attention mechanism on top of the traditional U-shaped architecture effectively improves model performance, giving the proposed model a clear advantage over classic semantic segmentation models.

3) MCDNet was trained on tens of thousands of globally distributed samples and achieved recognition accuracy above 90% on multiple types of satellite imagery, showing good robustness and providing a reference for the design of general satellite cloud detection models.

However, this study experimented only with the GF-1, GF-2, and GF-5 domestic satellites. Future work will focus on applying the model to more types of domestic satellite data and progressively refining it toward a general model that meets operational production requirements.

References

[1] King M D, Platnick S, Menzel W P, et al. Spatial and temporal distribution of clouds observed by MODIS onboard the Terra and Aqua satellites[J]. IEEE Transactions on Geoscience and Remote Sensing, 2013, 51(7):3826-3852.
[2] Yu J, Yan B. Efficient solution of large-scale domestic hyperspectral data processing and geological application[C]//2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP). IEEE, 2017:1-4.
[3] Irish R R, Barker J L, Goward S N, et al. Characterization of the Landsat7 ETM+ automated cloud-cover assessment (ACCA) algorithm[J]. Photogrammetric Engineering and Remote Sensing, 2006, 72(10):1179-1188.
[4] Zhu Z, Woodcock C E. Object-based cloud and cloud shadow detection in Landsat imagery[J]. Remote Sensing of Environment, 2012, 118:83-94.
[5] Zhu Z, Wang S, Woodcock C E. Improvement and expansion of the Fmask algorithm: Cloud, cloud shadow, and snow detection for Landsats 4-7, 8, and Sentinel-2 images[J]. Remote Sensing of Environment, 2015, 159:269-277.
[6] Rossow W B, Garder L C. Cloud detection using satellite measurements of infrared and visible radiances for ISCCP[J]. Journal of Climate, 1993, 6(12):2341-2369.
[7] Gesell G. An algorithm for snow and ice detection using AVHRR data: An extension to the APOLLO software package[J]. International Journal of Remote Sensing, 1989, 10(4/5):897-905.
[8] Stowe L L, McClain E P, Carey R, et al. Global distribution of cloud cover derived from NOAA/AVHRR operational satellite data[J]. Advances in Space Research, 1991, 11(3):51-54.
[9] Qiu S, He B, Zhu Z, et al. Improving Fmask cloud and cloud shadow detection in mountainous area for Landsats 4-8 images[J]. Remote Sensing of Environment, 2017, 199:107-119.
[10] Qiu S, Zhu Z, He B. Fmask 4.0: Improved cloud and cloud shadow detection in Landsats 4-8 and Sentinel-2 imagery[J]. Remote Sensing of Environment, 2019, 231:111205.
[11] Zhu X, Helmer E H. An automatic method for screening clouds and cloud shadows in optical satellite time series in cloudy regions[J]. Remote Sensing of Environment, 2018, 214:135-153.
[12] Frantz D, Haß E, Uhl A, et al. Improvement of the Fmask algorithm for Sentinel-2 images: Separating clouds from bright surfaces based on parallax effects[J]. Remote Sensing of Environment, 2018, 215:471-481.
[13] Bian J, Li A, Liu Q, et al. Cloud and snow discrimination for CCD images of HJ-1A/B constellation based on spectral signature and spatio-temporal context[J]. Remote Sensing, 2016, 8(1):31.
[14] Ge S L, Dong S Y, Sun G Y, et al. Cloud detection algorithm for images of visual and infrared multispectral imager[J]. Aerospace Shanghai, 2019, 36(s2):204-209. (in Chinese)
[15] Zhan Y, Wang J, Shi J, et al. Distinguishing cloud and snow in satellite images via deep convolutional network[J]. IEEE Geoscience and Remote Sensing Letters, 2017, 14(10):1785-1789.
[16] Wang L, Chen Y, Tang L, et al. Object-based convolutional neural networks for cloud and snow detection in high-resolution multispectral imagers[J]. Water, 2018, 10(11):1666.
[17] Oishi Y, Ishida H, Nakamura R. A new Landsat8 cloud discrimination algorithm using thresholding tests[J]. International Journal of Remote Sensing, 2018, 39(23):9113-9133.
[18] Shao Z, Pan Y, Diao C, et al. Cloud detection in remote sensing images based on multiscale features-convolutional neural network[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(6):4062-4076.
[19] Hong Y, Hsu K L, Sorooshian S, et al. Precipitation estimation from remotely sensed imagery using an artificial neural network cloud classification system[J]. Journal of Applied Meteorology, 2004, 43(12):1834-1853.
[20] Hall D K, Riggs G A, Salomonson V V. Development of methods for mapping global snow cover using moderate resolution imaging spectroradiometer data[J]. Remote Sensing of Environment, 1995, 54(2):127-140.
[21] Ghasemian N, Akhoondzadeh M. Introducing two random forest based methods for cloud detection in remote sensing images[J]. Advances in Space Research, 2018, 62(2):288-303.
[22] Egli S, Thies B, Bendix J. A hybrid approach for fog retrieval based on a combination of satellite and ground truth data[J]. Remote Sensing, 2018, 10(4):628.
[23] Lee Y, Wahba G, Ackerman S A. Cloud classification of satellite radiance data by multicategory support vector machines[J]. Journal of Atmospheric and Oceanic Technology, 2004, 21(2):159-169.
[24] Ishida H, Oishi Y, Morita K, et al. Development of a support vector machine based cloud detection method for MODIS with the adjustability to various conditions[J]. Remote Sensing of Environment, 2018, 205:390-407.
[25] Li Z, Shen H, Cheng Q, et al. Deep learning based cloud detection for medium and high resolution remote sensing images of different sensors[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2019, 150:197-212.
[26] Ma L, Liu Y, Zhang X, et al. Deep learning in remote sensing applications: A meta-analysis and review[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2019, 152:166-177.
[27] Yu J, Zhang L, Li Q, et al. 3D autoencoder algorithm for lithological mapping using ZY-1 02D hyperspectral imagery: A case study of Liuyuan region[J]. Journal of Applied Remote Sensing, 2021, 15(4):042610.
[28] Tsagkatakis G, Aidini A, Fotiadou K, et al. Survey of deep-learning approaches for remote sensing observation enhancement[J]. Sensors, 2019, 19(18):3929.
[29] Yang J, Guo J, Yue H, et al. CDnet: CNN-based cloud detection for remote sensing imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(8):6195-6211.
[30] Jeppesen J H, Jacobsen R H, Inceoglu F, et al. A cloud detection algorithm for satellite imagery based on deep learning[J]. Remote Sensing of Environment, 2019, 229:247-259.
[31] Drönner J, Korfhage N, Egli S, et al. Fast cloud segmentation using convolutional neural networks[J]. Remote Sensing, 2018, 10(11):1782.
[32] Liu G J, Wang G H, Bi W H, et al. Cloud detection algorithm of remote sensing image based on DenseNet and attention mechanism[J]. Remote Sensing for Natural Resources, 2022, 34(2):88-96. doi:10.6046/zrzyyg.2021128. (in Chinese)
[33] Chai D, Newsam S, Zhang H K, et al. Cloud and cloud shadow detection in Landsat imagery based on deep convolutional neural networks[J]. Remote Sensing of Environment, 2019, 225:307-316.
[34] Mohajerani S, Saeedi P. Cloud-Net+: A cloud segmentation CNN for Landsat8 remote sensing imagery optimized with filtered Jaccard loss function[J/OL]. arXiv, 2020(2020-01-23)[2021-04-23]. https://arxiv.org/abs/2001.08768v1.
[35] Guo Z S, Li C H, Wang Z M, et al. A cloud boundary detection scheme combined with ASLIC and CNN using ZY-3, GF-1/2 satellite imagery[J]. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2018, 42(3):455-458.
[36] Zi Y, Xie F, Jiang Z. A cloud detection method for Landsat8 images based on PCANet[J]. Remote Sensing, 2018, 10(6):877.
[37] Yu J C, Li Y, Zheng X, et al. An effective cloud detection method for Gaofen-5 images via deep learning[J]. Remote Sensing, 2020, 12(13):2106.
[38] Qiu Y F, Chai D F. A deep learning method for Landsat image cloud detection without manually labeled data[J]. Remote Sensing for Land and Resources, 2021, 33(1):102-107. doi:10.6046/gtzyyg.2020090. (in Chinese)
[39] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer, 2015:234-241.
[40] Mohajerani S, Saeedi P. Cloud-Net: An end-to-end cloud detection algorithm for Landsat8 imagery[C]//2019 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2019:1029-1032.
[41] Wu X, Shi Z, Zou Z. A geographic information-driven method and a new large scale dataset for remote sensing cloud/snow detection[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2021, 174:87-104.
[42] He Q, Sun X, Yan Z, et al. DABNet: Deformable contextual and boundary-weighted network for cloud detection in remote sensing images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 60:5601216.
[43] Badrinarayanan V, Kendall A, Cipolla R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 39:2481-2495.
[44] Zhao H, Shi J, Qi X, et al. Pyramid scene parsing network[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017:6230-6239.
[45] Yu C, Wang J, Peng C, et al. BiSeNet: Bilateral segmentation network for real-time semantic segmentation[C]//15th European Conference on Computer Vision (ECCV), 2018:334-349.
[46] Sun K, Zhao Y, Jiang B, et al. High-resolution representations for labeling pixels and regions[J/OL]. arXiv, 2019(2019-04-09)[2021-04-23]. https://arxiv.org/abs/1904.04514.pdf.
[47] Chen L C, Zhu Y, Papandreou G, et al. Encoder-decoder with atrous separable convolution for semantic image segmentation[C]//15th European Conference on Computer Vision (ECCV), 2018:833-851.
