 
Remote Sensing for Natural Resources, 2024, Vol. 36, Issue (4): 149-157    DOI: 10.6046/zrzyyg.2023169
Technical Methods
An adversarial learning-based unsupervised domain adaptation method for semantic segmentation of high-resolution remote sensing images
PAN Junjie(), SHEN Li(), YAN Xin, NIE Xin, DONG Kuanlin
Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu 610097, China

Abstract

The key to the high performance of semantic segmentation models on high-resolution remote sensing images is a high degree of domain consistency between the training and test sets. In practice, domain discrepancies between datasets, including differences in geographic location, sensor imaging patterns, and weather conditions, cause the accuracy of a model trained on one dataset to drop significantly when it is applied to another. Domain adaptation is an effective strategy for addressing this issue. From the perspective of domain-adaptive modeling, this study proposes an adversarial learning-based unsupervised domain adaptation framework for the semantic segmentation of high-resolution remote sensing images. The framework incorporates an entropy-weighted attention mechanism into the global domain alignment module and a class-wise domain feature aggregation mechanism into the local domain alignment module, alleviating the domain discrepancy between the source and target domains. Additionally, an object context representation (OCR) module and an atrous spatial pyramid pooling (ASPP) module are introduced to fully exploit spatial- and object-level contextual information in the images, and an OCR/ASPP dual-classifier combination strategy is proposed to improve segmentation accuracy. Experimental results show that the proposed method achieves superior cross-domain segmentation performance on two publicly available datasets, outperforming other methods of the same type.
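The output-space adversarial alignment described above can be illustrated with a minimal numerical sketch. This is illustrative only, not the authors' implementation: `d_src` and `d_tgt` are hypothetical per-pixel discriminator outputs (the probability that a prediction map comes from the source domain), and the two losses are the standard binary cross-entropy objectives used in adversarial UDA for segmentation.

```python
import numpy as np

def bce(p, target):
    """Binary cross-entropy between discriminator outputs p and a constant domain label."""
    eps = 1e-8
    p = np.clip(p, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(p) + (1.0 - target) * np.log(1.0 - p)))

# Hypothetical per-pixel discriminator outputs (probability "source") on 4x4 maps.
rng = np.random.default_rng(0)
d_src = rng.uniform(0.6, 0.9, size=(4, 4))   # discriminator is fairly sure these are source
d_tgt = rng.uniform(0.1, 0.4, size=(4, 4))   # ... and that these are target

# Discriminator step: label source maps as 1 and target maps as 0.
loss_d = 0.5 * (bce(d_src, 1.0) + bce(d_tgt, 0.0))

# Adversarial (segmentation-network) step: fool the discriminator by
# labeling *target* outputs as source (1), pulling the two domains together.
loss_adv = bce(d_tgt, 1.0)

print(round(loss_d, 4), round(loss_adv, 4))
```

In a full training loop these losses would be backpropagated alternately through the discriminator and the segmentation network; here only the loss values are computed.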

Key words: high-resolution remote sensing images; semantic segmentation; adversarial learning; unsupervised domain adaptation
Received: 2023-06-09      Published: 2024-12-23
CLC number: TP751; P237
Fund support: National Key Research and Development Program of China "Spatio-temporal knowledge graph service platform and application validation" (2022YFB3904205); National Natural Science Foundation of China projects "Post-disaster damaged building extraction from high-resolution remote sensing images based on weakly supervised deep learning" (42071386) and "Scale-effect modeling of raster categorical data based on homogenization decomposition and analytical synthesis" (41971330); Basic Research Project of the Science and Technology Department of Sichuan Province "Knowledge-based services for survey and monitoring results on the non-grain conversion of cultivated land" (2023JDKY0017-3)
Corresponding author: SHEN Li (1986-), male, Ph.D., associate professor, mainly engaged in teaching and research on intelligent interpretation of remote sensing images and remote sensing for resources and environment. Email: rsshenli@outlook.com
About the first author: PAN Junjie (1998-), male, master's student, mainly engaged in research on photogrammetry and remote sensing. Email: peter_panjunjie@163.com
Cite this article:
PAN Junjie, SHEN Li, YAN Xin, NIE Xin, DONG Kuanlin. An adversarial learning-based unsupervised domain adaptation method for semantic segmentation of high-resolution remote sensing images. Remote Sensing for Natural Resources, 2024, 36(4): 149-157.
Link to this article:
https://www.gtzyyg.com/CN/10.6046/zrzyyg.2023169      or      https://www.gtzyyg.com/CN/Y2024/V36/I4/149
Fig.1  Schematic diagram of the OA-GAL framework
Fig.2  OCR/ASPP dual-classifier combination
Fig.3  EWG module
Fig.4  CAL module
Fig.5  Sample images from the source- and target-domain datasets
Tab.1  Examples of UDA segmentation results for Potsdam→Vaihingen
Model            mIoU     IoU
                          Clutter  Car     Tree    Low vegetation  Building  Road
Deeplabv2        0.2647   0.0663   0.0745  0.1865  0.2443          0.4886    0.5279
AdaptSegNet      0.4231   0.0752   0.2634  0.4578  0.4011          0.7201    0.6209
CLAN             0.4101   0.0847   0.1640  0.5441  0.2741          0.7730    0.6199
ADVENT           0.4348   0.1686   0.2218  0.5107  0.3169          0.7682    0.6224
Metacorrection   0.4404   0.1028   0.2495  0.5171  0.4001          0.7448    0.6281
OA-GAL           0.4748   0.1148   0.2195  0.5732  0.4355          0.8182    0.6874
Tab.2  Evaluation of the Potsdam→Vaihingen comparison experiment results
Fig.6  Visualization examples of pseudo-labels and entropy maps
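Entropy maps such as those visualized in Fig.6 are typically computed as the per-pixel Shannon entropy of the segmentation network's softmax output, as in ADVENT-style methods. The sketch below is illustrative; the normalization by log C is an assumption for readability, not necessarily the paper's exact formulation.

```python
import numpy as np

def entropy_map(prob):
    """Per-pixel Shannon entropy of a softmax output, normalized to [0, 1].

    prob: array of shape (C, H, W) with class probabilities summing to 1 per pixel.
    High values mark uncertain (typically target-domain) pixels.
    """
    eps = 1e-8
    c = prob.shape[0]
    ent = -np.sum(prob * np.log(prob + eps), axis=0)  # (H, W)
    return ent / np.log(c)                            # normalize by max entropy log(C)

# A confident pixel vs. a maximally uncertain one (C = 4 classes, 1x2 "image").
prob = np.zeros((4, 1, 2))
prob[:, 0, 0] = [0.97, 0.01, 0.01, 0.01]  # near one-hot -> entropy near 0
prob[:, 0, 1] = 0.25                      # uniform      -> entropy = 1
e = entropy_map(prob)
print(e.round(2))
```

Maps like `e` can then serve as per-pixel weights, focusing adversarial alignment on uncertain regions, which is the intuition behind entropy-weighted attention.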
Model            OCR/ASPP  CAL  EWG  mIoU
Baseline         -         -    -    0.4231
OCR/ASPP         √         -    -    0.4382
OCR/ASPP+CAL     √         √    -    0.4519
OCR/ASPP+EWG     √         -    √    0.4537
OA-GAL           √         √    √    0.4748
Tab.3  Ablation experiment results
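The mIoU values in the tables above are the mean of the per-class IoU scores. A minimal sketch of that computation from a hypothetical confusion matrix:

```python
import numpy as np

def iou_per_class(conf):
    """Per-class IoU from a confusion matrix conf[i, j] = pixels of class i predicted as j."""
    tp = np.diag(conf).astype(float)          # true positives per class
    fp = conf.sum(axis=0) - tp                # false positives: predicted as class, wrong
    fn = conf.sum(axis=1) - tp                # false negatives: class pixels missed
    return tp / (tp + fp + fn)                # intersection over union

# Hypothetical 3-class confusion matrix (rows: ground truth, cols: prediction).
conf = np.array([[50,  5,  5],
                 [ 5, 80, 15],
                 [ 0, 10, 30]])
iou = iou_per_class(conf)
miou = float(iou.mean())
print(iou.round(3), round(miou, 3))
```

In practice the confusion matrix is accumulated over all test tiles before the per-class division, so that small classes such as Car are not washed out by per-image averaging.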
[1] Lu X Y. Deep learning method for large-scale road extraction from high resolution remote sensing imagery[J]. Geomatics and Information Science of Wuhan University, 2023, 48(5):821.
[2] Xue Y, Qin C, Wu B S, et al. Automatic extraction of mountain river information from multiple Chinese high-resolution remote sensing satellite images[J]. Journal of Tsinghua University (Science and Technology), 2023, 63(1):134-145.
[3] Zhao H Q, Yu B, Chen F, et al. Research status of landslide extraction methods based on high-resolution satellite remote sensing images[J]. Remote Sensing Technology and Application, 2023, 38(1):108-115. doi: 10.11873/j.issn.1004-0323.2023.1.0108
[4] Wang L M, Wang Y Z. Buildings extraction based on high-resolution remote sensing imagery[J]. Bulletin of Surveying and Mapping, 2023(6):180-183. doi: 10.13474/j.cnki.11-2246.2023.0191
[5] Zhang X C, Huang J F, Ning T. Progress and prospect of cultivated land extraction from high-resolution remote sensing images[J]. Geomatics and Information Science of Wuhan University, 2023, 48(10):1582-1590.
[6] Dong X C, Liu Z Y, Jiang Y, et al. Winter wheat extraction of WorldView-2 image based on semantic segmentation method[J]. Remote Sensing Technology and Application, 2022, 37(3):564-570. doi: 10.11873/j.issn.1004-0323.2022.3.0564
[7] Yang J, Yu X Z. Semantic segmentation of high-resolution remote sensing images based on improved FuseNet combined with atrous convolution[J]. Geomatics and Information Science of Wuhan University, 2022, 47(7):1071-1080.
[8] Chen B, Xia M, Qian M, et al. MANet:A multi-level aggregation network for semantic segmentation of high-resolution remote sensing images[J]. International Journal of Remote Sensing, 2022, 43(15/16):5874-5894.
[9] Wang Y, Zeng X, Liao X, et al. B-FGC-net:A building extraction network from high resolution remote sensing imagery[J]. Remote Sensing, 2022, 14(2):269.
[10] Guo Y, Liu Y, Georgiou T K, et al. A review of semantic segmentation using deep neural networks[J]. International Journal of Multimedia Information Retrieval, 2018, 7(2):87-93.
[11] Shelhamer E, Long J, Darrell T. Fully convolutional networks for semantic segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(4):640-651.
[12] Ronneberger O, Fischer P, Brox T. U-net:Convolutional networks for biomedical image segmentation[C]// Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015,Lecture Notes in Computer Science. Cham:Springer International Publishing, 2015:234-241.
[13] Badrinarayanan V, Kendall A, Cipolla R. SegNet:A deep convolutional encoder-decoder architecture for image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(12):2481-2495. doi: 10.1109/TPAMI.2016.2644615 pmid: 28060704
[14] Zhao H, Shi J, Qi X, et al. Pyramid scene parsing network[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Honolulu,HI,USA.IEEE, 2017:6230-6239.
[15] Chen L C, Papandreou G, Kokkinos I, et al. DeepLab:Semantic image segmentation with deep convolutional nets,atrous convolution,and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40(4):834-848.
[16] Zhao S, Yue X, Zhang S, et al. A review of single-source deep unsupervised visual domain adaptation[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(2):473-493.
[17] Xu M, Wu M, Chen K, et al. The eyes of the gods:A survey of unsupervised domain adaptation methods based on remote sensing data[J]. Remote Sensing, 2022, 14(17):4380.
[18] Zhu J Y, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]// 2017 IEEE International Conference on Computer Vision (ICCV).Venice,Italy.IEEE, 2017:2242-2251.
[19] Yang Y, Soatto S. FDA:Fourier domain adaptation for semantic segmentation[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).Seattle,WA,USA.IEEE, 2020:4084-4094.
[20] Hoffman J, Tzeng E, Park T, et al. CyCADA:Cycle-consistent adversarial domain adaptation[J/OL]. arXiv, 2017. https://arxiv.org/abs/1711.03213.
[21] Ma H, Lin X, Wu Z, et al. Coarse-to-fine domain adaptive semantic segmentation with photometric alignment and category-center regularization[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).Nashville,TN,USA.IEEE, 2021:4050-4059.
[22] Zou Y, Yu Z, Vijaya Kumar B V K, et al. Unsupervised domain adaptation for semantic segmentation via class-balanced self-training[C]// Computer Vision - ECCV 2018:15th European Conference,Munich,Germany,September 8-14,2018,Proceedings,Part III.ACM, 2018:297-313.
[23] Zheng Z, Yang Y. Rectifying pseudo label learning via uncertainty estimation for domain adaptive semantic segmentation[J]. International Journal of Computer Vision, 2021, 129(4):1106-1120.
[24] Tsai Y H, Hung W C, Schulter S, et al. Learning to adapt structured output space for semantic segmentation[C]// 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.Salt Lake City,UT,USA.IEEE, 2018:7472-7481.
[25] Luo Y, Zheng L, Guan T, et al. Taking a closer look at domain shift:Category-level adversaries for semantics consistent domain adaptation[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).Long Beach,CA,USA.IEEE, 2019:2502-2511.
[26] Vu T H, Jain H, Bucher M, et al. ADVENT:Adversarial entropy minimization for domain adaptation in semantic segmentation[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).Long Beach,CA,USA.IEEE, 2019:2512-2521.
[27] Guo X, Yang C, Li B, et al. MetaCorrection:Domain-aware meta loss correction for unsupervised domain adaptation in semantic segmentation[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).Nashville,TN,USA.IEEE, 2021:3926-3935.
[28] Yuan Y, Chen X, Wang J. Object-contextual representations for semantic segmentation[J/OL]. arXiv, 2019. https://arxiv.org/abs/1909.11065.
[29] Yuan Y, Chen X, Chen X, et al. Segmentation transformer:Object-contextual representations for semantic segmentation[J/OL]. arXiv, 2019(2021-04-30). https://arxiv.org/abs/1909.11065v2.
[30] Huang S, Han W, Chen H, et al. Recognizing zucchinis intercropped with sunflowers in UAV visible images using an improved method based on OCRNet[J]. Remote Sensing, 2021, 13(14):2706.
[31] Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks[J/OL]. arXiv, 2015. https://arxiv.org/abs/1511.06434.
[32] Bottou L. Large-scale machine learning with stochastic gradient descent[C]// Proceedings of COMPSTAT2010:19th International Conference on Computational Statistics.Physica-Verlag HD, 2010:177-186.
[33] Kingma D P, Ba J. Adam:A method for stochastic optimization[J/OL]. arXiv, 2014(2017-01-30). https://arxiv.org/abs/1412.6980.
[34] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).Las Vegas,NV,USA.IEEE, 2016:770-778.