 
Remote Sensing for Natural Resources, 2025, Vol. 37, Issue (3): 104-112    DOI: 10.6046/zrzyyg.2024047
Technology and Methodology
Detecting ships from SAR images based on high-dimensional contextual attention and dual receptive field enhancement
GUO Wei(), LI Yu(), JIN Haibo
School of Software, Liaoning University of Technology, Huludao 125105, China

Abstract

The abundant contextual information in synthetic aperture radar (SAR) images remains underutilized in deep learning-based ship detection. Hence, this study proposed a novel method for detecting ships from SAR images based on high-dimensional contextual attention and dual receptive field enhancement. The dual receptive field enhancement was employed to extract multi-dimensional feature information from SAR images, thereby guiding the dynamic attention matrix to learn rich contextual information during the coarse-to-fine extraction of high-dimensional features. Based on YOLOv7, a YOLO-HD network was constructed by incorporating a lightweight convolutional module, a lightweight asymmetric multi-level compression detection head, and a new loss function, XIoU. Comparative experiments were conducted on the E-HRSID and SSDD datasets. The proposed method achieved mean average precision (mAP) values of 91.36% and 97.64%, respectively, representing improvements of 4.56 and 9.83 percentage points over the original model and outperforming other classical models.
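The abstract introduces XIoU as a new bounding-box regression loss. Its exact penalty terms are defined in the paper body rather than here, so the sketch below shows only the standard intersection over union that every IoU-family loss (IoU, CIoU, XIoU, …) builds on; the function name `iou` and the (x1, y1, x2, y2) corner box format are illustrative assumptions, not the paper's code.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# An IoU-family loss is then 1 - iou plus method-specific penalty terms.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.142857...
```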

Key words: deep learning; computer vision; YOLOv7; synthetic aperture radar (SAR) image; ship detection; attention mechanism
Received: 2024-01-26      Published: 2025-07-01
CLC number: TP79
Funding: National Natural Science Foundation of China project "Research on Dynamic Reliability Modeling and Optimal Maintenance of Special Steel Production Systems under the Influence of Uncertain Factors" (62173171)
Corresponding author: LI Yu (1998-), male, master's degree candidate, research interest: image and visual information computing. Email: ly1361621985@gmail.com
About the first author: GUO Wei (1970-), female, M.S., associate professor, mainly engaged in research on image and visual information computing. Email: guowei@lntu.edu.cn
Cite this article:
GUO Wei, LI Yu, JIN Haibo. Detecting ships from SAR images based on high-dimensional contextual attention and dual receptive field enhancement. Remote Sensing for Natural Resources, 2025, 37(3): 104-112.
Link to this article:
https://www.gtzyyg.com/CN/10.6046/zrzyyg.2024047      or      https://www.gtzyyg.com/CN/Y2025/V37/I3/104
Fig.1  Architecture of the YOLO-HD model
Fig.2  Implementation details of the high-dimensional contextual attention structure
Fig.3  Structure of the dual receptive field enhancement
Fig.4  Implementation details of the lightweight convolutional layer
Model        |        E-HRSID              |         SSDD
             | P/%    R/%    mAP/%   F1    | P/%    R/%    mAP/%   F1
CenterNet    | 96.63  60.95  76.77   0.75  | 97.46  75.42  89.04   0.83
EfficientDet | 97.29  22.36  34.73   0.36  | 95.77  25.09  73.83   0.40
Faster R-CNN | 34.30  35.71  26.95   0.35  | 73.50  68.07  88.12   0.71
RetinaNet    | 93.31  27.65  34.35   0.43  | 86.81  63.14  80.86   0.73
SSD          | 88.79  16.34  40.64   0.28  | 95.80  42.07  89.58   0.58
YOLOv7       | 88.16  78.16  86.80   0.83  | 90.80  78.00  87.81   0.84
YOLOv8       | 89.53  83.27  90.47   0.86  | 95.40  91.80  97.10   0.94
YOLO-HD      | 90.65  84.36  91.36   0.87  | 95.25  95.55  97.64   0.95
Tab.1  Comparative experimental results on the E-HRSID and SSDD datasets
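The F1 column in Tab.1 is the harmonic mean of precision and recall, F1 = 2PR/(P+R). A quick pure-Python check (the helper name `f1` is ours, not the paper's) reproduces the tabulated values from the P and R columns:

```python
def f1(p_pct, r_pct):
    """F1 = 2PR/(P+R); inputs in percent, result on the table's 0-1 scale."""
    return 2 * p_pct * r_pct / (p_pct + r_pct) / 100

# YOLO-HD rows of Tab.1: E-HRSID and SSDD.
print(round(f1(90.65, 84.36), 2))  # 0.87
print(round(f1(95.25, 95.55), 2))  # 0.95
```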
Fig.5  P-R curves of each model
Tab.2  Detection results of the four models (for each of four sample images: the SAR image, the ground truth, and the detections by YOLOv7, YOLO-HD, YOLOv8, and CenterNet; images not reproduced here)
Each ablation variant adds a combination of the DRFE, HD-ELAN, LAMCD, XIoU, and L-ELAN modules to the baseline.
Model          | P/%    R/%    mAP/%  | Params/MB | GFLOPS
Baseline       | 88.16  78.16  86.80  | 38.4      | 105.4
Net1           | 88.14  81.39  88.85  | 43.4      | 108.7
Net2           | 90.54  81.10  89.79  | 39.4      | 162.1
Net3           | 90.45  78.90  89.29  | 37.6      | 141.2
Net4           | 91.21  82.92  90.89  | 51.0      | 187.0
Proposed model | 90.65  84.36  91.36  | 56.8      | 143.2
Tab.3  Ablation experiments
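The 4.56- and 9.83-percentage-point gains quoted in the abstract follow directly from the YOLOv7 and YOLO-HD mAP values in Tab.1 (the baseline row of the ablation table matches YOLOv7 on E-HRSID). A one-line arithmetic check, with dictionary names of our own choosing:

```python
# mAP/% of the original YOLOv7 model and of YOLO-HD, taken from Tab.1.
baseline = {"E-HRSID": 86.80, "SSDD": 87.81}
yolo_hd = {"E-HRSID": 91.36, "SSDD": 97.64}

# Improvement in percentage points, matching the abstract's 4.56 and 9.83.
gains = {k: round(yolo_hd[k] - baseline[k], 2) for k in baseline}
print(gains)  # {'E-HRSID': 4.56, 'SSDD': 9.83}
```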