RandLA-Net-based detection of urban building change using airborne LiDAR point clouds
MENG Congtang1(), ZHAO Yindi1(), HAN Wenquan2, HE Chenyang1, CHEN Xiqiu2
1. School of Environment Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China 2. Nanjing Institute of Surveying, Mapping and Geotechnical Investigation Co. Ltd., Nanjing 210019, China
Remote sensing-based change detection of urban buildings can provide building-coverage change information quickly and accurately. However, it is difficult to detect 3D changes rapidly and reliably from image data alone, and conventional point cloud-based methods suffer from low automation and poor precision. To address these problems, this study used airborne LiDAR point clouds and the point cloud semantic segmentation method of RandLA-Net to improve the accuracy and automation of change detection. The difficulty of directly differencing two-epoch data, caused by the unordered nature of point clouds, was overcome through point cloud projection. The standard RandLA-Net, which takes the location and color of points as input features, is mainly applied to the semantic segmentation of street-level point clouds. In this study, large-scale urban airborne point clouds were used together with the inherent reflection intensity of the points and the spectral information assigned to them from images, to explore how different feature combinations affect the precision of the results. It was further found that, in addition to intensity and spectral features, the coordinate information of points is equally important: converting it to relative coordinates significantly improves precision. The experimental results show that RandLA-Net performs significantly better than conventional methods for building extraction and change detection. This study also verifies the feasibility of using deep learning to process LiDAR data for building extraction and change detection, enabling reliable 3D building change detection.
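The two preprocessing ideas summarized above, converting absolute point coordinates to block-relative coordinates before feeding the network, and projecting the unordered two-epoch clouds onto a common grid so they can be differenced, can be sketched as follows. This is a minimal illustration under simple assumptions (a max-height DSM per epoch and a fixed height threshold); the function names and parameters are illustrative, not the authors' exact implementation:

```python
import numpy as np

def to_relative_coords(points):
    """Shift absolute (e.g. projected map) coordinates to block-relative
    coordinates, so the network sees small, comparable values instead of
    large easting/northing offsets."""
    xyz = points[:, :3]
    return np.hstack([xyz - xyz.min(axis=0), points[:, 3:]])

def rasterize_max_height(points, cell=1.0, bounds=None):
    """Project an unordered point cloud onto a 2D grid, keeping the maximum
    height per cell (a simple digital surface model). A shared `bounds`
    origin makes two epochs comparable cell by cell."""
    if bounds is None:
        bounds = points[:, :2].min(axis=0)
    ij = np.floor((points[:, :2] - bounds) / cell).astype(int)
    shape = tuple(ij.max(axis=0) + 1)
    dsm = np.full(shape, -np.inf)
    # unbuffered scatter-max: each point updates its cell's stored height
    np.maximum.at(dsm, (ij[:, 0], ij[:, 1]), points[:, 2])
    return dsm

def height_change_mask(dsm_t1, dsm_t2, threshold=2.0):
    """Cells whose surface height changed by more than `threshold` metres
    between the two epochs (candidate building changes)."""
    return np.abs(dsm_t2 - dsm_t1) > threshold
```

Because the differencing is done on the rasterized grids rather than on the raw points, the lack of point-to-point correspondence between the two epochs no longer matters.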
[1] Murakami H, Nakagawa K, Hasegawa H, et al. Change detection of buildings using an airborne laser scanner[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 1999, 54(2):148-152.
doi: 10.1016/S0924-2716(99)00006-4
[2] Vu T, Matsuoka M, Yamazaki F. LiDAR-based change detection of buildings in dense urban areas[C]// IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2004:3413-3416.
[3] Pang S, Hu X, Wang Z, et al. Object-based analysis of airborne LiDAR data for building change detection[J]. Remote Sensing, 2014, 6(11):10733-10749.
doi: 10.3390/rs61110733
[4] Pirasteh S, Rashidi P, Rastiveis H, et al. Developing an algorithm for buildings extraction and determining changes from airborne LiDAR, and comparing with R-CNN method from drone images[J]. Remote Sensing, 2019, 11(11):1272.
doi: 10.3390/rs11111272
[5] Zeng J J, Zhang X G, Wang G. Urban land surface change detection based on LiDAR point cloud[J]. Urban Geotechnical Investigation and Surveying, 2021(2):92-95.
[6] Matikainen L, Hyyppä J, Ahokas E, et al. Automatic detection of buildings and changes in buildings for updating of maps[J]. Remote Sensing, 2010, 2(5):1217-1248.
doi: 10.3390/rs2051217
[7] Malpica J A, Alonso M C. Urban changes with satellite imagery and LiDAR data[J]. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2010, 38(8):853-858.
[8] Du S, Zhang Y, Qin R, et al. Building change detection using old aerial images and new LiDAR data[J]. Remote Sensing, 2016, 8(12):1030.
doi: 10.3390/rs8121030
[9] Shelhamer E, Long J, Darrell T. Fully convolutional networks for semantic segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(4):640-651.
doi: 10.1109/TPAMI.2016.2572683
pmid: 27244717
[10] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[J/OL]. arXiv, 2014(2015-04-10)[2022-10-15]. https://arxiv.org/abs/1409.1556.
[11] Badrinarayanan V, Kendall A, Cipolla R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(12):2481-2495.
doi: 10.1109/TPAMI.2016.2644615
pmid: 28060704
[12] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016:770-778.
[13] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation[C]// International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2015:234-241.
[14] Ren S, He K, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 39(6):1137-1149.
doi: 10.1109/TPAMI.2016.2577031
[15] Lin T Y, Dollár P, Girshick R, et al. Feature pyramid networks for object detection[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017:2117-2125.
[16] Xie Q F, Yao G Q, Zhang M. Research on high resolution image object detection technology based on Faster R-CNN[J]. Remote Sensing for Land and Resources, 2019, 31(2):38-43.
doi: 10.6046/gtzyyg.2019.02.06
[17] Wu Y, Zhang J, Li Y X, et al. Research on building cluster identification based on improved U-Net[J]. Remote Sensing for Land and Resources, 2021, 33(2):48-54.
doi: 10.6046/gtzyyg.2020278
[18] Lu Q, Qin J, Yao X D, et al. Buildings extraction of GF-2 remote sensing image based on multi-layer perception network[J]. Remote Sensing for Land and Resources, 2021, 33(2):75-84.
doi: 10.6046/gtzyyg.2020289
[19] Liu W Y, Yue A Z, Ji Y, et al. Urban green space extraction from GF-2 remote sensing image based on DeepLabv3+ semantic segmentation model[J]. Remote Sensing for Land and Resources, 2020, 32(2):120-129.
doi: 10.6046/gtzyyg.2020.02.16
[20] An J J, Meng Q Y, Hu D, et al. The detection and determination of the working state of cooling tower in the thermal power plant based on Faster R-CNN[J]. Remote Sensing for Land and Resources, 2021, 33(2):93-99.
doi: 10.6046/gtzyyg.2020184
[21] Guo Y, Wang H, Hu Q, et al. Deep learning for 3D point clouds: A survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 43(12):4338-4364.
doi: 10.1109/TPAMI.2020.3005434
[22] Qi C R, Su H, Mo K, et al. PointNet: Deep learning on point sets for 3D classification and segmentation[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017:652-660.
[23] Wu B, Wan A, Yue X, et al. SqueezeSeg: Convolutional neural nets with recurrent CRF for real-time road-object segmentation from 3D LiDAR point cloud[C]// 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018:1887-1893.
[24] Thomas H, Qi C R, Deschaud J E, et al. KPConv: Flexible and deformable convolution for point clouds[C]// Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019:6411-6420.
[25] Hu Q, Yang B, Xie L, et al. RandLA-Net: Efficient semantic segmentation of large-scale point clouds[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020:11108-11117.
[26] Hackel T, Savinov N, Ladicky L, et al. Semantic3D.net: A new large-scale point cloud classification benchmark[J/OL]. arXiv, 2017(2017-04-12)[2022-10-15]. https://arxiv.org/abs/1704.03847.