Vo-Norvana: Versatile Framework for Efficient Segmentation of Large Point Cloud Data Sets
Publication: Journal of Computing in Civil Engineering
Volume 37, Issue 4
Abstract
Dense three-dimensional (3D) point clouds collected with rapidly evolving data acquisition techniques such as light detection and ranging (lidar) and structure-from-motion (SfM) multiview-stereo (MVS) photogrammetry contain detailed geometric information about a scene suitable for a wide variety of applications. Among the many processes in a typical point cloud processing workflow, segmentation is often a crucial step that groups points with similar attributes to support more advanced modeling and analysis. Segmenting large point cloud data sets (i.e., hundreds of millions to billions of points) can be extremely time consuming and tedious with current tools, which rely primarily on significant manual effort. Although many automated methods have been proposed, the practicality, scalability, and versatility of these approaches remain bottlenecks that stifle processing of large data sets. To overcome these challenges, this paper introduces a novel, generalized segmentation framework called Vo-Norvana, which incorporates a new voxelization technique, a normal variation analysis that considers the positioning uncertainty of the point cloud, and a custom region growing process for clustering. The proposed framework was tested on several large data sets collected in diverse scene types using multiple acquisition platforms, including terrestrial lidar, mobile lidar, airborne lidar, and drone-based SfM-MVS photogrammetry. Evaluated against manual segmentation, the models generated by Vo-Norvana showed average errors in position, orientation, and dimensions of 2.7 mm, 0.083°, and 0.9 mm, respectively. Throughput of over 0.2 million points per second (36 thousand voxels per second) was achieved when segmenting an airborne lidar data set containing over 639 million points into about 1 million segments.
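The abstract names three components: voxelization, normal variation analysis, and region growing for clustering. The full algorithms are not given in the abstract, so the sketch below is a generic, hypothetical illustration of voxel-based region growing on surface normals, not the authors' implementation. The 5° angle threshold, the 26-connected voxel neighborhood, and all function names are illustrative assumptions; Vo-Norvana's actual normal variation analysis additionally models positioning uncertainty, which is omitted here.

```python
import numpy as np
from collections import deque

def voxelize(points, voxel_size):
    """Group point indices by integer voxel key (floor of scaled coordinates)."""
    keys = np.floor(points / voxel_size).astype(int)
    voxels = {}
    for i, k in enumerate(map(tuple, keys)):
        voxels.setdefault(k, []).append(i)
    return voxels

def fit_normal(pts):
    """Plane normal = right singular vector of the smallest singular value
    of the centered point matrix (equivalent to PCA on the covariance)."""
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]  # numpy orders singular values descending

def region_grow(voxels, points, angle_thresh_deg=5.0):
    """Merge 26-connected voxels whose fitted normals agree within the
    angular threshold; returns {voxel_key: segment_label}."""
    normals = {k: fit_normal(points[idx])
               for k, idx in voxels.items() if len(idx) >= 3}
    cos_t = np.cos(np.radians(angle_thresh_deg))
    labels, next_label = {}, 0
    for seed in normals:
        if seed in labels:
            continue
        labels[seed] = next_label
        queue = deque([seed])
        while queue:
            k = queue.popleft()
            # Visit the 26 neighboring voxels (offsets -1, 0, +1 per axis).
            for d in np.ndindex(3, 3, 3):
                n = (k[0] + d[0] - 1, k[1] + d[1] - 1, k[2] + d[2] - 1)
                if (n in normals and n not in labels
                        and abs(normals[k] @ normals[n]) >= cos_t):
                    labels[n] = next_label
                    queue.append(n)
        next_label += 1
    return labels
```

Growing regions over voxels rather than individual points is what makes this kind of approach scale: the neighbor search and normal comparison run on thousands of voxels instead of millions of points.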
Data Availability Statement
Some data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request (benchmark, TLS, and UAS-SfM data sets). Some data, models, or code used during the study were provided by a third party (MLS data set). Direct requests for these materials may be made to the provider as indicated in the Acknowledgments. Some data, models, or code generated or used during the study are available in a repository online in accordance with funder data retention policies (the ALS data set is available at https://geo.nyu.edu/catalog/nyu_2451_38684).
Acknowledgments
This material is based on work supported by the National Science Foundation under Grant Nos. CMMI-1351487, OIA-2040735, and EEC-1937070. This work is partially supported by the Pacific Northwest Transportation Consortium (PacTrans) and the University Venture Development Fund (UVDF) from Oregon State University (OSU). The authors would like to thank the Oregon Department of Transportation and Chase Simpson at OSU for collecting and preparing some of the data, as well as Leica Geosystems and David Evans and Associates for providing hardware and software used in this research. CloudCompare was also used for some data visualization. The Confederated Tribes of the Grand Ronde graciously provided access to the Blue Heron Paper Mill site and supported site data acquisition. The authors have financial interests in EZDataMD LLC, which commercializes the technology described in this research. The conduct, outcomes, or reporting of this research could benefit EZDataMD LLC and could potentially benefit the authors.
Copyright
© 2023 American Society of Civil Engineers.
History
Received: Apr 26, 2022
Accepted: Jan 24, 2023
Published online: Mar 30, 2023
Published in print: Jul 1, 2023
Discussion open until: Aug 30, 2023