Technical Papers
Mar 24, 2023

Dense 3D Reconstruction of Building Scenes by AI-Based Camera–Lidar Fusion and Odometry

Publication: Journal of Computing in Civil Engineering
Volume 37, Issue 4

Abstract

Scanning is a key element of many use cases in the architectural, engineering, construction, and operation (AECO) industry. It provides point clouds used for construction quality assurance, scan-to-building information modeling (BIM) workflows, and construction surveys. However, data acquisition using static laser scanners or photogrammetry is labor-intensive during both scanning and postprocessing. Mobile scanners are, in principle, the solution to this problem, given their potential to dramatically reduce onsite scanning effort and eliminate postprocessing work. Yet current mobile mapping devices produce point clouds of relatively low resolution. In this paper, we propose a dense three-dimensional (3D) reconstruction pipeline that improves the resolution of point clouds and is suitable for handheld scanners composed of a color camera and a lidar. We fuse time-synchronized and spatially registered images and lidar sweeps, using a spatial artificial intelligence (AI) method, into dense scans of higher resolution, which are then used for progressive reconstruction. The novelty of our approach is that we first increase the precision and density of individual lidar scans by inferring additional geometric constraints from feature maps predicted in the corresponding images. We then automatically register these scans together, reconstructing the scene progressively in an odometric manner. We built a prototype scanner, implemented our reconstruction pipeline as a software package, and tested the complete system in both indoor and outdoor case studies. The results showed that our method reduced overall point cloud noise by 11% and increased point density approximately sixfold.
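To make the pipeline's two stages concrete, the sketches below illustrate them with standard, well-known techniques; they are illustrative stand-ins under stated assumptions, not the implementation evaluated in the paper.

First, the fusion step. A minimal Python sketch of projecting a lidar sweep into a time-synchronized camera frame and densifying the resulting sparse depth map. A simple morphological fill stands in for the learned spatial-AI model; the camera intrinsics K and the lidar-to-camera extrinsics R, t are assumed to come from a prior calibration step.

```python
# Sketch only: a classical stand-in for the learned densification step.
import numpy as np
import cv2

def sparse_depth_map(points_lidar, K, R, t, h, w):
    """Project Nx3 lidar points into an h x w depth image (0 = no return)."""
    pts_cam = points_lidar @ R.T + t           # lidar frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]     # keep points in front of the camera
    uv = pts_cam @ K.T                         # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.zeros((h, w), np.float32)
    depth[v[ok], u[ok]] = pts_cam[ok, 2]       # last write wins; fine for a sketch
    return depth

def densify(depth, kernel=5):
    """Fill gaps between lidar returns by morphological dilation.
    (A careful implementation would invert depths first so the fill
    favors the nearest surface, as in classical CPU depth completion.)"""
    k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel, kernel))
    return np.where(depth > 0, depth, cv2.dilate(depth, k))
```

Second, the odometric registration step, which chains densified scans into a progressively growing model, ultimately rests on rigid alignment of corresponding 3D points. The closed-form SVD solution below is the textbook least-squares alignment; an odometry front end would supply the correspondences (e.g., from matched image or geometric features), which this sketch takes as given.

```python
def fit_rigid(P, Q):
    """Least-squares rigid transform (R, t) with R @ p + t ~= q,
    for corresponding Nx3 point sets P and Q (classical SVD method)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # D guards against reflections
    t = cq - R @ cp
    return R, t
```

Accumulating these scan-to-scan transforms yields the scanner trajectory, and applying them to the densified scans assembles the final dense point cloud.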

Data Availability Statement

The three reconstructed models portrayed in Figs. 12(a–c) are available from the corresponding author upon reasonable request.

Acknowledgments

The research leading to this paper received funding from BP, GeoSLAM, Laing O’Rourke, Topcon, and Trimble, and we thank these companies for making our research possible. We gratefully acknowledge the collaboration of all the industrial partners. Any opinions, findings, conclusions, or recommendations expressed in this material are ours and do not necessarily reflect the views of the aforementioned companies. We would also like to thank Hughes Hall College for providing access to their grounds.

Information & Authors

Information

Published In

Journal of Computing in Civil Engineering
Volume 37, Issue 4, July 2023

History

Received: Mar 8, 2022
Accepted: Jan 20, 2023
Published online: Mar 24, 2023
Published in print: Jul 1, 2023
Discussion open until: Aug 24, 2023

Authors

Affiliations

Maciej Trzeciak
Ph.D. Candidate, Dept. of Engineering, Univ. of Cambridge, Cambridge CB3 0FA, UK (corresponding author). ORCID: https://orcid.org/0000-0001-8188-487X. Email: [email protected]
Ioannis Brilakis, M.ASCE
Laing O’Rourke Professor, Dept. of Engineering, Univ. of Cambridge, Cambridge CB3 0FA, UK. ORCID: https://orcid.org/0000-0003-1829-2083

Cited by

  • A Streamlined Laser Scanning Verticality Check Method for Installation of Prefabricated Wall Panels, Journal of Construction Engineering and Management, 10.1061/JCEMD4.COENG-14989, 150, 11, (2024).
