Abstract

This paper presents a data set collected periodically on a construction site, intended for evaluating the performance of simultaneous localization and mapping (SLAM) algorithms used by mobile scanners and autonomous robots. It includes ground-truth scans of the site captured with a terrestrial laser scanner, along with five sequences of spatially registered and time-synchronized images, lidar scans, and inertial data from our prototype handheld scanner. We also recover the ground-truth trajectory of the mobile scanner by registering the sequential lidar scans to the ground-truth scans, and we show how to use a popular software package to measure the accuracy of SLAM algorithms against this trajectory automatically. To the best of our knowledge, this is the first publicly accessible data set consisting of periodically collected sequential data on a construction site.
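The last step, measuring SLAM accuracy against the recovered ground-truth trajectory with a popular software package, can be illustrated with a minimal sketch. This assumes the evaluation tool is the widely used evo package and that both trajectories are exported in the TUM text format; the file names below are hypothetical and not part of the released data set.

# Hedged sketch: translational Absolute Pose Error (APE) of a SLAM estimate
# against a ground-truth trajectory, using the evo Python API.
from evo.tools import file_interface
from evo.core import sync, metrics

# Load both trajectories (TUM format: timestamp x y z qx qy qz qw).
traj_gt = file_interface.read_tum_trajectory_file("ground_truth.txt")   # hypothetical file
traj_est = file_interface.read_tum_trajectory_file("slam_estimate.txt")  # hypothetical file

# Match poses by timestamp, then rigidly align the estimate to the ground truth.
traj_gt, traj_est = sync.associate_trajectories(traj_gt, traj_est)
traj_est.align(traj_gt, correct_scale=False)

# Translational APE; RMSE is the usual headline statistic.
ape = metrics.APE(metrics.PoseRelation.translation_part)
ape.process_data((traj_gt, traj_est))
print("APE RMSE [m]:", ape.get_statistic(metrics.StatisticType.rmse))

The same comparison is available from evo's command line (e.g., evo_ape), which makes it straightforward to script the evaluation across all five sequences.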


Data Availability Statement

The complete data set described in this paper is available at the following link: https://github.com/mac137/ConSLAM.

Acknowledgments

The authors would like to thank Laing O’Rourke for allowing access to their construction site and collecting the ground-truth scans. We also acknowledge Romain Carriquiry-Borchiari of Ubisoft France for his help in rendering some of the figures and Amanda Xu, a summer intern at the University of Cambridge, who helped us with anonymizing the images. This work is also supported by the EU Horizon 2020 BIM2TWIN: Optimal Construction Management & Production Control project under Agreement No. 958398. The first author would also like to thank BP, GeoSLAM, Laing O’Rourke, Topcon, and Trimble for sponsoring his studentship funding.



Published In

Journal of Computing in Civil Engineering
Volume 37, Issue 3, May 2023

History

Received: Oct 7, 2022
Accepted: Jan 11, 2023
Published online: Mar 11, 2023
Published in print: May 1, 2023
Discussion open until: Aug 11, 2023


Authors

Affiliations

Maciej Trzeciak, Ph.D. Student, Dept. of Engineering, Univ. of Cambridge, Cambridge CB2 1PZ, UK (corresponding author). ORCID: https://orcid.org/0000-0001-8188-487X. Email: [email protected]
Kacper Pluta, Researcher, Inria Centre at Université Côte d'Azur, 2004 Route des Lucioles, Sophia-Antipolis Cedex 06902, France. Email: [email protected]
Yasmin Fathy, Researcher, Dept. of Engineering, Univ. of Cambridge, Cambridge CB2 1PZ, UK. ORCID: https://orcid.org/0000-0001-7398-5283. Email: [email protected]
Lucio Alcalde, Managing Land Surveyor, Laing O’Rourke, Bridge Place, 1-2 Anchor Blvd., Crossways Blvd., Dartford DA2 6SN, UK. Email: [email protected]
Stanley Chee, Land Surveyor, Laing O’Rourke, Bridge Place, 1-2 Anchor Blvd., Crossways Blvd., Dartford DA2 6SN, UK. Email: [email protected]
Antony Bromley, Lead Digital Engineer, Laing O’Rourke, Bridge Place, 1-2 Anchor Blvd., Crossways Blvd., Dartford DA2 6SN, UK. Email: [email protected]
Ioannis Brilakis, Professor, Dept. of Engineering, Univ. of Cambridge, Cambridge CB2 1PZ, UK. ORCID: https://orcid.org/0000-0003-1829-2083. Email: [email protected]
Pierre Alliez, Senior Researcher, Inria Centre at Université Côte d'Azur, 2004 Route des Lucioles, Sophia-Antipolis Cedex 06902, France. ORCID: https://orcid.org/0000-0002-6214-4005. Email: [email protected]


Cited by

  • A Streamlined Laser Scanning Verticality Check Method for Installation of Prefabricated Wall Panels, Journal of Construction Engineering and Management, 10.1061/JCEMD4.COENG-14989, 150, 11, (2024).
  • Construction Photo Localization in 3D Reality Models for Vision-Based Automated Daily Project Monitoring, Journal of Computing in Civil Engineering, 10.1061/JCCEE5.CPENG-5353, 37, 6, (2023).
