Technical Papers
Sep 7, 2022

Robotic Cross-Platform Sensor Fusion and Augmented Visualization for Large Indoor Space Reality Capture

Publication: Journal of Computing in Civil Engineering
Volume 36, Issue 6

Abstract

The advancement of sensors, robotics, and artificial intelligence has enabled a range of methods, such as simultaneous localization and mapping (SLAM), semantic segmentation, and point cloud registration, that support the reality capture process. Completely investigating an unknown indoor space, which requires both a general spatial understanding and a detailed scene reconstruction for a digital twin model, demands insight into the characteristics of different ranging sensors and into the techniques for combining data from distinct systems. This paper discusses the necessity and workflow of using two distinct types of scanning sensors, a depth camera and a light detection and ranging (LiDAR) sensor, paired with a quadrupedal ground robot to obtain spatial data of a large, complex indoor space. A digital twin model was built in real time with two SLAM methods and then consolidated using fast point feature histograms (FPFH) for geometric feature extraction and fast global registration for alignment. Finally, the reconstructed scene was streamed to a HoloLens 2 headset to create the illusion of seeing through walls. Results showed that both the depth camera and the LiDAR sensor could handle large-space reality capture with the required coverage and fidelity, including textural information. The proposed workflow and analytical pipeline therefore provide a hierarchical data fusion strategy that integrates the advantages of distinct sensing methods to carry out a complete indoor investigation, and they validate the feasibility of robot-assisted reality capture in larger spaces.
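
As an illustration of the consolidation step described above, the sketch below shows FPFH feature extraction followed by fast global registration using the open-source Open3D library in Python. This is not the authors' implementation; the input file names, voxel size, and distance thresholds are assumptions chosen only to make the example self-contained.

```python
# Illustrative sketch (not the authors' code): coarsely align a depth-camera
# map to a LiDAR map with FPFH features and fast global registration, then
# refine with ICP, using Open3D. File names and parameter values are assumed.
import open3d as o3d

VOXEL = 0.05  # downsampling voxel size in meters (assumed)

def preprocess(pcd, voxel):
    """Downsample, estimate normals, and compute FPFH features."""
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, fpfh

# Hypothetical inputs: one point cloud from the depth-camera SLAM map,
# one from the LiDAR SLAM map.
source = o3d.io.read_point_cloud("depth_camera_map.ply")
target = o3d.io.read_point_cloud("lidar_map.ply")

source_down, source_fpfh = preprocess(source, VOXEL)
target_down, target_fpfh = preprocess(target, VOXEL)

# Coarse alignment via fast global registration on FPFH correspondences.
coarse = o3d.pipelines.registration.registration_fgr_based_on_feature_matching(
    source_down, target_down, source_fpfh, target_fpfh,
    o3d.pipelines.registration.FastGlobalRegistrationOption(
        maximum_correspondence_distance=VOXEL * 1.5))

# Local refinement with point-to-plane ICP, seeded by the coarse result.
refined = o3d.pipelines.registration.registration_icp(
    source_down, target_down, VOXEL * 1.5, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

print("Estimated transformation:\n", refined.transformation)
```

In the paper's workflow, the resulting transformation would be used to merge the depth-camera and LiDAR maps into a single digital twin before streaming it to the HoloLens 2; that downstream step is not shown here.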

Data Availability Statement

All data, models, or code generated or used during the study are available from the corresponding author by request.

Acknowledgments

This material is supported by the National Science Foundation (NSF) under Grant No. 2033592 and by the National Institute of Standards and Technology (NIST) under Grant No. 70NANB21H045. Any opinions, findings, conclusions, or recommendations expressed in this article are those of the authors and do not reflect the views of the NSF or NIST.

Information & Authors

Published In

Journal of Computing in Civil Engineering
Volume 36, Issue 6, November 2022

History

Received: Feb 2, 2022
Accepted: Jun 9, 2022
Published online: Sep 7, 2022
Published in print: Nov 1, 2022
Discussion open until: Feb 7, 2023

Authors

Affiliations

Fang Xu, S.M.ASCE
Ph.D. Student, Informatics, Cobots, and Intelligent Construction (ICIC) Lab, Dept. of Civil and Coastal Engineering, Univ. of Florida, Gainesville, FL 32611. Email: [email protected]
Pengxiang Xia, S.M.ASCE
Ph.D. Student, Informatics, Cobots, and Intelligent Construction (ICIC) Lab, Dept. of Civil and Coastal Engineering, Univ. of Florida, Gainesville, FL 32611. Email: [email protected]
Ph.D. Student, Informatics, Cobots, and Intelligent Construction (ICIC) Lab, Dept. of Civil and Coastal Engineering, Univ. of Florida, Gainesville, FL 32611. ORCID: https://orcid.org/0000-0003-2594-3905. Email: [email protected]
Associate Professor, Informatics, Cobots, and Intelligent Construction (ICIC) Lab, Dept. of Civil and Coastal Engineering, Univ. of Florida, Gainesville, FL 32611 (corresponding author). ORCID: https://orcid.org/0000-0002-0481-4875. Email: [email protected]

Cited by

  • User Experience and Workload Evaluation in Robot-Assisted Virtual Reality Welding Training, Construction Research Congress 2024, 10.1061/9780784485293.011, (99-108), (2024).
  • Adaptive Scanning for Improved Stacked Object Detection with RGB and LiDAR, Construction Research Congress 2024, 10.1061/9780784485262.113, (1107-1116), (2024).
  • Augmented Telepresence: Enhancing Robot Arm Control with Mixed Reality for Dexterous Manipulation, Construction Research Congress 2024, 10.1061/9780784485262.074, (727-738), (2024).
  • Indoor Navigation Systems via Augmented Reality and Reality Capture: From Exocentric to Egocentric Spatial Perspective, Computing in Civil Engineering 2023, 10.1061/9780784485224.049, (404-411), (2024).
  • Stacked Object Clustering with Adaptive Scanning and Density Centralized Voting, Computing in Civil Engineering 2023, 10.1061/9780784485224.039, (317-325), (2024).
  • Pose Graph Relocalization with Deep Object Detection and BIM-Supported Object Landmark Dictionary, Journal of Computing in Civil Engineering, 10.1061/JCCEE5.CPENG-5301, 37, 5, (2023).
