Chapter
Mar 18, 2024

Construction Scene Segmentation Using 3D Point Clouds: A Dataset and Challenges

Publication: Construction Research Congress 2024

ABSTRACT

To facilitate process tracking tasks such as inspection reporting and progress monitoring, the AEC industry has adopted as-built 3D models reconstructed with 3D scanners during or after construction. Because converting a point cloud into a semantically rich model, such as BIM, is laborious, researchers are attempting to automate this process via machine learning, applying 3D semantic segmentation and parametric modeling. However, no publicly accessible 3D datasets target construction sites, which are unstructured and cluttered scenes, and this gap remains a barrier to the development of construction scene segmentation. To this end, this paper aims to generate a 3D construction dataset that can serve as ground truth for machine learning models and to suggest foundational processing for general scene segmentation on construction datasets. In addition, we identify and discuss several challenges that construction sites pose for 3D semantic segmentation.
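
As an illustration of the kind of ground truth such a dataset provides, the sketch below shows one plausible way a labeled construction scan could be stored and evaluated: each scan as an (N, 7) array of XYZ coordinates, RGB color, and a per-point class index, scored with per-class intersection-over-union (IoU), a standard metric for 3D semantic segmentation. The file format, class list, and helper names here are assumptions for illustration only, not the dataset format or label set used in the paper.

# Minimal sketch (not from the paper): a hypothetical labeled construction scan
# stored as an (N, 7) array of x, y, z, r, g, b, label, evaluated with per-class IoU.
import numpy as np

# Hypothetical class list for a construction scene; the paper's label set may differ.
CLASSES = ["floor", "wall", "column", "pipe", "scaffold", "equipment", "clutter"]

def load_scan(path: str) -> tuple[np.ndarray, np.ndarray]:
    """Load one scan saved as a .npy array of shape (N, 7): x, y, z, r, g, b, label."""
    data = np.load(path)
    points = data[:, :6].astype(np.float32)   # XYZ coordinates + RGB features
    labels = data[:, 6].astype(np.int64)      # ground-truth class index per point
    return points, labels

def per_class_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> np.ndarray:
    """Intersection-over-union for each class; NaN where a class never appears."""
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:
            ious[c] = inter / union
    return ious

if __name__ == "__main__":
    # Toy example with synthetic labels; replace with a real scan and model output.
    rng = np.random.default_rng(0)
    gt = rng.integers(0, len(CLASSES), size=10_000)
    noise = rng.integers(0, len(CLASSES), size=10_000)
    pred = np.where(rng.random(10_000) < 0.8, gt, noise)  # ~80% correct predictions
    ious = per_class_iou(pred, gt, len(CLASSES))
    for name, iou in zip(CLASSES, ious):
        print(f"{name:>10s}: {iou:.3f}")
    print(f"      mIoU: {np.nanmean(ious):.3f}")

Per-class IoU (rather than overall point accuracy) is the usual choice for scenes like these, where a few large categories such as floors and walls can dominate the point counts of smaller ones.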

Published In

Construction Research Congress 2024
Pages: 378 - 385

History

Published online: Mar 18, 2024

Authors

Affiliations

Seongyong Kim [email protected]
1. Ph.D. Student, School of Civil and Environmental Engineering, Georgia Institute of Technology, Atlanta, GA. ORCID: https://orcid.org/0000-0002-0774-6791. Email: [email protected]
2. Ph.D. Student, School of Civil and Environmental Engineering, Georgia Institute of Technology, Atlanta, GA. Email: [email protected]
Yong K. Cho, Ph.D. [email protected]
3. Professor, School of Civil and Environmental Engineering, Georgia Institute of Technology, Atlanta, GA. Email: [email protected]
