Technical Papers
Jun 6, 2024

TransWallNet: High-Performance Semantic Segmentation of Large-Scale and Multifeatured Point Clouds of Building Gables

Publication: Journal of Construction Engineering and Management
Volume 150, Issue 8

Abstract

Intelligent recognition of bulges, windows, and other features in building gable point clouds is a prerequisite and critical step for automated spray-painting in construction. Gable point cloud data are characterized by large scenes, orthogonal structures, color degradation, and feature imbalance. To address these characteristics, this paper proposes TransWallNet, an attention-based point cloud semantic segmentation model. To alleviate the computational load of large scenes, the model employs random sampling. To exploit the orthogonal geometry of gables, it innovatively queries neighbors using the Chebyshev distance and aggregates local point cloud information with an attention mechanism. The model can therefore identify the various features from positional information alone, removing the dependence on degraded color features. The combination of local feature aggregation and a global attention module attends to both local point cloud details and their contextual relationships, enabling accurate segmentation of the gable elements. On a building facade data set, our approach achieved the highest macroaverage accuracy and macroaverage F1-score, exceeding other leading methods by 9.81% and 4.55%, respectively. This research provides high-quality environmental information and perception methods for the development of gable spray-painting robots.
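
For readers who want the abstract's central geometric idea made concrete, the sketch below shows a brute-force Chebyshev-distance (L-infinity) k-nearest-neighbor query in NumPy. It is an illustration only: the function name chebyshev_knn, the array shapes, and the pairing with random subsampling are assumptions for this sketch, not the paper's implementation.

    import numpy as np

    def chebyshev_knn(points: np.ndarray, k: int) -> np.ndarray:
        """For each point, return the indices of its k nearest neighbors
        under the Chebyshev (L-infinity) metric. points has shape (N, 3)."""
        # Pairwise Chebyshev distance: the maximum coordinate-wise absolute
        # difference between every pair of points, giving an (N, N) matrix.
        diff = np.abs(points[:, None, :] - points[None, :, :])  # (N, N, 3)
        dist = diff.max(axis=-1)                                # (N, N)
        # Each point is its own nearest neighbor (distance 0), matching the
        # usual convention in point cloud feature aggregation.
        return np.argsort(dist, axis=-1)[:, :k]                 # (N, k)

    # Toy usage: randomly subsample a cloud (as the abstract's random
    # sampling step suggests), then query 16 neighbors per point.
    rng = np.random.default_rng(0)
    cloud = rng.random((4096, 3), dtype=np.float32)
    sample = cloud[rng.choice(len(cloud), 512, replace=False)]
    idx = chebyshev_knn(sample, k=16)
    print(idx.shape)  # (512, 16)

Under the Chebyshev metric the neighborhood of a point is an axis-aligned cube rather than the sphere produced by a Euclidean query, which matches the orthogonal, wall-aligned geometry of the gables described in the abstract.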

Data Availability Statement

Some or all data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request. The archived version of the code described in this manuscript can be accessed through Code Ocean (https://codeocean.com/capsule/4269200/tree) and GitHub (https://github.com/Johnsonary/code).

Acknowledgments

This research was supported by the National Natural Science Foundation of China under Grant No. 5216050113.

Information & Authors

Published In

Journal of Construction Engineering and Management
Volume 150, Issue 8, August 2024

History

Received: Nov 28, 2023
Accepted: Mar 7, 2024
Published online: Jun 6, 2024
Published in print: Aug 1, 2024
Discussion open until: Nov 6, 2024

Authors

Affiliations

Assistant Professor, School of Mechanical Engineering, Guangxi Univ., Nanning 530004, China. Email: [email protected]
Graduate Student, School of Mechanical Engineering, Guangxi Univ., Nanning 530004, China (corresponding author). ORCID: https://orcid.org/0009-0003-0937-8819. Email: [email protected]
Director, Zanecon Technology (Shenzhen) Company Limited, Hangcheng Ave., Shenzhen 518100, China. Email: [email protected]
Xiaoping Liao, Professor, School of Mechanical Engineering, Guangxi Univ., Nanning 530004, China. Email: [email protected]
Assistant Professor, Dept. of Mechanical and Marine Engineering, Beibu Gulf Univ., Qinzhou 535011, China. Email: [email protected]
Yunlong Zhao, Graduate Student, School of Mechanical Engineering, Guangxi Univ., Nanning 530004, China. Email: [email protected]
