
Requirements for Parametric Design of Physics-Based Synthetic Data Generation for Learning and Inference of Defect Conditions

Publication: Construction Research Congress 2024

ABSTRACT

Artificial Intelligence (AI)-driven defect detection has demonstrated promise for improving quality assurance and control, as well as condition assessment, in the built environment. However, training defect detection models requires large amounts of reality capture data, and labeling such data is expensive. Moreover, the collected data rarely cover all defect conditions. Synthetic data, most recently generated from Building Information Models (BIM), has accelerated model development for learning defect features. Nevertheless, few studies have focused on characterizing defects to classify their severity, which is crucial for condition assessment. To that end, this study explores the requirements for generating synthetic defect data. Parametric, physics-based modeling approaches are examined, and the underlying geometric properties of the generated data are used to determine the condition of each defect. The feasibility of synthetic defect data is validated through a case study of crack segmentation using the transformer-based model SegFormer. Examples show how physics-based rendering can photo-realistically produce scenarios with varying defect geometry, appearance, and viewpoints. The generated synthetic crack datasets successfully train the SegFormer model and yield promising predictions on real crack images.
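The abstract's emphasis on controlling the geometric characteristics of synthetic defects can be illustrated with a small, hypothetical sketch (not the authors' implementation, which relies on physics-based rendering): a parametric random walk rasterized into a binary crack mask, where crack length, tortuosity, and width are explicit parameters of the generator.

```python
# Hypothetical sketch: parametrically generating a binary crack mask as a
# rasterized random-walk polyline. The parameters mirror the kinds of
# geometric controls (length, tortuosity, width) the paper attributes to
# synthetic defect data; the function name and defaults are illustrative.
import numpy as np

def generate_crack_mask(size=256, n_steps=200, step=2.0,
                        wander=0.35, width=2, seed=0):
    """Rasterize a random-walk crack path into a size x size binary mask.

    n_steps : number of walk segments (controls crack length)
    step    : pixel distance advanced per segment
    wander  : std-dev of heading change per step (controls tortuosity)
    width   : half-width of the painted crack, in pixels
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros((size, size), dtype=np.uint8)
    x, y = size / 2, size / 2            # start the walk at the center
    heading = rng.uniform(0, 2 * np.pi)  # random initial direction
    for _ in range(n_steps):
        heading += rng.normal(0.0, wander)  # jitter the direction
        x += step * np.cos(heading)
        y += step * np.sin(heading)
        xi, yi = int(round(x)), int(round(y))
        if not (0 <= xi < size and 0 <= yi < size):
            break                        # walked off the canvas
        # paint a small square patch around the current point
        x0, x1 = max(xi - width, 0), min(xi + width + 1, size)
        y0, y1 = max(yi - width, 0), min(yi + width + 1, size)
        mask[y0:y1, x0:x1] = 1
    return mask

mask = generate_crack_mask()
```

Because the geometry is generated rather than annotated, the mask doubles as a pixel-perfect segmentation label, which is the core labeling advantage of synthetic data that the abstract describes.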


REFERENCES

Artus, M., and Koch, C. (2022). “Object-Oriented Damage Information Modeling Concepts and Implementation for Bridge Inspection.” J. Comput. Civ. Eng. 36(6), 04022029.
Anil, E. B., Akinci, B., Kurc, O., and Garrett, J. H. (2016). “Building-Information-Modeling–Based Earthquake Damage Assessment for Reinforced Concrete Walls.” J. Comput. Civ. Eng. 30(4), 04015076.
Hamdan, A. H., Taraben, J., Helmrich, M., Mansperger, T., Morgenthal, G., and Scherer, R. J. (2021). “A semantic modeling approach for the automated detection and interpretation of structural damage.” Autom. Constr. 128, 103739.
Hinterstoisser, S., Lepetit, V., Wohlhart, P., and Konolige, K. (2018). “On pre-trained image features and synthetic images for deep learning.” Proc., ECCV 2018.
Hong, Y., Park, S., Kim, H., and Kim, H. (2021). “Synthetic data generation using building information models.” Autom. Constr. 130, 103871.
Hoskere, V., Narazaki, Y., and Spencer, B. F., Jr. (2022). “Physics-Based Graphics Models in 3D Synthetic Environments as Autonomous Vision-Based Inspection Testbeds.” Sensors, 22(2), 532.
Hsu, S. H., Chang, T. W., and Chang, C. M. (2022). “Impacts of label quality on performance of steel fatigue crack recognition using deep learning-based image segmentation.” Smart Struct. Syst., 29(1), 207–220.
Hsu, S. H., Hung, H. T., Lin, Y. Q., and Chang, C. M. (2023). “Defect inspection of indoor components in buildings using deep learning object detection and augmented reality.” Earthquake Eng. Eng. Vibr., 1–14.
Isailović, D., Stojanovic, V., Trapp, M., Richter, R., Hajdin, R., and Döllner, J. (2020). “Bridge damage: Detection, IFC-based semantic enrichment and visualization.” Autom. Constr. 112, 103088.
Kulkarni, S., Singh, S., Balakrishnan, D., Sharma, S., Devunuri, S., and Korlapati, S. C. R. (2023). “CrackSeg9k: a collection and benchmark for crack segmentation datasets and frameworks.” Proc., ECCV 2022 Workshops, 179–195.
Lee, J. G., Hwang, J., Chi, S., and Seo, J. (2022). “Synthetic Image Dataset Development for Vision-Based Construction Equipment Detection.” J. Comput. Civ. Eng. 36(5), 04022020.
Lin, J. J., Ibrahim, A., Sarwade, S., and Golparvar-Fard, M. (2021). “Bridge Inspection with Aerial Robots: Automating the Entire Pipeline of Visual Data Capture, 3D Mapping, Defect Detection, Analysis, and Reporting.” J. Comput. Civ. Eng. 35(2), 04020064.
Lin, T. Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. (2014). “Microsoft COCO: Common Objects in Context.” Proc., ECCV 2014, Zurich, Switzerland, 740–755.
Liu, Y., Yao, J., Lu, X., Xie, R., and Li, L. (2019). “DeepCrack: A deep hierarchical feature learning architecture for crack segmentation.” Neurocomputing, 338, 139–153.
Ma, J. W., Czerniawski, T., and Leite, F. (2020). “Semantic segmentation of point clouds of building interiors with deep learning: Augmenting training datasets with synthetic BIM-based point clouds.” Autom. Constr. 113, 103144.
Ma, L., Sacks, R., and Zeibak-Shini, R. (2015). “Information modeling of earthquake-damaged reinforced concrete structures.” Adv. Eng. Inf. 29(3), 396–407.
Núñez-Morales, J. D., Hsu, S. H., and Golparvar-Fard, M. (2023). “Synthetic Image Generation for Training 2D Segmentation Models at Scale for Computer Vision Progress Monitoring in Construction.” Proc., i3CE 2023 Conference, ASCE, Corvallis, OR, Accepted.
Shi, Y., Cui, L., Qi, Z., Meng, F., and Chen, Z. (2016). “Automatic Road Crack Detection Using Random Structured Forests.” IEEE Transactions on Intelligent Transportation Systems, 17(12), 3434–3445.
Tan, Y., Li, G., Cai, R., Ma, J., and Wang, M. (2022). “Mapping and modelling defect data from UAV captured images to BIM for building external wall inspection.” Autom. Constr. 139, 104284.
Wei, Y., and Akinci, B. (2021). “Synthetic Image Data Generation for Semantic Understanding in Everchanging Scenes Using BIM and Unreal Engine.” Proc., Computing in Civil Engineering 2021, ASCE, Orlando, FL, 934–941.
Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., and Rush, A. M. (2020). “Transformers: State-of-the-Art Natural Language Processing.” Proc., the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 38–45.
Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J. M., and Luo, P. (2021). “SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers.” Advances in Neural Information Processing Systems, 34, 12077–12090.
Yamane, T., Chun, P. J., and Honda, R. (2022). “Detecting and localising damage based on image recognition and structure from motion, and reflecting it in a 3D bridge model.” Struct. Infrastruct. Eng., 1–13.
Yang, F., Zhang, L., Yu, S., Prokhorov, D., Mei, X., and Ling, H. (2019). “Feature Pyramid and Hierarchical Boosting Network for Pavement Crack Detection.” IEEE Transactions on Intelligent Transportation Systems, 21(4), 1525–1535.
Yang, Y. Q., Guo, Y. X., Xiong, J. Y., Liu, Y., Pan, H., Wang, P. S., Tong, X., and Guo, B. (2023). “Swin3D: A Pretrained Transformer Backbone for 3D Indoor Scene Understanding.” arXiv preprint.
Zhou, B., Zhao, H., Puig, X., Fidler, S., Barriuso, A., and Torralba, A. (2017). “Scene Parsing Through ADE20K Dataset.” Proc., the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 633–641.

Information & Authors

Published In

Construction Research Congress 2024
Pages: 436–445

History

Published online: Mar 18, 2024


Authors

Affiliations

Shun-Hsiang Hsu, S.M.ASCE
Ph.D. Student, Dept. of Civil and Environmental Engineering, Univ. of Illinois Urbana-Champaign
Mani Golparvar-Fard, M.ASCE
Professor, Dept. of Civil and Environmental Engineering, Univ. of Illinois Urbana-Champaign, Champaign, IL
