Chapter
May 24, 2022

Synthetic Image Data Generation for Semantic Understanding in Everchanging Scenes Using BIM and Unreal Engine

Publication: Computing in Civil Engineering 2021

ABSTRACT

Scene understanding, such as object recognition and semantic segmentation on images, plays an important role in many information management workflows, such as progress monitoring and facility management. Current studies on architecture/engineering/construction (AEC) scene understanding often focus on visual data captured for a particular building at project closeout, which has two primary limitations: (1) a scene understanding model trained on static images collected at project closeout is often not useful for understanding an ever-changing scene, such as a construction site, where facility components can be in intermediate and incomplete states; and (2) many facility components are occluded at project closeout, which makes labeling or detecting them challenging. By leveraging the as-designed information present in building information models (BIMs), this paper proposes an approach that uses 4D BIM and Unreal Engine to generate synthetic data for training semantic understanding models that reflect changes in site conditions. The paper makes two primary contributions: (1) the proposed workflow addresses changing scenes by generating synthetic images with ground-truth semantic segmentations for any stage of construction based on given schedule information; and (2) the proposed method reduces labeling effort by utilizing the semantically rich as-designed information that exists in a BIM. The proposed workflow was tested on an academic building to evaluate its ability to create a useful synthetic data set, using UniFormat (2010) as the semantic taxonomy. The experiment results showed that the proposed data augmentation can improve scene understanding for images captured in changing scenes.
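The schedule-driven idea described above — deciding which BIM components exist at a given construction date and deriving their segmentation labels from BIM semantics — can be sketched roughly as follows. This is a minimal illustration only: the element fields, the UniFormat-prefix-to-label mapping, and all names here are hypothetical assumptions, not the paper's actual data model or Unreal Engine pipeline.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class BimElement:
    name: str
    uniformat_code: str  # e.g., "B2010" (exterior walls); codes are illustrative
    start: date          # scheduled construction start (from the 4D schedule)
    finish: date         # scheduled construction finish

# Hypothetical mapping from UniFormat code prefixes to segmentation label IDs.
LABELS = {"A10": 1, "B10": 2, "B20": 3, "C10": 4, "D20": 5}

def elements_at(elements, query_date):
    """Return (name, label_id) pairs for components whose construction has
    started by query_date; unbuilt components are excluded so a rendered
    scene would reflect the intermediate site state on that date."""
    visible = []
    for e in elements:
        if e.start <= query_date:
            label = LABELS.get(e.uniformat_code[:3], 0)  # 0 = unclassified
            visible.append((e.name, label))
    return visible

model = [
    BimElement("slab-01", "B1010", date(2021, 3, 1), date(2021, 4, 1)),
    BimElement("wall-12", "B2010", date(2021, 5, 1), date(2021, 6, 1)),
]
print(elements_at(model, date(2021, 4, 15)))  # only the slab has started
```

In the actual workflow, the surviving elements and their labels would drive rendering of the synthetic image and its ground-truth segmentation mask in Unreal Engine; this sketch covers only the schedule-filtering and label-assignment logic.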


REFERENCES

Arashpour, M., Ngo, T., and Li, H. (2021). “Scene understanding in construction and buildings using image processing methods: A comprehensive review and a case study.” Journal of Building Engineering, Elsevier Ltd.
Chaiyasarn, K., Kim, T.-K., Viola, F., Cipolla, R., and Soga, K. (2016). “Distortion-Free Image Mosaicing for Tunnel Inspection Based on Robust Cylindrical Surface Estimation through Structure from Motion.” Journal of Computing in Civil Engineering, 30(3), 04015045.
Chen, L.-C., Papandreou, G., Schroff, F., and Adam, H. (2017). “Rethinking Atrous Convolution for Semantic Image Segmentation.” arXiv preprint arXiv:1706.05587.
Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., and Schiele, B. (2016). “The Cityscapes Dataset for Semantic Urban Scene Understanding.” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 3213–3223.
CSC CSI. (2010). UniFormat, A Uniform Classification of Construction Systems and Assemblies. 204.
Czerniawski, T., and Leite, F. (2018). “3DFacilities: annotated 3D reconstructions of building facilities.” Workshop of the European Group for Intelligent Computing in Engineering, Springer, Cham, 186–200.
Handa, A., Patraucean, V., Badrinarayanan, V., Stent, S., and Cipolla, R. (2016). “Understanding Real World Indoor Scenes with Synthetic Data.” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 4077–4085.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). “Deep residual learning for image recognition.” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, 770–778.
Kim, D., Liu, M., Lee, S. H., and Kamat, V. R. (2019). “Remote proximity monitoring between mobile construction resources using camera-mounted UAVs.” Automation in Construction, 99, 168–182.
Li, W., Saeedi, S., McCormac, J., Clark, R., Tzoumanikas, D., Ye, Q., Huang, Y., Tang, R., and Leutenegger, S. (2019). “Interiornet: Mega-scale multi-sensor photo-realistic indoor scenes dataset.” British Machine Vision Conference 2018, BMVC 2018.
Liu, C.-W., Wu, T.-H., Tsai, M.-H., and Kang, S.-C. (2018). “Image-based semantic construction reconstruction.” Automation in Construction, Elsevier, 90, 67–78.
Neuhausen, M., Herbers, P., and König, M. (2020). “Using synthetic data to improve and evaluate the tracking performance of construction workers on site.” Applied Sciences (Switzerland), MDPI AG, 10(14), 4948.
Roberts, M., and Paczan, N. (2020). “Hypersim: A Photorealistic Synthetic Dataset for Holistic Indoor Scene Understanding.”
Sutjaritvorakul, T., Vierling, A., and Berns, K. (2020). “Data-Driven Worker Detection from Load-View Crane Camera.” Proceedings of the 37th International Symposium on Automation and Robotics in Construction (ISARC).
Towns, J., Cockerill, T., Dahan, M., Foster, I., Gaither, K., Grimshaw, A., Hazlewood, V., Lathrop, S., Lifka, D., Peterson, G. D., Roskies, R., Scott, J. R., and Wilkins-Diehr, N. (2014). “XSEDE: Accelerating scientific discovery.” Computing in Science and Engineering, IEEE Computer Society, 16(5), 62–74.
Tremblay, J., Prakash, A., Acuna, D., Brophy, M., Jampani, V., Anil, C., To, T., Cameracci, E., Boochoon, S., and Birchfield, S. (2018). “Training deep networks with synthetic data: bridging the reality gap by domain randomization.”
Wang, W., Zhu, D., Wang, X., Hu, Y., Qiu, Y., Wang, C., Hu, Y., Kapoor, A., and Scherer, S. (2020). “TartanAir: A Dataset to Push the Limits of Visual SLAM.” IEEE International Conference on Intelligent Robots and Systems, Institute of Electrical and Electronics Engineers Inc., 4909–4916.
Wei, Y., Kasireddy, V., and Akinci, B. (2018). “3D imaging in construction and infrastructure management: Technological assessment and future research directions.” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 37–60.
Wu, Y., Wu, Y., Gkioxari, G., and Tian, Y. (2018). “Building generalizable agents with a realistic and rich 3D environment.”
Yang, J., Park, M. W., Vela, P. A., and Golparvar-Fard, M. (2015). “Construction performance monitoring via still images, time-lapse photos, and video streams: Now, tomorrow, and the future.” Advanced Engineering Informatics, Elsevier, 29(2), 211–224.
Zhou, B., Zhao, H., Puig, X., Fidler, S., Barriuso, A., and Torralba, A. (2017). “Scene parsing through ADE20K dataset.” Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 5122–5130.

Published In

Computing in Civil Engineering 2021
Pages: 934–941



Authors

Affiliations

1Dept. of Civil and Environmental Engineering, Carnegie Mellon Univ., Pittsburgh, PA. Email: [email protected]
Burcu Akinci [email protected]
2Dept. of Civil and Environmental Engineering, Carnegie Mellon Univ., Pittsburgh, PA. Email: [email protected]
