Toward Intelligent Agents to Detect Work Pieces and Processes in Modular Construction: An Approach to Generate Synthetic Training Data

Publication: Construction Research Congress 2022

ABSTRACT

Modular construction has emerged as an alternative to traditional construction processes, reducing environmental impact and construction waste and easing space constraints on highly dense urban construction sites. Furthermore, because modules are prefabricated in a controlled environment, modular construction offers greater opportunities for automation and optimization than traditional construction. However, owing to the one-of-a-kind nature of construction projects, automation in construction remains in its infancy compared with other manufacturing industries. Meanwhile, recent advances in technologies such as computer vision and deep learning provide opportunities to train machine intelligence to solve problems that were previously out of reach. In this study, we propose an approach to automatically generate high-resolution synthetic training data for scene understanding in the modular construction context. Evaluation of the approach in testbed factory settings shows that we can systematically capture and label AEC components such as walls and doors in RGB-D images, producing synthetic datasets for supervised learning in modular construction. The proposed method offers a mechanism to supply the necessary but currently missing large-scale datasets for training scene-understanding models in modular construction factories as modular projects and their corresponding workpieces change.
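
To make the data-generation idea concrete, the minimal Python sketch below illustrates why synthetic rendering yields labeled samples essentially for free: because the 3D geometry and the semantics of every workpiece are known a priori, each rendered RGB-D frame can be emitted together with a pixel-accurate label mask. This is an illustrative toy, not the authors' pipeline; the scene layout, class IDs, and colors are assumptions, and a real system would draw geometry and semantics from a BIM model or a game-engine scene.

import numpy as np

H, W = 480, 640
CLASS_IDS = {"background": 0, "wall": 1, "door": 2}   # illustrative label map
COLORS = {"wall": (200, 200, 200), "door": (150, 90, 40)}

# Toy scene: fronto-parallel rectangles given as
# (class, u_min, u_max, v_min, v_max, depth_m). In a real pipeline this
# geometry and its semantics would come from a BIM model, so the labels
# are known by construction rather than annotated by hand.
scene = [
    ("wall", 0, 640, 0, 480, 5.0),      # wall panel filling the view
    ("door", 240, 400, 120, 480, 4.9),  # door slightly in front of the wall
]

rgb = np.zeros((H, W, 3), dtype=np.uint8)
depth = np.full((H, W), np.inf, dtype=np.float32)
labels = np.zeros((H, W), dtype=np.uint8)

for cls, u0, u1, v0, v1, z in scene:
    patch = depth[v0:v1, u0:u1]
    nearer = z < patch                  # z-buffer test: keep the closest surface
    patch[nearer] = z                   # slices are views, so depth updates in place
    rgb[v0:v1, u0:u1][nearer] = COLORS[cls]
    labels[v0:v1, u0:u1][nearer] = CLASS_IDS[cls]

# (rgb, depth, labels) is one perfectly labeled RGB-D training sample.
print(rgb.shape, float(depth.min()), np.unique(labels))

Scaling such a sketch into a dataset amounts to sampling many camera poses and scene configurations and saving each (rgb, depth, labels) triple as a supervised training example.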

Information & Authors

Published In

Construction Research Congress 2022
Pages: 802–811

History

Published online: Mar 7, 2022

Authors

Keundeok Park, Ph.D. Student, Dept. of Civil and Urban Engineering, New York Univ., Brooklyn, NY. Email: [email protected]

Semiha Ergan, A.M.ASCE, Associate Professor, Dept. of Civil and Urban Engineering, New York Univ., Brooklyn, NY. ORCID: https://orcid.org/0000-0003-0496-7019. Email: [email protected]

Cited by

  • Automated Assembly Progress Monitoring in Modular Construction Factories Using Computer Vision-Based Instance Segmentation, Computing in Civil Engineering 2023, 10.1061/9780784485224.036, (290-297), (2024).
