Abstract

Tower cranes are widely used in modern construction projects because of their unique ability to transport materials flexibly within complex workspaces. Although deep learning vision-based methods have proven effective for construction site monitoring, the shortage of realistic training images and the inefficiency of producing them have been obstacles to effective tower crane monitoring. To address this issue, this paper presents a database-free approach for timely productivity monitoring of tower crane operations. The authors construct a research framework that uses a synthetic model to efficiently generate training images for deep learning–powered object detection. In addition, an enhanced tracking algorithm combined with unsupervised clustering-driven postprocessing provides stable performance for tracking resources. Further, logical ontology-based activity recognition built on domain-specific knowledge estimates the moment-to-moment operation of the tower crane. Curtainwall installation was selected as the case study because it represents the dynamic and variable tower crane operations to which traditional deep learning technologies have been difficult to apply. The results showed that the proposed framework generated 300,000 training images in 1 h, about 500 times faster than manual annotation, and that the enhanced tracker performed well even when detection results were unsatisfactory. The activity analysis also achieved an acceptable accuracy of 93.58%. These results demonstrate that the proposed framework offers remarkable data generation speed with little manual input and strong operational-level activity analysis. The findings can automate the monitoring of tower crane operations and of curtainwall installation productivity, supporting construction project decision making. Furthermore, such efficient methods are less time-consuming and can handle background scene changes effortlessly, enabling customized models to be built in less time and at lower cost, thus narrowing the practical gap in applying vision-based technologies to visually dynamic construction sites.
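The tracking-by-detection step summarized above can be illustrated with a minimal sketch of the IoU-tracker idea (Bochinski et al. 2017) on which enhanced trackers of this kind build: detections in consecutive frames are greedily linked whenever their bounding boxes overlap enough. The function names, box format, and the `sigma_iou` threshold below are illustrative assumptions, not the paper's implementation.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def track(frames, sigma_iou=0.5):
    """Greedy frame-to-frame association: extend each active track with its
    best-overlapping detection; unmatched detections open new tracks."""
    tracks, active = [], []
    for dets in frames:
        dets = list(dets)
        next_active = []
        for t in active:
            best = max(dets, key=lambda d: iou(t[-1], d), default=None)
            if best is not None and iou(t[-1], best) >= sigma_iou:
                t.append(best)          # detection continues this track
                dets.remove(best)
                next_active.append(t)
            else:
                tracks.append(t)        # no match: the track ends here
        for d in dets:                  # leftover detections start new tracks
            next_active.append([d])
        active = next_active
    return tracks + active
```

Such per-frame greedy linking is fast but fragile under missed detections, which is why the paper pairs the tracker with unsupervised clustering-driven postprocessing to stabilize the resulting tracklets.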

Data Availability Statement

Except for the data, models, and code obtained from external sources, the analytical data and results of this study are available from the corresponding author upon request.

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C2003696). This research was also supported by a Seoul National University research grant in 2021. This research was supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0017304, Human Resource Development Program for Industrial Innovation).

Published In

Journal of Computing in Civil Engineering
Volume 37, Issue 4, July 2023

History

Received: Jul 23, 2022
Accepted: Feb 18, 2023
Published online: Apr 25, 2023
Published in print: Jul 1, 2023
Discussion open until: Sep 25, 2023

Authors

Affiliations

Graduate Student, Dept. of Civil and Environmental Engineering, Seoul National Univ., 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea. ORCID: https://orcid.org/0000-0003-0967-9227. Email: [email protected]
Graduate Student, Dept. of Civil and Environmental Engineering, Seoul National Univ., 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea. ORCID: https://orcid.org/0000-0003-0245-6995. Email: [email protected]
Graduate Student, Dept. of Civil and Environmental Engineering, Seoul National Univ., 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea. ORCID: https://orcid.org/0000-0001-7888-3358. Email: [email protected]
Professor, Dept. of Civil and Environmental Engineering, Seoul National Univ., 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea; Adjunct Professor, Institute of Construction and Environmental Engineering (ICEE), 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Republic of Korea (corresponding author). ORCID: https://orcid.org/0000-0002-0409-5268. Email: [email protected]
Professor, Dept. of the Built Environment, National Univ. of Singapore, 4 Architecture Dr., Singapore 117566, Singapore. ORCID: https://orcid.org/0000-0002-9034-2033. Email: [email protected]
Assistant Professor, School of Civil and Environmental Engineering, Nanyang Technological Univ., Nanyang Ave., Singapore 639798, Singapore. ORCID: https://orcid.org/0000-0002-2237-4965. Email: [email protected]
