Technical Papers
Aug 24, 2024

Automatic Vision-Based Dump Truck Productivity Measurement Based on Deep-Learning Illumination Enhancement for Low-Visibility Harsh Construction Environment

Publication: Journal of Construction Engineering and Management
Volume 150, Issue 11

Abstract

Productivity assessment plays a key role in successful earthwork projects and is primarily achieved by monitoring key construction equipment such as excavators and dump trucks. Vision-based methods are widely adopted to analyze the productivity of construction equipment in earthwork projects because they are inexpensive and easy to implement and maintain. However, previous studies on vision-based productivity analysis of earthmoving equipment have predominantly demonstrated effectiveness under favorable conditions: stable lighting, shadow-free areas, and high-visibility environments without significant occlusion by other objects or terrain. Harsh, low-visibility conditions at earthwork construction sites involve large illumination differences and limited visibility, which make it difficult for these existing methods to achieve reliable identification accuracy. To address this problem, this study proposes an automatic vision-based dump truck productivity measurement method that combines a deep learning illumination enhancement algorithm with transfer learning for low-visibility, harsh conditions at earthwork construction sites. The method uses an internet of things (IoT) system equipped with a camera to capture images and a deep learning illumination enhancement algorithm, the trainable deep hybrid network (TDHN), to enhance image quality under low-light conditions at earthwork sites. A deep convolutional neural network image recognition algorithm, ResNet50, is then combined with a transfer learning technique to extract information from the enhanced images. Through the IoT system, the processed information is sent to the earthwork platform to perform productivity analysis and support timely decisions on equipment allocation schemes. To validate the effectiveness of the proposed methodology, it was implemented in a real-time earthwork project.
The results show image recognition accuracies of 99.6%, 95.67%, and 94.77% under normal, low, and extremely low lighting conditions, respectively. Dump truck recognition accuracy increased by 1.10%, 3.62%, and 21.19%, leading to improvements in productivity measurement accuracy of 1.08%, 3.54%, and 20.71% under the respective lighting conditions.
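The TDHN network described in the abstract learns an image-to-image enhancement mapping; its architecture is not reproduced here. As a rough illustration of what illumination enhancement does to pixel intensities, the classical gamma-curve operator below brightens dark pixels much more than bright ones. This is a minimal sketch of the general idea only, not the paper's TDHN method.

```python
def gamma_enhance(pixel, gamma=0.5):
    """Brighten one 8-bit intensity with a gamma curve (gamma < 1 lifts shadows).

    A classical hand-tuned operator shown only to illustrate the role of
    illumination enhancement; the paper's TDHN learns such a mapping instead.
    """
    if not 0 <= pixel <= 255:
        raise ValueError("expected an 8-bit intensity in [0, 255]")
    # Normalize to [0, 1], apply the power curve, rescale to 8 bits.
    return round(255 * (pixel / 255) ** gamma)

# A dark pixel is lifted strongly while a bright one barely changes:
# gamma_enhance(16) -> 64, gamma_enhance(225) -> 240
```

The same curve applied per pixel (or per channel) brightens a whole frame; a learned network such as TDHN instead adapts the mapping to local content, which is why it copes better with the nonuniform lighting of night-time construction sites.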
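Once loaded-truck departures are recognized in the video stream, hauling productivity follows from the standard earthmoving relation of loads times rated capacity over time. The sketch below shows that calculation; the function name and arguments are illustrative, not taken from the paper.

```python
def dump_truck_productivity(load_count, truck_capacity_m3, duration_h):
    """Hourly haul productivity from vision-counted loading cycles.

    load_count        -- full-truck departures detected in the video
    truck_capacity_m3 -- rated heaped capacity of one truck (cubic meters)
    duration_h        -- observation window (hours)
    """
    if duration_h <= 0:
        raise ValueError("observation window must be positive")
    return load_count * truck_capacity_m3 / duration_h  # m^3 per hour

# e.g. 12 detected departures of a 15 m^3 truck over a 4 h shift:
# dump_truck_productivity(12, 15, 4) -> 45.0 m^3/h
```

Because the load count comes from the recognition model, any gain in recognition accuracy under low light carries through almost directly to the productivity figure, which is consistent with the closely matched accuracy and productivity improvements reported above.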


Data Availability Statement

Some or all data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean Government (MEST) (No. NRF 2018R1A5A1025137) and a Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure, and Transport (National Research for Smart Construction Technology, No. RS-2020-KA157089). The authors thank Hyeonsang Son, Jiayang Fan, Heoncheol Jang, and Zehua Zhao from Hanyang University for providing assistance with data collection and processing.


Information & Authors


History

Received: Jun 26, 2023
Accepted: Apr 29, 2024
Published online: Aug 24, 2024
Published in print: Nov 1, 2024
Discussion open until: Jan 24, 2025


Authors

Affiliations

Ph.D. Candidate, Dept. of Civil and Environmental Engineering, Hanyang Univ., Seoul 04763, Republic of Korea. ORCID: https://orcid.org/0000-0002-6688-6217
Research Professor, School of Architectural, Civil, Environment, and Energy Engineering, Kyungpook National Univ., Daegu 41566, Republic of Korea. ORCID: https://orcid.org/0000-0003-3856-9799
Soomin Lee
Ph.D. Candidate, Dept. of Civil and Environmental Engineering, Hanyang Univ., Seoul 04763, Republic of Korea.
Jongwon Seo [email protected]
Professor, Dept. of Civil and Environmental Engineering, Hanyang Univ., Seoul 04763, Republic of Korea (corresponding author). Email: [email protected]
