Abstract

Mixed reality has been envisioned as an interactive and engaging pedagogical tool for providing experiential learning and potentially enhancing the acquisition of technical competencies in construction engineering education. However, to achieve seamless learning interactions and automated learning assessment, mixed reality environments must be intelligent, proactive, and adaptive to students' learning needs. Given the potential of artificial intelligence to foster interactive, assistive, and self-reliant learning environments, and the reported effectiveness of deep learning in other domains, this study explores an approach to developing a smart mixed reality environment for technical skill acquisition in construction engineering education. The study builds on the usability assessment of a previously developed mixed reality environment for learning sensing technologies, such as laser scanners, in the construction industry. Long short-term memory (LSTM) models and hybrid models combining LSTM with convolutional neural networks (CNN) were trained on augmented eye-tracking data to predict students' learning interaction difficulties, cognitive development, and experience levels, using predefined labels obtained from think-aloud protocols and demographic questionnaires administered during laser scanning activities within the mixed reality learning environment. The proposed models recognized interaction difficulty, experience level, and cognitive development with F1 scores of 95.95%, 98.52%, and 99.49%, respectively. The hybrid CNN-LSTM models achieved accuracies at least 20% higher than the LSTM models, but at the cost of longer inference times. The efficacy of the models in detecting the target classes and the potential of the adopted data augmentation techniques for eye-tracking data are also reported. However, as model performance increased with data size, so did computational cost. This study sets a precedent for exploring applications of artificial intelligence in mixed reality learning environments for construction engineering education.
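For readers interested in how such a pipeline might be assembled, the following minimal sketch in Python (TensorFlow/Keras) illustrates a hybrid CNN-LSTM classifier for windowed eye-tracking sequences, together with a simple label-preserving jittering augmentation. It is an illustration only, not the authors' implementation: the window length, feature set, class count, and noise level are assumptions, and the synthetic arrays stand in for real gaze features.

# Illustrative sketch (not the authors' implementation): a hybrid CNN-LSTM
# classifier over windowed eye-tracking sequences, plus jittering augmentation.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 120      # assumed: time steps per sliding window of gaze samples
N_FEATURES = 4    # assumed: e.g., gaze x/y, pupil diameter, fixation flag
N_CLASSES = 2     # assumed: e.g., difficulty vs. no difficulty, one model per task

def jitter(batch: np.ndarray, sigma: float = 0.03) -> np.ndarray:
    """Label-preserving augmentation: add small Gaussian noise to each sample."""
    return (batch + np.random.normal(0.0, sigma, batch.shape)).astype(batch.dtype)

def build_cnn_lstm() -> tf.keras.Model:
    """1-D convolutions extract local gaze patterns; the LSTM models their
    temporal order; a softmax head yields class probabilities."""
    model = models.Sequential([
        layers.Input(shape=(WINDOW, N_FEATURES)),
        layers.Conv1D(64, kernel_size=5, activation="relu", padding="same"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(64, kernel_size=3, activation="relu", padding="same"),
        layers.LSTM(64),
        layers.Dropout(0.5),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Synthetic stand-in data; real inputs would be windowed eye-tracking
    # features labeled via think-aloud protocols and questionnaires.
    x = np.random.rand(256, WINDOW, N_FEATURES).astype("float32")
    y = np.random.randint(0, N_CLASSES, size=256)
    x_aug = np.concatenate([x, jitter(x)])   # augment the training set
    y_aug = np.concatenate([y, y])           # jittering preserves labels
    model = build_cnn_lstm()
    model.fit(x_aug, y_aug, epochs=2, batch_size=32, validation_split=0.2)

In this arrangement, the convolutional layers capture short-range gaze patterns within each window while the LSTM models their temporal order, which is one plausible reading of why the hybrid models outperformed the plain LSTM in the study.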

Data Availability Statement

The data sets generated during this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to acknowledge the National Science Foundation for its support (Grant No. IUSE-1916521). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Information & Authors

Published In

Journal of Computing in Civil Engineering
Volume 37, Issue 4, July 2023

History

Received: Sep 1, 2022
Accepted: Feb 3, 2023
Published online: Mar 24, 2023
Published in print: Jul 1, 2023
Discussion open until: Aug 24, 2023

Authors

Affiliations

Assistant Professor, School of Building Construction, College of Design, Georgia Tech, Atlanta, GA 30332 (corresponding author). ORCID: https://orcid.org/0000-0002-3852-4032. Email: [email protected]
Abiola Akinniyi, S.M.ASCE
Ph.D. Student, Myers-Lawson School of Construction, Virginia Tech, Blacksburg, VA 24060.
Nihar Gonsalves
Ph.D. Candidate, Myers-Lawson School of Construction, Virginia Tech, Blacksburg, VA 24060.
Mohammad Khalid, S.M.ASCE
Ph.D. Student, Myers-Lawson School of Construction, Virginia Tech, Blacksburg, VA 24060. ORCID: https://orcid.org/0000-0001-8668-3022
Abiola Akanmu, Ph.D., M.ASCE
Associate Professor, Construction Engineering and Management, Myers-Lawson School of Construction, Virginia Tech, Blacksburg, VA 24060. ORCID: https://orcid.org/0000-0001-9145-4865
