Technical Papers
Dec 2, 2022

Multiagent Reinforcement Learning for Project-Level Intervention Planning under Multiple Uncertainties

Publication: Journal of Management in Engineering
Volume 39, Issue 2

Abstract

Reinforcement learning (RL) has recently been adopted by infrastructure asset management (IAM) researchers to add flexibility to decision-making on preventive actions under uncertainty. However, this relatively recent line of research has not incorporated sources of uncertainty beyond deterioration patterns, such as hazards, nor has it considered managerial aspects of IAM, such as stakeholders’ utilities. This paper provides a holistic framework that draws on recent developments in IAM systems and microworlds, employs RL model training, and treats deterioration, hazards, and cost fluctuations as the main sources of uncertainty while also incorporating managerial aspects into decision-making. Consistent with existing IAM practice, the framework brings flexibility in the face of uncertainties to the IAM decision-making process. Multiagent RL models based on deep Q networks and actor-critic models are constructed and trained to take intervention actions on the elements of a real bridge in Indiana over its life cycle. Both models yielded higher expected utilities and lower costs than the optimal maintenance, rehabilitation, and reconstruction (MRR) plans obtained by Monte Carlo simulation and heuristic optimization algorithms. The proposed framework can assist decision-making bodies and managers in the IAM domain in making updateable, optimal, and more realistic decisions based on the updated state of various complex uncertainties in a negligible amount of time.
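
To make the setup concrete, the following minimal sketch shows how one agent per bridge element could select among MRR actions with a small deep Q network. It is illustrative only: the action set, state layout, network size, and hyperparameters are assumptions made for exposition, not the models trained in the paper, and the sketch omits refinements such as a target network.

# Illustrative sketch only: one DQN agent per bridge element choosing an MRR action.
# State layout, network size, and hyperparameters are assumptions, not the paper's.
import random
from collections import deque

import numpy as np
import tensorflow as tf

ACTIONS = ["do_nothing", "maintain", "rehabilitate", "reconstruct"]


def build_q_network(state_dim, n_actions):
    """Small fully connected network mapping a condition-state vector to Q-values."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(state_dim,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_actions),  # one Q-value per MRR action
    ])
    model.compile(optimizer="adam", loss="mse")
    return model


class ElementAgent:
    """Agent for a single element (e.g., deck, superstructure, substructure)."""

    def __init__(self, state_dim, gamma=0.97, eps=0.1):
        self.q_net = build_q_network(state_dim, len(ACTIONS))
        self.memory = deque(maxlen=10_000)  # experience replay buffer
        self.gamma = gamma
        self.eps = eps

    def act(self, state):
        """Epsilon-greedy action given the element's observed state vector."""
        if random.random() < self.eps:
            return random.randrange(len(ACTIONS))
        q_values = self.q_net.predict(state[None, :], verbose=0)[0]
        return int(np.argmax(q_values))

    def remember(self, state, action, reward, next_state):
        self.memory.append((state, action, reward, next_state))

    def train_step(self, batch_size=32):
        """One Q-learning update from replayed transitions (no target network here)."""
        if len(self.memory) < batch_size:
            return
        batch = random.sample(self.memory, batch_size)
        states, actions, rewards, next_states = (np.array(x) for x in zip(*batch))
        targets = self.q_net.predict(states, verbose=0)
        next_q = self.q_net.predict(next_states, verbose=0).max(axis=1)
        targets[np.arange(batch_size), actions] = rewards + self.gamma * next_q
        self.q_net.fit(states, targets, epochs=1, verbose=0)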

Practical Applications

Long-term strategic intervention actions are vital for maintaining the safety of degraded infrastructure assets. Traditionally, these strategic plans are derived using optimization methods that maximize stakeholders’ utilities, improve network safety, and minimize the costs and risks associated with catastrophic events. The main drawback, and perhaps the most widely reported challenge, of this approach is its lack of flexibility in the face of the updated state of uncertain phenomena, such as asset condition and volatile costs. Simply put, the rare outcomes of unprecedented events can render optimal strategies useless. This study promotes the application of artificial intelligence-based agents that make updateable decisions given the newly observed state of all uncertain factors. In the case study, the proposed framework reduced costs by up to 7% over the management horizon; on a community scale, this reduction translates into saving millions of dollars of taxpayers’ money. The proposed framework is theoretically well founded and practically applicable, and it could measurably enhance the management of various infrastructure assets. Because it can be tailored to other decision-making processes under uncertainty, it can similarly be applied to other complex managerial problems.
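
As a hypothetical usage note, re-planning in a framework of this kind reduces to one forward pass per element once agents are trained. The element names and the state layout (condition rating, age, hazard index, cost index) below are assumptions for illustration, and trained_agents would hold agents such as the ElementAgent sketched after the abstract.

# Hypothetical usage sketch: updated MRR decisions from newly observed states.
import numpy as np

ACTIONS = ["do_nothing", "maintain", "rehabilitate", "reconstruct"]  # as in the sketch above


def plan_interventions(agents, observations):
    """Return an updated MRR action for every element given its latest state."""
    plan = {}
    for element, state in observations.items():
        action_index = agents[element].act(np.asarray(state, dtype=np.float32))
        plan[element] = ACTIONS[action_index]
    return plan


# Made-up observations for a three-element bridge after a new inspection cycle.
observations = {
    "deck":           [5.0, 22.0, 0.12, 1.08],
    "superstructure": [6.0, 22.0, 0.12, 1.08],
    "substructure":   [4.0, 22.0, 0.12, 1.08],
}
# plan = plan_interventions(trained_agents, observations)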

Data Availability Statement

Some or all data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.

Information & Authors

Information

Published In

Journal of Management in Engineering
Volume 39, Issue 2, March 2023

History

Received: Jan 13, 2022
Accepted: Sep 26, 2022
Published online: Dec 2, 2022
Published in print: Mar 1, 2023
Discussion open until: May 2, 2023

Authors

Affiliations

Vahid Asghari, Ph.D. Candidate, Dept. of Civil and Environmental Engineering, Hong Kong Polytechnic Univ., 181 Chatham Rd. South, Kowloon, Hong Kong. Email: [email protected]
School of Civil and Environmental Engineering, Univ. of Tehran, 16 Azar Ave., P.O. Box 11155-4563, Tehran, Iran. ORCID: https://orcid.org/0000-0002-9713-0543. Email: [email protected]
Associate Professor, Dept. of Civil and Environmental Engineering, Hong Kong Polytechnic Univ., 181 Chatham Rd. South, Kowloon, Hong Kong (corresponding author). ORCID: https://orcid.org/0000-0002-7232-9839. Email: [email protected]
