Technical Papers
Jul 22, 2024

Quantifying the Relative Change in Maintenance Costs due to Delayed Maintenance Actions in Transportation Infrastructure

Publication: Journal of Performance of Constructed Facilities
Volume 38, Issue 5

Abstract

Identifying optimal maintenance policies for transportation infrastructure such as bridges is a challenging task that requires taking into account many factors, including budget availability, resource allocation, and traffic rerouting. In practice, it is difficult to accurately quantify all of the aforementioned factors; accordingly, it is equally difficult to obtain network-scale optimal maintenance policies. This paper presents an approach for evaluating the costs associated with deviations from optimal bridge-level maintenance policies, focusing specifically on delays in maintenance actions. The cost of maintenance delays is evaluated using a reinforcement learning (RL) approach that relies on a probabilistic deterioration model to describe the deterioration of structural components. The RL framework provides estimates of the total expected discounted maintenance cost associated with each maintenance policy over time, allowing maintenance policies in which actions are delayed to be compared against an optimal maintenance policy. The comparisons are performed by probabilistically quantifying the ratio of the expected costs associated with each policy; this ratio represents the trade-off between performing and delaying maintenance actions over time. Moreover, the proposed approach is scalable, making it applicable to bridges with numerous structural elements. An example application of the proposed framework is demonstrated using inspection data from bridges in the province of Quebec, Canada.
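The core quantity in the abstract — the ratio of total expected discounted maintenance costs under a delayed policy versus a timely one — can be illustrated with a toy Markov decision process. The sketch below is not the paper's model: the three condition states, transition probabilities, costs, and discount factor are all invented for illustration. Each policy's expected discounted cost is obtained by policy evaluation, solving the linear system V = c + γPV.

```python
import numpy as np

GAMMA = 0.95  # annual discount factor (illustrative)

# Three hypothetical condition states: 0 = good, 1 = fair, 2 = poor.
# Rows of P_DETERIORATE give yearly transitions when no repair is
# performed; a repair returns the component to "good".
P_DETERIORATE = np.array([[0.8, 0.2, 0.0],
                          [0.0, 0.7, 0.3],
                          [0.0, 0.0, 1.0]])
STATE_COST = np.array([0.0, 1.0, 5.0])     # yearly serviceability cost
REPAIR_COST = np.array([0.0, 10.0, 25.0])  # repair is dearer in worse states
REPAIR_ROW = np.array([1.0, 0.0, 0.0])     # repair -> good

def policy_value(repair_states):
    """Expected total discounted cost from each start state under a
    policy that repairs exactly in `repair_states`, found by solving
    the policy-evaluation system V = c + gamma * P @ V."""
    P = P_DETERIORATE.copy()
    c = STATE_COST.copy()
    for s in repair_states:
        P[s] = REPAIR_ROW
        c[s] = REPAIR_COST[s]
    return np.linalg.solve(np.eye(3) - GAMMA * P, c)

V_timely = policy_value({1, 2})   # repair as soon as condition is "fair"
V_delayed = policy_value({2})     # delay repair until "poor"
ratio = V_delayed[0] / V_timely[0]  # relative cost of delaying, from "good"
```

In this toy setting the ratio exceeds one: delaying repairs defers (and discounts) the repair outlay, but the extra years spent in degraded states and the higher repair cost from the "poor" state outweigh that saving. The paper performs the analogous comparison with an RL framework and a probabilistic deterioration model calibrated on inspection data, rather than this hand-picked chain.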

Data Availability Statement

Some or all data, models, or code used during the study were provided by a third party. Direct requests for these materials may be made to the provider as indicated in the Acknowledgments.

Acknowledgments

This project was funded by the Ministère des Transports du Québec (MTQ), Canada. The authors would like to acknowledge the support of Simon Pedneault in facilitating access to information related to this project.

History

Received: Feb 5, 2024
Accepted: Apr 19, 2024
Published online: Jul 22, 2024
Published in print: Oct 1, 2024
Discussion open until: Dec 22, 2024

Authors

Affiliations

Research Associate, Dept. of Civil, Geologic, and Mining Engineering, Polytechnique Montreal, Montreal, QC, Canada H3T 1J4 (corresponding author). ORCID: https://orcid.org/0000-0002-6963-9350. Email: [email protected]
James-A. Goulet [email protected]
Professor, Dept. of Civil, Geologic, and Mining Engineering, Polytechnique Montreal, Montreal, QC, Canada H3T 1J4. Email: [email protected]
