Reinforcement Learning-Based Signal Control Strategies to Improve Travel Efficiency at Urban Intersection

Publication: International Conference on Transportation and Development 2020

ABSTRACT

Aiming to improve the efficiency of urban intersection control, two signal control strategies are proposed, based on Q-learning (QL) and a deep Q-learning network (DQN), respectively. To overcome the coarse, passive nature of traditional fixed-time intersection control, the QL and DQN algorithms perform intelligent real-time control. An algorithm framework is constructed that takes radar and video detector data as input and produces an optimal intersection control strategy as output. A typical urban intersection is simulated on a traffic simulation platform and the control effect is evaluated. The results show that both proposed intelligent control strategies actively respond to different traffic states, converge within a short training time, and find the optimal control strategy. The QL-based and DQN-based control strategies reduce travel time by more than 20% and stop delay by more than 30%, with the DQN-based strategy outperforming the QL-based one.
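The paper's implementation is not included in this preview. As a rough illustration only, the sketch below shows a minimal tabular Q-learning controller of the kind the abstract describes: the state is built from detector measurements, the action is the signal phase to serve next, and the reward penalizes delay. The state discretization, phase set, reward convention, and all function names here are illustrative assumptions, not the authors' framework.

```python
import random
from collections import defaultdict

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration rate
PHASES = [0, 1, 2, 3]  # hypothetical phase indices, e.g., NS-through, NS-left, EW-through, EW-left

q_table = defaultdict(float)  # maps (state, action) -> estimated long-run return

def discretize(queues, bin_size=5):
    # Bucket raw per-approach queue counts (as a radar/video detector might
    # report them) so the tabular state space stays small.
    return tuple(q // bin_size for q in queues)

def choose_phase(state):
    # Epsilon-greedy selection over the candidate phases.
    if random.random() < EPSILON:
        return random.choice(PHASES)
    return max(PHASES, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    # One-step Q-learning backup: move the estimate toward
    # reward + gamma * max_a' Q(next_state, a').
    best_next = max(q_table[(next_state, a)] for a in PHASES)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])
```

The DQN variant described in the abstract would replace the lookup table with a neural network mapping the detector state to per-phase Q-values, typically trained with experience replay and a target network; the control loop is otherwise the same.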

Information & Authors

Published In

International Conference on Transportation and Development 2020
Pages: 109–118
Editor: Guohui Zhang, Ph.D., University of Hawaii
ISBN (Online): 978-0-7844-8313-8

History

Published online: Aug 31, 2020
Published in print: Aug 31, 2020

Authors

Shunchao Wang [email protected]

Affiliations

1. Duolun Technology Co. Ltd., Nanjing, China. Email: [email protected]
2. School of Transportation, Southeast Univ., Nanjing, China. Email: [email protected]
3. School of Transportation, Southeast Univ., Nanjing, China. Email: [email protected]
