Chapter
Jun 13, 2024

Addressing Urban Traffic Congestion: A Hybrid DQN-Autoencoder Model with HyperOPT Tuning

Publication: International Conference on Transportation and Development 2024

ABSTRACT

In this study, we propose a novel method for managing traffic lights by integrating deep Q-networks (DQN) with auto-encoders. The aim is to enhance traffic fluidity and mitigate congestion in simulated environments. To achieve this, we incorporate the average vehicle velocity as a key metric within the system's observation space. Additionally, we employ the HyperOPT optimization framework and leverage the data-compression capability of auto-encoders to improve decision-making quality. Our approach was evaluated on a two-way, single-intersection network subjected to a vehicle flow rate varying between 100 and 600 vehicles per hour. We compared our hybrid DQN-auto-encoder method against a DQN-only baseline, using average waiting time as the evaluation metric. The results indicate that our model markedly outperformed the baseline, reducing the average waiting time to 212 s (standard deviation 12 s), compared with 340 s for the baseline, while a traditional non-algorithmic approach yielded an average waiting time of about 1,100 s.
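The chapter itself is not reproduced here, but the components the abstract names can be sketched generically. The snippet below is an illustration, not the authors' implementation: the fixed linear "encoder" standing in for a trained auto-encoder, the per-phase Q-values, the reward definition, and all parameter values (latent size 3, discount 0.99) are assumptions chosen only to show how a compressed observation feeds a DQN-style temporal-difference target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained auto-encoder: a fixed linear projection that
# compresses an 8-dimensional intersection observation (e.g., queue lengths
# plus average vehicle velocity per approach) into 3 latent features.
W_enc = rng.normal(size=(3, 8))

def encode(obs):
    """Compress a raw observation into a latent state."""
    return np.tanh(W_enc @ obs)

def td_target(reward, q_next, gamma=0.99):
    """DQN-style temporal-difference target: y = r + gamma * max_a' Q(s', a')."""
    return reward + gamma * np.max(q_next)

obs = rng.uniform(0, 1, size=8)           # raw intersection observation
z = encode(obs)                           # compressed latent state, shape (3,)
q_next = np.array([1.0, 2.5, 0.3])        # next-state Q-values, one per signal phase
y = td_target(reward=-4.0, q_next=q_next) # reward framed as negative waiting time
print(z.shape)       # (3,)
print(round(y, 3))   # -4.0 + 0.99 * 2.5 = -1.525
```

In a full pipeline, a tool such as HyperOPT would then search over quantities like the discount factor and latent size by minimizing the simulated average waiting time, but that outer loop is omitted from this sketch.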



Information & Authors

Published In
International Conference on Transportation and Development 2024
Pages: 739 - 749

History

Published online: Jun 13, 2024


Authors

Affiliations

Anurag Balakrishnan
M.S. Student, Dept. of Mechanical and Aerospace Engineering, California State Univ. Long Beach, Long Beach, CA. Email: [email protected]
Satyam Pathak
M.S. Student, Dept. of Mechanical and Aerospace Engineering, California State Univ. Long Beach, Long Beach, CA. Email: [email protected]
Pedro Herrera
Undergraduate Student, Dept. of Mechanical and Aerospace Engineering, California State Univ. Long Beach, Long Beach, CA. Email: [email protected]
Tairan Liu, Ph.D.
Perception, Actuation, Control, and Network (PACK) Laboratory, Dept. of Mechanical and Aerospace Engineering, California State Univ. Long Beach, Long Beach, CA. Email: [email protected]


