Jun 13, 2024

Dynamically Expanding Capacity of Autonomous Driving with Near-Miss Focused Training Framework

Publication: International Conference on Transportation and Development 2024

ABSTRACT

The long-tail distribution of real driving data poses challenges for training and testing autonomous vehicles (AVs): rare yet crucial safety-critical scenarios appear infrequently, while virtual simulation offers a low-cost and efficient way to generate them. This paper proposes a near-miss focused training framework for AVs. Using the driving-scenario information provided by the simulator's sensors, we design novel reward functions that enable background vehicles (BVs) to generate near-miss scenarios and ensure that gradients exist not only in collision-free scenes but also in collision scenarios. We then leverage the robust adversarial reinforcement learning (RARL) framework to train the AV and BVs simultaneously, gradually enhancing the capabilities of both while generating near-miss scenarios tailored to the AV's current capability level. Results from three testing strategies indicate that the proposed method generates scenarios closer to near-miss, thus enhancing the capabilities of both AVs and BVs throughout training.
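The abstract's two key ingredients, a smooth near-miss reward that keeps a usable gradient even in collision scenes, and RARL-style alternating training of AV and BV, can be illustrated with a deliberately tiny one-dimensional sketch. Everything below (the gap dynamics, the Gaussian reward band, the finite-difference hill-climbing, and all constants) is an illustrative assumption, not the paper's actual reward design or training code:

```python
import math

# Illustrative constants (assumptions, not values from the paper).
NEAR_MISS_GAP = 0.5   # m: centre of the assumed near-miss band
BAND_WIDTH = 0.25     # m: width of the reward peak (assumed)
SAFE_GAP = 1.0        # m: gap the AV tries to maintain (assumed)

def bv_reward(gap: float) -> float:
    """BV reward: Gaussian bump centred on the near-miss gap.

    Differentiable everywhere, including the collision region
    (gap <= 0), unlike a sparse collision bonus."""
    return math.exp(-((gap - NEAR_MISS_GAP) / BAND_WIDTH) ** 2)

def av_reward(gap: float) -> float:
    """AV reward: quadratic penalty for deviating from a safe gap,
    likewise smooth in the collision region."""
    return -(gap - SAFE_GAP) ** 2

def gap(av_headway: float, bv_aggression: float) -> float:
    """One-parameter stand-in for the simulator: a more aggressive
    BV closes the gap that the AV's headway policy opens."""
    return av_headway - bv_aggression

def ascend(f, x, lr=0.05, eps=1e-3, steps=50):
    """Hill-climb a scalar policy parameter with a finite-difference
    gradient (a stand-in for the paper's RL policy updates)."""
    for _ in range(steps):
        x += lr * (f(x + eps) - f(x - eps)) / (2 * eps)
    return x

# RARL-style alternation: freeze one agent, improve the other.
av_headway, bv_aggression = 1.0, 0.0
for _ in range(5):
    bv_aggression = ascend(lambda a: bv_reward(gap(av_headway, a)),
                           bv_aggression)
    av_headway = ascend(lambda h: av_reward(gap(h, bv_aggression)),
                        av_headway)
```

Running the loop, the BV's aggression ratchets upward each round while the AV re-opens the gap, a crude analogue of the claim that both agents' capabilities grow together during training.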



Information & Authors

Published In

International Conference on Transportation and Development 2024
Pages: 616–626

History

Published online: Jun 13, 2024


Authors

Ziyuan Yang, Dept. of Automation, Tsinghua Univ. Email: [email protected]
Zhaoyang Li, Dept. of Automation, Tsinghua Univ. Email: [email protected]
Jianming Hu, Associate Professor, Dept. of Automation, Tsinghua Univ. Email: [email protected]
Professor, Dept. of Automation, Tsinghua Univ. Email: [email protected]


