Chapter
Jun 13, 2023

A Gaze Data-Based Comparative Study to Build a Trustworthy Human-AI Collaboration in Crash Anticipation

Publication: International Conference on Transportation and Development 2023

ABSTRACT

Vehicles equipped with a safety function that anticipates crashes in advance can enhance drivers’ ability to avoid them. As dashboard cameras have become a low-cost sensing device accessible to almost every vehicle, deep neural networks that anticipate crashes from dashboard camera video are receiving growing interest. However, drivers’ trust in such an Artificial Intelligence (AI)-enabled safety function rests on validating its contribution to safety enhancement toward zero deaths. This paper is therefore motivated to establish a method that uses gaze data and corresponding measures to evaluate human drivers’ ability to anticipate crashes. A laboratory experiment is designed and performed in which a screen-based eye tracker collects the gaze data of six volunteers as they watch 100 driving videos that include both normal and crash scenarios. Statistical analyses of the experimental data show that, on average, drivers in this pilot study can anticipate a crash up to 2.61 s before it occurs, and that they successfully anticipate crashes before they occur 92.8% of the time. A state-of-the-art AI model can, on average, anticipate crashes 1.02 s earlier than drivers. The study also finds that the traffic agents involved in a crash can alter drivers’ instantaneous attention level, average attention level, and spatial attention distribution. This finding supports the development of spatial-temporal attention mechanisms that strengthen AI models’ ability to anticipate crashes. Results from the comparison further suggest developing collaborative intelligence that keeps humans in the loop of AI models to enhance the reliability of AI-enabled safety functions.
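The core measure behind these numbers is an anticipation time: how long before the crash the driver’s gaze first lands on the agent that will be involved in it. The sketch below is a minimal illustration of that computation, not the authors’ pipeline; it assumes gaze fixations have already been matched (e.g., via bounding-box annotation of the videos) to the crash-involved agent, and every name in it (VideoGazeRecord, anticipation_time, summarize) is hypothetical.

```python
"""Sketch: estimating how early a driver anticipates a crash from gaze data.

Minimal illustration under stated assumptions, not the authors' pipeline.
Assumes each record pairs a video's crash-onset time with timestamped gaze
fixations already matched to the crash-involved traffic agent.
"""

from dataclasses import dataclass
from statistics import mean


@dataclass
class VideoGazeRecord:                      # hypothetical data structure
    crash_time_s: float                     # crash onset, seconds from video start
    agent_fixation_times_s: list[float]     # fixations landing on the crash-involved agent


def anticipation_time(record: VideoGazeRecord) -> float | None:
    """Seconds before the crash at which the driver first fixated the agent.

    Returns None if the driver never fixated the agent before the crash,
    i.e., the crash was not anticipated.
    """
    pre_crash = [t for t in record.agent_fixation_times_s if t < record.crash_time_s]
    if not pre_crash:
        return None
    return record.crash_time_s - min(pre_crash)


def summarize(records: list[VideoGazeRecord]) -> tuple[float, float]:
    """Mean anticipation time (s) over anticipated crashes, and the success rate."""
    times = [t for r in records if (t := anticipation_time(r)) is not None]
    return mean(times), len(times) / len(records)


if __name__ == "__main__":
    demo = [
        VideoGazeRecord(crash_time_s=8.0, agent_fixation_times_s=[5.2, 6.0, 7.1]),
        VideoGazeRecord(crash_time_s=6.5, agent_fixation_times_s=[7.0]),  # not anticipated
    ]
    mean_t, rate = summarize(demo)
    print(f"mean anticipation: {mean_t:.2f} s, success rate: {rate:.0%}")
```

With per-video anticipation times in hand, the paper’s headline numbers correspond to the mean over anticipated crashes (2.61 s) and the fraction of crashes anticipated at all (92.8%); the AI comparison then contrasts that mean with the model’s, which is reported to anticipate crashes 1.02 s earlier (roughly 3.63 s before the crash).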



Information & Authors

Published in: International Conference on Transportation and Development 2023, pp. 737–748.
Published online: Jun 13, 2023.


Authors

1. Ph.D. Student, Dept. of Civil Engineering, Stony Brook Univ., Stony Brook, NY. Email: [email protected]
2. M. M. Karim, Ph.D. Student, Dept. of Civil Engineering, Stony Brook Univ., Stony Brook, NY. Email: [email protected]
3. R. Qin, Ph.D., A.M.ASCE, Associate Professor, Dept. of Civil Engineering, Stony Brook Univ., Stony Brook, NY. Email: [email protected]
