
Pedestrian Phone-Related Distracted Behavior Classification in Front-Facing Vehicle Cameras for Road User Safety

Publication: Computing in Civil Engineering 2023

ABSTRACT

Understanding distracted pedestrian behaviors is critical to road user safety and preventing traffic-related injuries. Front-facing vehicle cameras (a.k.a. dashcams) have become increasingly popular for documenting driving behavior and patterns. However, a relatively underexplored application of dashcam footage is to automatically identify distracted pedestrians who may be at risk in traffic. Detecting distracted behaviors in dashcam-captured imagery can enable drivers to take preventative measures and avoid potential traffic accidents. To this end, computer vision techniques powered by the prediction capability of artificial intelligence (AI) can be leveraged to identify pedestrians’ distracted behaviors when crossing streets and intersections. In this paper, pedestrians’ phone-related distracted behaviors are detected and classified in dashcam footage by leveraging convolutional neural networks (CNNs) assembled into a two-stage detection and classification architecture. In particular, we propose to first detect pedestrians (stage 1) and then classify the most prevalent distracted behavior visible in each detected instance (stage 2). The technique was developed on an in-house video dataset collected from urban intersections around a major university campus. Results indicate that the pedestrian detection model achieves 76% average precision (AP), and the classification of distracted behavior reaches 72% precision, 98% recall, and 83% F1-score.
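As a sanity check on the reported classification scores, the three numbers are mutually consistent: F1 is the harmonic mean of precision and recall, so 72% precision and 98% recall imply an F1-score of about 83%. A minimal sketch (the function name and confusion-count interface are illustrative, not from the paper):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard classification metrics from confusion counts
    (true positives, false positives, false negatives)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Plugging in the paper's reported precision and recall reproduces
# the reported F1-score.
p, r = 0.72, 0.98
f1 = 2 * p * r / (p + r)
print(round(f1, 2))  # → 0.83
```

The harmonic mean explains why the F1-score (83%) sits closer to the lower of the two components (72% precision) than a simple average would.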




Published In

Computing in Civil Engineering 2023
Pages: 418 - 425

History

Published online: Jan 25, 2024


Authors

Affiliations

Chih-Shen Cheng, Ph.D. Candidate, Zachry Dept. of Civil and Environmental Engineering, Texas A&M Univ., College Station, TX. Email: [email protected]
Yalong Pi, Ph.D., Operation Data Scientist, Institute of Data Science, Texas A&M Univ., College Station, TX. Email: [email protected]
Tim Lomax, Ph.D., Research Fellow, Texas A&M Transportation Institute, College Station, TX. Email: [email protected]
Nick Duffield, Ph.D., Professor, Dept. of Electrical and Computer Engineering and Institute of Data Science, Texas A&M Univ., College Station, TX. Email: [email protected]
Amir H. Behzadan, Ph.D., Professor, Dept. of Construction Science, Texas A&M Univ., College Station, TX. Email: [email protected]
