Technical Papers
May 8, 2024

Vehicle Turn-Signal Detection for Automated Flagging Systems

Publication: Journal of Computing in Civil Engineering
Volume 38, Issue 4

Abstract

Flaggers are required to work close to open traffic lanes, where they may be struck by distracted, speeding, or intoxicated drivers, leading to injuries and fatalities. To protect them, the concept of an automated flagging system device (AFSD) has been proposed. One of the core functions of an AFSD is to recognize the turn signals of vehicles in the lanes in order to guide the traffic. However, existing studies on vehicle turn-signal recognition have mainly focused on autonomous driving scenarios, and research on vehicle turn-signal recognition for AFSDs has been limited. In this study, we propose a novel method for vehicle turn-signal recognition using a video camera on an AFSD. The method first uses object detection and tracking to locate the vehicles and identify their front lighting areas (FLAs). The luminance of each vehicle's FLA is then extracted, and a convolutional operator is applied to the captured luminance features over time to determine whether the left or right FLA is flashing. The proposed method was implemented and tested on real traffic videos; the overall signal recognition accuracy reached 78.62%.
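The flashing-detection step described in the abstract can be sketched as follows. This is a minimal illustration only: the paper's actual convolutional operator, frame rate, blink frequency, and decision threshold are not given here, so every value and function name below (`flashing_score`, `classify_turn_signal`, the 1.5 Hz blink rate, the threshold of 50) is an assumption for demonstration.

```python
import numpy as np

def flashing_score(luminance, fps=30.0, blink_hz=1.5):
    """Score how strongly a luminance time series oscillates at a
    typical turn-signal blink rate (assumed ~1.5 Hz here)."""
    x = np.asarray(luminance, dtype=float)
    x = x - x.mean()  # remove the DC (ambient brightness) component
    # One-period sinusoidal kernel at the assumed blink frequency; a
    # strong convolution response means the FLA is pulsing at that rate.
    n = int(fps / blink_hz)
    kernel = np.sin(2 * np.pi * blink_hz * np.arange(n) / fps)
    response = np.convolve(x, kernel, mode="valid")
    return float(np.max(np.abs(response)))

def classify_turn_signal(left_lum, right_lum, fps=30.0, threshold=50.0):
    """Return 'left', 'right', or 'none' by comparing the scores of the
    two FLAs. The threshold is illustrative and would need calibration."""
    left = flashing_score(left_lum, fps)
    right = flashing_score(right_lum, fps)
    if max(left, right) < threshold:
        return "none"
    return "left" if left > right else "right"
```

In this sketch, a blinking FLA produces a large matched-filter response at the blink frequency while a steadily lit or dark FLA does not, so comparing the two sides yields the signaled direction.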


Data Availability Statement

Some or all data, models, or code generated or used during the study are available from the corresponding author by request (testing data, training data, codes for implemented models, and evaluation metrics).


Information & Authors

Information

Published In

Journal of Computing in Civil Engineering
Volume 38, Issue 4, July 2024

History

Received: Feb 21, 2023
Accepted: Jan 17, 2024
Published online: May 8, 2024
Published in print: Jul 1, 2024
Discussion open until: Oct 8, 2024


Authors

Affiliations

Graduate Research Assistant, Dept. of Civil and Environmental Engineering, Univ. of Wisconsin–Madison, Madison, WI 53706. Email: [email protected]
Assistant Professor, Dept. of Civil and Environmental Engineering, Univ. of Wisconsin–Madison, Madison, WI 53706 (corresponding author). ORCID: https://orcid.org/0000-0002-4554-1770. Email: [email protected]
