Traffic Sign Detection and Recognition for Autonomous Driving in Virtual Simulation Environment
Publication: International Conference on Transportation and Development 2022
ABSTRACT
This study developed a traffic sign detection and recognition algorithm based on RetinaNet. Two main modifications were made to improve detection: image cropping, to address the mismatch between large input images and small traffic signs, and additional anchors with varied scales and aspect ratios, to detect traffic signs of different sizes and shapes. The algorithm was trained and tested on a series of front-view autonomous driving images from a virtual simulation environment. Results show that it performed well under good illumination and weather conditions; however, it sometimes failed to detect signs under adverse weather such as snow and could not distinguish speed limit signs with different limit values.
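The multi-scale anchor idea mentioned above can be illustrated with a short sketch. The paper's exact anchor configuration is not given in the abstract; the ratios and scale multipliers below follow the standard RetinaNet design (3 aspect ratios × 3 scale octaves per feature-map location) and are assumptions for illustration only.

```python
import itertools
import numpy as np

def generate_anchors(base_size=32,
                     ratios=(0.5, 1.0, 2.0),
                     scales=(2 ** 0, 2 ** (1 / 3), 2 ** (2 / 3))):
    """Generate anchor boxes (x1, y1, x2, y2) centered at the origin.

    Each of the len(ratios) * len(scales) anchors pairs one aspect
    ratio with one scale multiplier; adding more ratios or scales is
    how a detector covers signs of more sizes and shapes.
    """
    anchors = []
    for ratio, scale in itertools.product(ratios, scales):
        area = (base_size * scale) ** 2
        w = np.sqrt(area / ratio)   # width shrinks as the ratio grows
        h = w * ratio               # height = width * aspect ratio
        anchors.append([-w / 2, -h / 2, w / 2, h / 2])
    return np.array(anchors)

# Nine anchors per feature-map location: 3 ratios x 3 scales.
print(generate_anchors().shape)  # (9, 4)
```

At inference time these template anchors are tiled across every position of each feature-pyramid level, so enlarging the set directly increases coverage of small, oddly shaped signs at the cost of more candidate boxes to classify.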
Published online: Aug 31, 2022