Technical Papers
Jul 4, 2023

Multiclass Transportation Safety Hardware Asset Detection and Segmentation Based on Mask-RCNN with RoI Attention and IoMA-Merging

Publication: Journal of Computing in Civil Engineering
Volume 37, Issue 5

Abstract

Transportation assets, including retaining walls, noise barriers, rumble strips, guardrails, guardrail anchors, and central cable barriers, are important roadside safety hardware and geotechnical structures. They need to be inventoried accurately to support asset management and ensure roadway safety. Detection methods exist for some of these assets, but they lack flexibility and multiclass capability. Moreover, although some detection methods have used deep learning, the potential of neural networks has not been fully exploited. This paper proposes, for the first time, a multiclass transportation asset detection and pixel-wise segmentation model for two-dimensional images, based on the mask region-based convolutional neural network (Mask-RCNN) with a feature pyramid network (FPN). The scale diversity and dense, continual appearance of transportation assets are identified as the main challenges, as they tend to produce numerous false-positive detections. A methodology combining self-attention mechanisms based on the generic region of interest extractor (GRoIE) model with an intersection over the minimum area merging (IoMA-Merging) postprocessing algorithm is therefore proposed. The evaluation outcomes demonstrate that the proposed methodology, GRoIE with global context (GRoIE-GC) combined with IoMA-Merging, achieves the best performance, with a significant improvement over the baseline: precision increased by 10.0% on detection and 10.7% on segmentation. The proposed methodology will consequently improve the accuracy of asset inventories.
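The abstract names IoMA-Merging but does not spell out the procedure. Below is a minimal sketch of the underlying idea: score box pairs by intersection over the *minimum* of the two areas (which, unlike IoU, is high whenever the smaller box is mostly contained in the larger one, as happens with duplicate detections on long continuous assets such as guardrails) and greedily merge boxes above a threshold. The threshold value, the greedy merge order, and the function names are illustrative assumptions, not the paper's exact algorithm.

```python
def iom_area(box_a, box_b):
    """Intersection over the minimum of the two box areas.

    Boxes are (x1, y1, x2, y2). IoMA is close to 1.0 whenever one box
    is mostly contained in the other, even if their sizes differ a lot,
    which flags duplicate detections on elongated assets.
    """
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / min(area_a, area_b)


def ioma_merge(boxes, threshold=0.7):
    """Greedily merge boxes whose IoMA exceeds `threshold` into a single
    enclosing box, repeating until no pair qualifies.

    The 0.7 threshold is an assumed value for illustration.
    """
    boxes = [list(b) for b in boxes]
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if iom_area(boxes[i], boxes[j]) >= threshold:
                    # Replace the pair with their enclosing box.
                    boxes[i] = [min(boxes[i][0], boxes[j][0]),
                                min(boxes[i][1], boxes[j][1]),
                                max(boxes[i][2], boxes[j][2]),
                                max(boxes[i][3], boxes[j][3])]
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes
```

In practice such merging would be applied per class after non-maximum suppression, so that overlapping fragments of one guardrail are fused rather than counted as separate false positives.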

Practical Applications

This section illustrates how a unified transportation asset inventory could be built as a practical application of the discrete detection results produced by our methodology; the methodology itself is detailed in the following sections. For simplicity, the demonstration in this section uses only the detected bounding boxes of a single object. After detection and segmentation, a tracking algorithm plays an important role in associating detections of the same object across frames, allowing distinct objects to be recognized throughout the video. We also find it feasible to estimate the geographical and geometric information of the object of interest: with the high-accuracy, high-frequency global positioning system (GPS) information recorded by our sensing vehicle, it is practical to locate the object and estimate its physical width. Height information is also accessible through either camera calibration or a combination of image and light detection and ranging (LiDAR) data.
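The tracking step described above can be sketched with a simple greedy frame-to-frame association: each new detection is attached to the existing track whose last box overlaps it most, or it starts a new track. The paper does not specify its tracking algorithm, so the IoU metric, the `min_iou` gate, and the greedy matching rule here are illustrative assumptions.

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def associate(tracks, detections, min_iou=0.3):
    """Greedy frame-to-frame association.

    `tracks` is a list of tracks, each a list of boxes (latest last).
    Each detection joins the unmatched track whose last box has the
    highest IoU above `min_iou`; otherwise it starts a new track.
    """
    unmatched = list(range(len(tracks)))
    for det in detections:
        best, best_iou = None, min_iou
        for idx in unmatched:
            score = iou(tracks[idx][-1], det)
            if score > best_iou:
                best, best_iou = idx, score
        if best is None:
            tracks.append([det])          # unseen object: new track
        else:
            tracks[best].append(det)      # same object, next frame
            unmatched.remove(best)
    return tracks
```

Running this per video frame yields one track per distinct roadside object; the track's detections can then be combined with the synchronized GPS log to geolocate the object and estimate its extent.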

Data Availability Statement

Some data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request, including some images used for testing, model prototypes, and illustrative code for the postprocessing algorithm.

Information & Authors

Information

Published In

Journal of Computing in Civil Engineering
Volume 37, Issue 5, September 2023

History

Received: Jan 12, 2023
Accepted: Apr 16, 2023
Published online: Jul 4, 2023
Published in print: Sep 1, 2023
Discussion open until: Dec 4, 2023


Authors

Affiliations

Ph.D. Student, Dept. of Computer and Electrical Engineering, Georgia Institute of Technology, North Ave. NW, Atlanta, GA 30332 (corresponding author). ORCID: https://orcid.org/0000-0003-4329-4435. Email: [email protected]
Ph.D. Student, Dept. of Computer and Electrical Engineering, Georgia Institute of Technology, North Ave. NW, Atlanta, GA 30332. ORCID: https://orcid.org/0000-0001-8964-2912. Email: [email protected]
Pingzhou Yu, S.M.ASCE [email protected]
Ph.D. Student, Dept. of Civil and Environmental Engineering, Georgia Institute of Technology, North Ave. NW, Atlanta, GA 30332. Email: [email protected]
Zhongyu Yang, S.M.ASCE [email protected]
Ph.D. Student, Dept. of Civil and Environmental Engineering, Georgia Institute of Technology, North Ave. NW, Atlanta, GA 30332. Email: [email protected]
Yichang James Tsai, M.ASCE [email protected]
Professor, Dept. of Civil and Environmental Engineering, Georgia Institute of Technology, North Ave. NW, Atlanta, GA 30332. Email: [email protected]

