
Segmentation of Bridge Components from Various Real Scene Inspection Images

Publication: Construction Research Congress 2024

ABSTRACT

The traditional bridge inspection method, which relies on manual visual inspection, is time-consuming, labor-intensive, and potentially dangerous. Recent automated bridge inspection approaches aim to utilize unmanned aerial vehicles (UAVs) and computer vision techniques to collect and analyze images to improve the inspection process. A survey of existing literature and tools shows that defect detection/segmentation has been studied extensively. However, little effort has focused on segmenting and characterizing the bridge components that contain the defects. The identification and characterization of bridge components is essential for bridge inspection because it contextualizes the defects, which helps determine their importance in maintenance decision making. Moreover, existing bridge component recognition approaches lack generalizability in the presence of a variety of bridge types, complex background scenes, and varying shot sizes. To address these gaps, this paper proposes a convolutional neural network (CNN)-based image segmentation method for segmenting bridge components, which leverages DeepLabv3+ and pre-training on ImageNet to improve feature extraction and generalizability. The proposed method was trained and tested end to end on 13 classes based on the Federal Highway Administration (FHWA)'s Bridge Inspector's Reference Manual. It achieved a mean precision, recall, F1 measure, and Intersection over Union (IoU) of 86.7%, 78.2%, 81.4%, and 70.4%, respectively.
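The reported precision, recall, F1, and IoU are standard pixel-wise segmentation metrics. As a minimal sketch (toy two-class data, not the paper's 13-class FHWA setup or its actual evaluation code), each metric can be computed per class from flattened ground-truth and predicted label maps and then averaged:

```python
from collections import Counter

def segmentation_metrics(gt, pred, num_classes):
    """Per-class precision, recall, F1, and IoU from flat pixel-label lists.

    gt, pred: equal-length sequences of integer class labels (one per pixel).
    Returns a dict mapping class id -> (precision, recall, f1, iou).
    """
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gt, pred):
        if g == p:
            tp[g] += 1          # correct pixel for class g
        else:
            fp[p] += 1          # predicted p where truth was g
            fn[g] += 1          # missed a pixel of class g
    metrics = {}
    for c in range(num_classes):
        prec = tp[c] / (tp[c] + fp[c]) if (tp[c] + fp[c]) else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if (tp[c] + fn[c]) else 0.0
        f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
        iou = tp[c] / (tp[c] + fp[c] + fn[c]) if (tp[c] + fp[c] + fn[c]) else 0.0
        metrics[c] = (prec, rec, f1, iou)
    return metrics

# Toy 3x3 "image", flattened: class 0 = background, class 1 = a bridge component.
gt   = [0, 0, 1, 0, 1, 1, 1, 1, 0]
pred = [0, 1, 1, 0, 1, 1, 1, 0, 0]
m = segmentation_metrics(gt, pred, num_classes=2)
```

Mean metrics (as reported in the abstract) are then simple averages of the per-class values; note IoU is always the strictest of the four, since its denominator counts both false positives and false negatives.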


REFERENCES

Baheti, B., Innani, S., Gajre, S., and Talbar, S. (2020). “Eff-UNet: A Novel Architecture for Semantic Segmentation in Unstructured Environment.” Proc., IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit.
Bai, M., and Sezen, H. (2021). “Detecting cracks and spalling automatically in extreme events by end-to-end deep learning frameworks.” Proc., ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci., XXIV ISPRS Congress, International Society for Photogrammetry and Remote Sensing.
Bianchi, E., and Hebdon, M. (2022). “Visual Structural Inspection Datasets.” Autom. Constr., 139, 104299.
Cha, Y.-J., Choi, W., and Büyüköztürk, O. (2017). “Deep Learning-Based Crack Damage Detection Using Convolutional Neural Networks.” Comput.-Aided Civ. Infrastruct. Eng., 32(5), 361–378.
Cha, Y.-J., Choi, W., Suh, G., Mahmoudkhani, S., and Büyüköztürk, O. (2018). “Autonomous Structural Visual Inspection Using Region-Based Deep Learning for Detecting Multiple Damage Types.” Comput.-Aided Civ. Infrastruct. Eng., 33(9), 731–747.
Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., and Yuille, A. L. (2014). “Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs.”
Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., and Yuille, A. L. (2017). “DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.” IEEE Trans. Pattern Anal. Mach. Intell., 40(4), 834–848.
Chen, L.-C., Papandreou, G., Schroff, F., and Adam, H. (2017). “Rethinking atrous convolution for semantic image segmentation.”
Chen, L.-C., Zhu, Y., Papandreou, G., Schroff, F., and Adam, H. (2018). “Encoder-decoder with atrous separable convolution for semantic image segmentation.” Proc., European conference on computer vision (ECCV).
MMSegmentation Contributors. (2020). “MMSegmentation: OpenMMLab Semantic Segmentation Toolbox and Benchmark.” from https://github.com/open-mmlab/mmsegmentation.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). “ImageNet: A large-scale hierarchical image database.” Proc., 2009 IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.”
Floridi, L., and Chiriatti, M. (2020). “GPT-3: Its Nature, Scope, Limits, and Consequences.” Minds and Machines, 30(4), 681–694.
Hartle, R. A., Ryan, T. W., Mann, E., Danovich, L. J., Sosko, W. B., and Bouscher, J. W. (2002). Bridge Inspector’s Reference Manual: Volume 1 and Volume 2. United States Department of Transportation. https://rosap.ntl.bts.gov/view/dot/54492.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). “Deep Residual Learning for Image Recognition.” Proc., IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit.
Liang, X. (2019). “Image-based post-disaster inspection of reinforced concrete bridge systems using deep learning with Bayesian optimization.” Comput.-Aided Civ. Infrastruct. Eng., 34(5), 415–430.
Liu, P. C.-Y., and El-Gohary, N. (2020). “Semantic Image Retrieval and Clustering for Supporting Domain-Specific Bridge Component and Defect Classification.” Proc., Construction Research Congress 2020: Infrastructure Systems and Sustainability, American Society of Civil Engineers Reston, VA.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). “RoBERTa: A Robustly Optimized BERT Pretraining Approach.”
Long, J., Shelhamer, E., and Darrell, T. (2015). “Fully Convolutional Networks for Semantic Segmentation.” Proc., IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit.
Narazaki, Y., Hoskere, V., Hoang, T. A., Fujino, Y., Sakurai, A., and Spencer, B. F. (2020). “Vision‐Based Automated Bridge Component Recognition with High‐Level Scene Consistency.” Comput.-Aided Civ. Infrastruct. Eng., 35(5), 465–482.
Narazaki, Y., Hoskere, V., Yoshida, K., Spencer, B. F., and Fujino, Y. (2021). “Synthetic Environments for Vision-Based Structural Condition Assessment of Japanese High-Speed Railway Viaducts.” Mech. Syst. Signal Process., 160, 107850.
Ohio Department of Transportation. State of Ohio Bridge Photos. https://brphotos.dot.state.oh.us/.
Prasanna, P., Dana, K. J., Gucunski, N., Basily, B. B., La, H. M., Lim, R. S., and Parvardeh, H. (2016). “Automated Crack Detection on Concrete Bridges.” IEEE Trans. Autom. Sci. Eng., 13(2), 591–599.
Qi, X., Liu, Z., Shi, J., Zhao, H., and Jia, J. (2016). “Augmented Feedback in Semantic Segmentation Under Image Level Supervision.” Proc., Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Part VIII 14, Springer.
Russell, B. C., Torralba, A., Murphy, K. P., and Freeman, W. T. (2008). “LabelMe: a database and web-based tool for image annotation.” Int. J. Comput. Vis., 77(1), 157–173.
Shan, B., Zheng, S., and Ou, J. (2016). “A stereovision-based crack width detection approach for concrete surface assessment.” KSCE J. Civ. Eng., 20(2), 803–812.
Shorten, C., and Khoshgoftaar, T. M. (2019). “A survey on Image Data Augmentation for Deep Learning.” J. Big Data, 6(1), 1–48.
Siddique, N., Paheding, S., Elkin, C. P., and Devabhaktuni, V. (2021). “U-net and its variants for medical image segmentation: A review of theory and applications.” IEEE Access, 9, 82031–82057.
Spencer, B. F., Hoskere, V., and Narazaki, Y. (2019). “Advances in Computer Vision-Based Civil Infrastructure Inspection and Monitoring.” Engineering, 5(2), 199–222.
Sultana, F., Sufian, A., and Dutta, P. (2020). “Evolution of Image Segmentation using Deep Convolutional Neural Network: A Survey.” Knowl. Based Syst., 201, 106062.
Talab, A. M. A., Huang, Z., Xi, F., and Haiming, L. (2016). “Detection crack in image using Otsu method and multiple filtering in image processing techniques.” Optik, 127(3), 1030–1033.
Xu, X., Zhao, M., Shi, P., Ren, R., He, X., Wei, X., and Yang, H. (2022). “Crack Detection and Comparison Study Based on Faster R-CNN and Mask R-CNN.” Sensors, 22(3), 1215.
Xu, Y., Bao, Y., Chen, J., Zuo, W., and Li, H. (2019). “Surface fatigue crack identification in steel box girder of bridges by a deep fusion convolutional neural network based on consumer-grade camera images.” Struct. Health Monit., 18(3), 653–674.
Yu, W., and Nishio, M. (2022). “Multilevel Structural Components Detection and Segmentation toward Computer Vision-Based Bridge Inspection.” Sensors, 22(9), 3502.
Zhang, L., Yang, F., Zhang, Y. D., and Zhu, Y. J. (2016). “Road crack detection using deep convolutional neural network.” Proc., 2016 IEEE Int. Conf. Image Process. (ICIP).

Information & Authors

Published In

Construction Research Congress 2024
Pages: 259–268

History

Published online: Mar 18, 2024


Authors

Shengyi Wang, S.M.ASCE
Ph.D. Student, Dept. of Civil and Environmental Engineering, Univ. of Illinois at Urbana-Champaign, Urbana, IL. Email: [email protected]

Nora El-Gohary, Ph.D., A.M.ASCE
Associate Professor, Dept. of Civil and Environmental Engineering, Univ. of Illinois at Urbana-Champaign, Urbana, IL. Email: [email protected]
