Technical Papers
Sep 15, 2022

Development of Extendable Open-Source Structural Inspection Datasets

Publication: Journal of Computing in Civil Engineering
Volume 36, Issue 6

Abstract

Recent infrastructure inspection has increasingly used deep-learning models to enhance and augment typical inspection tasks such as detecting and quantifying damage. One issue with this trend is that deep-learning models typically require a significant amount of data. In a data domain such as structural inspection, publicly accessible data are difficult to find, which slows the advancement of this research. Therefore, we set out to acquire bridge inspection data by selectively extracting candidate images from hundreds of thousands of bridge inspection reports from the Virginia Department of Transportation. Using this rich source of diverse data, we refined our collected data to develop four high-quality, easily extendable, publicly accessible datasets, tested with state-of-the-art models, to support typical bridge inspection tasks. The four datasets are: labeled cracks in the wild, 3,817 image sets of semantically segmented concrete cracks taken from diverse scenery; 3,817 image sets of semantically segmented structural inspection materials (concrete, steel, metal decking); 440 images of finely annotated steel corrosion condition states (good, fair, poor, severe); and 1,470 images of fatigue-prone structural steel bridge details (bearings, gusset plates, cover plate terminations, and out-of-plane stiffeners) for object detection. To ensure the extendibility of the datasets, the authors have proposed annotation guidelines to maintain consistent growth through annotation collaboration. Researchers can use these trained models and data for auxiliary inspection tasks such as damage detection, damage forecasting, automatic report generation, and, with the assistance of unmanned aerial systems, autonomous flight path planning and object avoidance. The procedures, concepts, and repositories provided in this paper will help set a course for the advancement of better detection models built on high-quality, accessible, and extendable datasets.
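
To make the dataset structure concrete, the sketch below shows one way the pixel-level sets described above (for example, labeled cracks in the wild) could be wrapped as image and mask pairs for training. This is a minimal sketch under assumed conventions: the images/ and masks/ folder names and the paired-file naming are illustrative and not the published repository layout.

```python
# Minimal sketch: wrap a semantic segmentation dataset (image + mask pairs)
# as a PyTorch Dataset. The images/ and masks/ directories and the shared
# file stems are assumptions for illustration, not the released layout.
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset


class CrackSegmentationDataset(Dataset):
    """Pairs each RGB photograph with its pixel-level class mask."""

    def __init__(self, root: str):
        self.image_paths = sorted(Path(root, "images").glob("*.jpg"))
        self.mask_dir = Path(root, "masks")

    def __len__(self) -> int:
        return len(self.image_paths)

    def __getitem__(self, idx: int):
        image_path = self.image_paths[idx]
        mask_path = self.mask_dir / (image_path.stem + ".png")

        # Scale the image to [0, 1]; keep the mask as integer class indices.
        image = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float32) / 255.0
        mask = np.asarray(Image.open(mask_path), dtype=np.int64)

        # HWC -> CHW, the layout expected by PyTorch convolutional models.
        return torch.from_numpy(image).permute(2, 0, 1), torch.from_numpy(mask)
```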

Practical Applications

Researchers can use these datasets and trained models for auxiliary inspection tasks such as damage detection, damage forecasting, automatic report generation, and, with the assistance of unmanned aerial systems, autonomous flight path planning and object avoidance. The models may also serve as starting points for researchers extending these datasets or building their own use-case datasets in this or a similar data domain. The authors present three main reasons for including annotation guidelines with any public dataset: to establish consistent annotations across the training and testing data, to clearly convey the expected model prediction, and to guide future collaborators. Annotation guidelines are a practical tool for any researcher procuring a dataset for supervised machine learning.
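
In the spirit of those guidelines, the short sketch below checks that every image in a hypothetical segmentation dataset has a corresponding mask and that masks contain only the agreed class indices. The folder names and the four-class label set are assumptions for this example rather than part of the published guidelines.

```python
# Minimal annotation-consistency check: every image must have a mask, and
# masks may only contain the agreed class indices. The directory names and
# the class set below are hypothetical.
from pathlib import Path

import numpy as np
from PIL import Image

ALLOWED_CLASSES = {0, 1, 2, 3}  # e.g., background plus three material classes (assumed)


def check_annotations(image_dir: str, mask_dir: str) -> list[str]:
    """Return human-readable problems found in the annotation set."""
    problems = []
    for image_path in sorted(Path(image_dir).glob("*.jpg")):
        mask_path = Path(mask_dir) / (image_path.stem + ".png")
        if not mask_path.exists():
            problems.append(f"missing mask for {image_path.name}")
            continue
        labels = set(np.unique(np.asarray(Image.open(mask_path))))
        unexpected = labels - ALLOWED_CLASSES
        if unexpected:
            problems.append(f"{mask_path.name} contains unexpected labels {sorted(unexpected)}")
    return problems


if __name__ == "__main__":
    for issue in check_annotations("images", "masks"):
        print(issue)
```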

Data Availability Statement

All data, models, and code generated or used during the study appear in the published article. The code and the links to the model weights and datasets can be found at https://github.com/beric7/structural_inspection_main.
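
As one illustration of how downloaded weights might be exercised, the sketch below runs torchvision's off-the-shelf DeepLabV3 (used here as a stand-in segmentation network, assuming the torchvision 0.13+ API) over a single photograph. The two-class setup and the weight-file path are placeholders and do not describe the repository's documented interface.

```python
# Minimal inference sketch with a stand-in segmentation network. The
# two-class (background, crack) setup, the weight-file path, and the input
# image name are placeholders for illustration.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None, num_classes=2)
# state_dict = torch.load("downloaded_weights.pth", map_location="cpu")  # placeholder path
# model.load_state_dict(state_dict)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
])

image = preprocess(Image.open("bridge_photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    logits = model(image)["out"]           # shape: (1, 2, 512, 512)
    prediction = logits.argmax(dim=1)[0]   # per-pixel class map
```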

Information & Authors

Information

Published In

Journal of Computing in Civil Engineering
Volume 36, Issue 6, November 2022

History

Received: Feb 9, 2022
Accepted: May 25, 2022
Published online: Sep 15, 2022
Published in print: Nov 1, 2022
Discussion open until: Feb 15, 2023

Authors

Affiliations

Associate Professor, Virginia Tech, 750 Drillfield Dr. 200 Patton Hall, Blacksburg, VA 24061 (corresponding author). ORCID: https://orcid.org/0000-0001-9003-1414. Email: [email protected]
Civil, Architectural and Environmental Engineering, Univ. of Texas at Austin, 301 E. Dean Keeton St., Austin, TX 78712. ORCID: https://orcid.org/0000-0002-9115-0279. Email: [email protected]

