Technical Papers
Dec 22, 2023

Semantic Segmentation of Cracks on Masonry Surfaces Using Deep-Learning Techniques

Publication: Practice Periodical on Structural Design and Construction
Volume 29, Issue 2

Abstract

Detecting cracks can be challenging, especially on rough surfaces such as masonry. This paper focuses on the detection of surface cracks on masonry using deep-learning techniques and compares the performance of various networks trained for semantic segmentation of cracks. The segmentation models U-Net, feature pyramid network (FPN), DeepLabV3+, and PSPNet were each integrated with several convolutional neural networks (CNNs) acting as the network's backbone. Two loss functions, binary cross entropy and binary focal loss, were used in the study. Networks were compared using several metrics to identify the most promising approaches. In total, 23 networks were examined over the training and validation masonry data sets. The results show that three networks can accurately detect even finer surface cracks on masonry. Based on the performance metrics [dice coefficient, intersection over union (IoU), and F1 score], the three best networks were FPN(#2a) (86.9%, 74.9%, 59.3%), FPN(#2c) (85.6%, 75.4%, 56.3%), and DeepLabV3+(#1a) (83.1%, 72.0%, 54.4%), respectively. The trained networks also demonstrated proficient performance on images of existing masonry culverts. This study can significantly aid the detection of cracks in the masonry substructures of old railway bridges.
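For readers unfamiliar with the evaluation metrics named above, the following is a minimal NumPy sketch of the dice coefficient, IoU, and binary focal loss for binary crack masks. The function names, default parameters, and toy arrays are illustrative only and are not taken from the study's code.

```python
import numpy as np

def dice_coefficient(y_true, y_pred, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) over binary masks."""
    inter = np.sum(y_true * y_pred)
    return (2.0 * inter + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

def iou(y_true, y_pred, eps=1e-7):
    """Intersection over union (Jaccard index) over binary masks."""
    inter = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - inter
    return (inter + eps) / (union + eps)

def binary_focal_loss(y_true, p, gamma=2.0, alpha=0.25):
    """Binary focal loss (Lin et al. 2020): down-weights easy pixels
    so the loss concentrates on hard ones, e.g. thin crack pixels."""
    p = np.clip(p, 1e-7, 1.0 - 1e-7)          # avoid log(0)
    pt = np.where(y_true == 1, p, 1.0 - p)     # prob. of the true class
    a = np.where(y_true == 1, alpha, 1.0 - alpha)
    return float(np.mean(-a * (1.0 - pt) ** gamma * np.log(pt)))

# Toy 4-pixel masks: ground truth vs. a thresholded prediction.
gt = np.array([1, 1, 0, 0])
pred = np.array([1, 0, 0, 0])
print(dice_coefficient(gt, pred))  # 2*1 / (2+1) ≈ 0.667
print(iou(gt, pred))               # 1 / 2 = 0.5
print(binary_focal_loss(gt, pred.astype(float)))
```

Because crack pixels are a small fraction of a masonry image, overall pixel accuracy is uninformative; overlap metrics such as these, and focal loss during training, are the usual remedies for that class imbalance.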


Data Availability Statement

Some or all data, models, or code generated or used during the study are available in a repository or online in accordance with funder data retention policies.
The masonry image data set relevant to the study was taken from the GitHub repository (GitHub 2022).
Codes and networks relevant to the current study are available in the GitHub repository (GitHub 2023).

References

Aliu, A. A., N. R. M. Ariff, D. S. Ametefe, and D. John. 2023. “Automatic classification and isolation of cracks on masonry surfaces using deep transfer learning and semantic segmentation.” J. Build. Pathol. Rehabil. 8 (1): 28. https://doi.org/10.1007/s41024-023-00274-6.
Badrinarayanan, V., A. Kendall, and R. Cipolla. 2017. “Segnet: A deep convolutional encoder-decoder architecture for image segmentation.” IEEE Trans. Pattern Anal. Mach. Intell. 39 (12): 2481–2495. https://doi.org/10.1109/TPAMI.2016.2644615.
Bishop, C. M., and N. M. Nasrabadi. 2006. Vol. 4 of Pattern recognition and machine learning. New York: Springer.
Cha, Y. J., W. Choi, and O. Büyüköztürk. 2017. “Deep learning-based crack damage detection using convolutional neural networks.” Comput.-Aided Civ. Infrastruct. Eng. 32 (5): 361–378. https://doi.org/10.1111/mice.12263.
Chaiyasarn, K., W. Khan, L. Ali, M. Sharma, D. Brackenbury, and M. DeJong. 2018. “Crack detection in masonry structures using convolutional neural networks and support vector machines.” In Proc., Int. Symp. on Automation and Robotics in Construction. Paris: IAARC Publications.
Chen, L. C., G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. 2018a. “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs.” IEEE Trans. Pattern Anal. Mach. Intell. 40 (4): 834–848. https://doi.org/10.1109/TPAMI.2017.2699184.
Chen, L. C., Y. Zhu, G. Papandreou, F. Schroff, and H. Adam. 2018b. “Encoder–decoder with atrous separable convolution for semantic image segmentation.” In Lecture notes in computer science, edited by V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss, 833–851. Berlin: Springer.
Chollet, F. 2015. “Keras: Deep Learning for humans.” Accessed April 10, 2023. https://keras.io.
Dais, D., İ. E. Bal, E. Smyrou, and V. Sarhosis. 2021. “Automatic crack classification and segmentation on masonry surfaces using convolutional neural networks and transfer learning.” Autom. Constr. 125 (May): 103606. https://doi.org/10.1016/j.autcon.2021.103606.
Dang, L. M., H. Wang, Y. Li, L. Q. Nguyen, T. N. Nguyen, H. K. Song, and H. Moon. 2022. “Deep learning-based masonry crack segmentation and real-life crack length measurement.” Constr. Build. Mater. 359 (Dec): 129438. https://doi.org/10.1016/j.conbuildmat.2022.129438.
Dao, L., and N. Q. Ly. 2023. “A comprehensive study on medical image segmentation using deep neural networks.” Int. J. Adv. Comput. Sci. Appl. 14 (3): 167–184. https://doi.org/10.14569/IJACSA.2023.0140319.
Dung, C. V., and L. D. Anh. 2019. “Autonomous concrete crack detection using deep fully convolutional neural network.” Autom. Constr. 99 (Mar): 52–58. https://doi.org/10.1016/j.autcon.2018.11.028.
Elharrouss, O., Y. Akbari, N. Almaadeed, and S. Al-Maadeed. 2022. “Backbones-review: Feature extraction networks for deep learning and deep reinforcement learning approaches.” Preprint, submitted June 16, 2022. https://arxiv.org/abs/2206.08016.
Ellenberg, A., A. Kontsos, I. Bartoli, and A. Pradhan. 2014. “Masonry crack detection application of an unmanned aerial vehicle.” In Computing in civil and building engineering, 1788–1795. Reston, VA: ASCE.
Garcia-Garcia, A., S. Orts-Escolano, S. Oprea, V. Villena-Martinez, and J. Garcia-Rodriguez. 2017. “A review on deep learning techniques applied to semantic segmentation.” Preprint, submitted April 22, 2017. https://arxiv.org/abs/1704.06857.
GitHub. 2022. “Crack detection for masonry surfaces.” Accessed March 10, 2023. https://github.com/dimitrisdais/crack_detection_CNN_masonry.
GitHub. 2023. “Pranjal-bisht (Pranjal Bisht).” Accessed August 13, 2023. https://github.com/Pranjal-bisht/Crack_segmentation_using_deep_learning_techniques.
Gonzales, R. C., R. E. Woods, and S. L. Eddins. 2004. Digital image processing using MATLAB. Upper Saddle River, NJ: Pearson Prentice Hall.
He, K., X. Zhang, S. Ren, and J. Sun. 2016. “Deep residual learning for image recognition.” In Proc., IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 770–778. New York: IEEE Publications.
Howard, A. G., M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. 2017. “MobileNets: Efficient convolutional neural networks for mobile vision applications.” Preprint, submitted April 17, 2017. https://arxiv.org/abs/1704.04861.
Hsieh, Y. A., and Y. J. Tsai. 2020. “Machine learning for crack detection: Review and model performance comparison.” J. Comput. Civ. Eng. 34 (5): 04020038. https://doi.org/10.1061/(ASCE)CP.1943-5487.0000918.
Kingma, D. P., and J. L. Ba. 2015. “Adam: A method for stochastic optimization.” Preprint, submitted December 22, 2014. https://arxiv.org/abs/1412.6980.
Lee, D., J. Kim, and D. Lee. 2019. “Robust concrete crack detection using deep learning-based semantic segmentation.” Int. J. Aeronaut. Space Sci. 20 (1): 287–299. https://doi.org/10.1007/s42405-018-0120-5.
Li, S., and X. Zhao. 2019. “Image-based concrete crack detection using convolutional neural network and exhaustive search technique.” Adv. Civ. Eng. 2019 (Apr): 1–12. https://doi.org/10.1155/2019/6520620.
Lin, T., P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie. 2017. “Feature pyramid networks for object detection.” In Proc., IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 936–944. Honolulu, HI: IEEE. https://doi.org/10.1109/CVPR.2017.106.
Lin, T. Y., P. Goyal, R. Girshick, K. He, and P. Dollár. 2020. “Focal loss for dense object detection.” IEEE Trans. Pattern Anal. Mach. Intell. 42 (2): 318–327. https://doi.org/10.1109/TPAMI.2018.2858826.
Liu, Y., J. Yao, X. Lu, R. Xie, and L. Li. 2019. “DeepCrack: A deep hierarchical feature learning architecture for crack segmentation.” Neurocomputing 338 (Apr): 139–153. https://doi.org/10.1016/j.neucom.2019.01.036.
Long, J., E. Shelhamer, and T. Darrell. 2015. “Fully convolutional networks for semantic segmentation.” In Proc., IEEE Conf. on Computer Vision and Pattern Recognition, 3431–3440. New York: IEEE.
Marin, B., K. Brown, and M. S. Erden. 2021. “Automated masonry crack detection with faster R-CNN.” In Vol. 2021 of Proc., 17th Int. Conf. on Automation Science and Engineering, 333–340. New York: IEEE Publications.
Rezaie, A., R. Achanta, M. Godio, and K. Beyer. 2020. “Comparison of crack segmentation using digital image correlation measurements and deep learning.” Constr. Build. Mater. 261 (Nov): 120474. https://doi.org/10.1016/j.conbuildmat.2020.120474.
Ronneberger, O., P. Fischer, and T. Brox. 2015. “U-Net: Convolutional networks for biomedical image segmentation.” In Proc., Int. Conf. on Medical Image Computing and Computer-Assisted Intervention, 234–241. Cham: Springer.
Simonyan, K., and A. Zisserman. 2014. “Very deep convolutional networks for large-scale image recognition.” Preprint, submitted September 4, 2014. https://arxiv.org/abs/1409.1556.
Szegedy, C., V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. 2016. “Rethinking the inception architecture for computer vision.” In Proc., IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2818–2826. New York: IEEE Publications.
Yakubovskiy, P. 2019. “Segmentation models, GitHub.” Accessed April 10, 2023. https://github.com/qubvel/segmentation_models.
Yang, X., H. Li, Y. Yu, X. Luo, T. Huang, and X. Yang. 2018. “Automatic pixel-level crack detection and measurement using fully convolutional network.” Comput.-Aided Civ. Infrastruct. Eng. 33 (12): 1090–1109. https://doi.org/10.1111/mice.12412.
Zhao, H., J. Shi, X. Qi, X. Wang, and J. Jia. 2017. “Pyramid scene parsing network.” In Proc., IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 6230–6239. Honolulu, HI: IEEE.
Zhou, S., C. Canchila, and W. Song. 2023. “Deep learning-based crack segmentation for civil infrastructure: Data types, architectures, and benchmarked performance.” Autom. Constr. 146 (Feb): 104678. https://doi.org/10.1016/j.autcon.2022.104678.

Information & Authors


Published In

Practice Periodical on Structural Design and Construction
Volume 29, Issue 2, May 2024

History

Received: Jun 19, 2023
Accepted: Oct 16, 2023
Published online: Dec 22, 2023
Published in print: May 1, 2024
Discussion open until: May 22, 2024


Authors

Affiliations

Pranjal Bisht, Research Scholar, Dept. of Civil Engineering, Indian Institute of Technology (BHU), Varanasi, Uttar Pradesh 221005, India (corresponding author). ORCID: https://orcid.org/0000-0003-0034-4287. Email: [email protected]
Dept. of Civil Engineering, Indian Institute of Technology (BHU), Varanasi, Uttar Pradesh 221005, India. Email: [email protected]
Professor, Dept. of Civil Engineering, Indian Institute of Technology (BHU), Varanasi, Uttar Pradesh 221005, India. ORCID: https://orcid.org/0000-0001-9853-6054. Email: [email protected]
