Technical Papers
Aug 9, 2021

Semantic Deep Learning Integrated with RGB Feature-Based Rule Optimization for Facility Surface Corrosion Detection and Evaluation

Publication: Journal of Computing in Civil Engineering
Volume 35, Issue 6

Abstract

Over the last few years, convolutional neural networks (CNNs) have been applied to detect corrosion in images. Unfortunately, corrosion typically is detected with bounding boxes, without precisely segmenting corroded regions with irregular boundary shapes, which makes quantitative assessment difficult—for example, in terms of corrosion area and corrosion severity, both of which are important for engineers evaluating the performance and condition of an inspection target. In addition, training an effective CNN model requires creating a training data set by labeling the corrosion pixels in each image, which is tedious and labor intensive. This paper presents a semantic segmentation deep learning approach, together with an efficient image labeling tool, for rapidly preparing large training data sets and for effectively detecting, segmenting, and evaluating corrosion in images. The image labeling tool was developed by implementing a texture-based unsupervised image segmentation method integrated with red-green-blue (RGB) feature-based classifier optimization. The tool enables users to construct a pixel-based corrosion classifier from a small set of manually labeled images. This small labeled set is used to optimize the pixel-based corrosion classifier, which then automatically generates corrosion segments for a large number of training images. A CNN model with a semantic segmentation architecture is then trained for corrosion detection and segmentation. Finally, a corrosion evaluation method is proposed for classifying each pixel of a corrosion segment into user-prescribed categories such as heavy, medium, and light corrosion. The integrated approach was tested on images collected by professional inspection engineers. The results indicated that the proposed approach is practically applicable for corrosion assessment across a wide range of industrial facilities and civil infrastructure.
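The final step described in the abstract—classifying each pixel of a detected corrosion segment into severity categories from its RGB values—can be sketched as follows. This is a minimal illustration, not the paper's method: the paper optimizes its classifier against labeled data, whereas the `rust_score` proxy and the `LIGHT`/`MEDIUM` cutoffs below are hypothetical values chosen only to show the structure of a pixel-wise, RGB-feature-based severity rule.

```python
import numpy as np

# Hypothetical severity thresholds on a simple rust-likeness score.
# The paper optimizes its classifier against labeled images; these
# cutoffs are illustrative only.
LIGHT, MEDIUM = 0.2, 0.5

def rust_score(rgb):
    """Score each pixel's rust-likeness from its RGB channels.

    Corroded steel tends toward reddish-brown, so a simple proxy is
    the dominance of red over green and blue, clipped to [0, 1].
    rgb: float array of shape (..., 3) with values in [0, 1].
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.clip(r - 0.5 * (g + b), 0.0, 1.0)

def classify_severity(rgb, mask):
    """Label corrosion-segment pixels: 0=light, 1=medium, 2=heavy.

    mask: boolean array marking pixels inside a corrosion segment
    (e.g., produced by the semantic segmentation model). Pixels
    outside the mask are labeled -1.
    """
    score = rust_score(rgb)
    labels = np.full(mask.shape, -1, dtype=int)
    labels[mask & (score < LIGHT)] = 0
    labels[mask & (score >= LIGHT) & (score < MEDIUM)] = 1
    labels[mask & (score >= MEDIUM)] = 2
    return labels

# Toy 1x3 image: a clean gray pixel, a moderately rusty pixel, and
# a heavily rusty pixel; only the last two are inside the segment.
img = np.array([[[0.5, 0.5, 0.5], [0.6, 0.35, 0.25], [0.7, 0.2, 0.1]]])
mask = np.array([[False, True, True]])
print(classify_severity(img, mask))  # [[-1  1  2]]
```

Because the rule is applied per pixel, severity areas follow directly: counting pixels in each label class and multiplying by the ground area per pixel yields the corrosion-area statistics the abstract refers to.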


Data Availability Statement

All image data, trained models, and the data labeling code generated or used during the study are proprietary or confidential in nature and may be provided only with restrictions.

Acknowledgments

The authors are grateful to Taylor Gilmore and his team from the Bentley Asset Performance product advancement unit for providing tens of thousands of infrastructure inspection images for training, validating, and testing the models. Their support was essential for the research project and therefore is greatly appreciated.


Information & Authors

Information

Published In

Journal of Computing in Civil Engineering
Volume 35, Issue 6, November 2021

History

Received: Jan 11, 2021
Accepted: Apr 13, 2021
Published online: Aug 9, 2021
Published in print: Nov 1, 2021
Discussion open until: Jan 9, 2022


Authors

Affiliations

Atiqur Rahman, Ph.D.
Former Research Intern, Bentley Systems Inc., Watertown, CT; presently, Research Scientist, Facebook Inc., 1 Hacker Way #15, Menlo Park, CA 94025.
Zheng Yi Wu, Ph.D., M.ASCE
Bentley Fellow, Director of Applied Research, Bentley Systems Inc., 27 Siemon Company Dr., Watertown, CT 06795 (corresponding author). Email: [email protected]
Rony Kalfarisi, Ph.D.
Software Engineer II, Bentley Systems Singapore Pte. Ltd., 1 Harbour Front Place, Harbour Front Tower One, #18-01 to 03, Singapore 098633.


Cited by

  • Novel Method for Bridge Structural Full-Field Displacement Monitoring and Damage Identification, Applied Sciences, 10.3390/app13031756, 13, 3, (1756), (2023).
  • Extracting Worker Unsafe Behaviors from Construction Images Using Image Captioning with Deep Learning–Based Attention Mechanism, Journal of Construction Engineering and Management, 10.1061/JCEMD4.COENG-12096, 149, 2, (2023).
  • Impact of UAV Hardware Options on Bridge Inspection Mission Capabilities, Drones, 10.3390/drones6030064, 6, 3, (64), (2022).
  • 2022 IEEE 16th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering (TCSET), 10.1109/TCSET55632.2022.9766947, (564-568), (2022).
  • Integration of deep learning and extended reality technologies in construction engineering and management: a mixed review method, Construction Innovation, 10.1108/CI-04-2022-0075, 22, 3, (671-701), (2022).
  • Automated Rust Removal: Rust Detection and Visual Servo Control, Automation in Construction, 10.1016/j.autcon.2021.104043, 134, (104043), (2022).
  • Sequence of U-Shaped Convolutional Networks for Assessment of Degree of Delamination Around Scribe, International Journal of Computational Intelligence Systems, 10.1007/s44196-022-00141-1, 15, 1, (2022).
  • Automatic pixel-level detection and measurement of corrosion-related damages in dim steel box girders using Fusion-Attention-U-net, Journal of Civil Structural Health Monitoring, 10.1007/s13349-022-00631-y, 13, 1, (199-217), (2022).

