ABSTRACT

For lifeline systems, routine visual inspections play a vital role in gathering structure-level data to inform asset maintenance strategies. Recent research has shown that data repeatability and objectivity can be enhanced with computer vision and robotics, whereby the manual collection, cataloguing, and quantification of defects is automated. For instance, low-cost lidar sensors integrated with a robotic platform can collect 3D geometric information about a structure via simultaneous localization and mapping (SLAM), with fused image data allowing for defect measurement. While low-cost lidars provide a fast and inexpensive way of obtaining defect measurements, the maps they produce often lack the density required for accurate quantification. To remedy this, we combine depth-completion-enhanced point cloud data from a lidar with labeled image data to extract a more complete surface estimate in physical scale. Given a 3D map, image data with known camera poses, and camera intrinsic calibrations, our approach first transforms the 3D map into a depth map (within the camera frame) through ray casting. Depth maps are then densified using a depth completion algorithm that employs image processing techniques tailored to sparse lidar map data. Lastly, the combination of labeled defect images and dense depth maps is exploited to extract high-resolution area measurements for defects such as spalls and delaminations. The accuracy of our method was assessed by comparing results to those obtained from non-depth-completed data and to ground truth measurements obtained using a high-resolution monocular camera with manual scale input for each image. On a data set containing six concrete area defects, our method reduced error by 14.2% on average, with depth-completed results deviating from the ground truth area by an average of 5.3%.
Our results show that depth completion on sparse 3D maps can be effective in fine-scale defect quantification, providing a more complete assessment of lifeline system condition using affordable sensors.
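The abstract does not specify the densification algorithm, only that it uses classical image processing on sparse lidar depth maps. As an illustration only, a minimal hole-filling scheme based on morphological dilation might look like the sketch below; the function names, the 3×3 kernel, the `max_depth` bound, and the iteration count are all assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def _max_filter3(a):
    """3x3 maximum filter (morphological dilation with a square kernel)."""
    p = np.pad(a, 1, mode="constant", constant_values=0.0)
    h, w = a.shape
    windows = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.max(np.stack(windows), axis=0)

def densify(depth, max_depth=20.0, iters=5):
    """Fill holes in a sparse depth map by repeated dilation.

    Depths are inverted (max_depth - d) before dilating, so the max
    filter keeps, at every pixel, the closest measurement found in its
    neighborhood -- a trick used in classical CPU depth-completion
    pipelines. Pixels with no measurement within reach stay 0 (invalid).
    """
    inv = np.where(depth > 0, max_depth - depth, 0.0)
    for _ in range(iters):
        inv = _max_filter3(inv)
    return np.where(inv > 0, max_depth - inv, 0.0)
```

Each dilation pass propagates valid measurements one pixel outward, so the number of iterations controls how large a hole can be bridged.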
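Likewise, once a dense depth map and a pixel-wise defect label are available, one plausible way to obtain physical-scale area measurements is per-pixel area accumulation under a pinhole camera model; `defect_area` and the focal-length parameters `fx`, `fy` below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def defect_area(depth, mask, fx, fy):
    """Estimate the physical area (m^2) of a labeled defect.

    Under a pinhole model, a pixel viewing a fronto-parallel surface at
    depth d covers roughly (d / fx) * (d / fy) square meters, so summing
    these footprints over the defect mask approximates its area.

    depth : (H, W) dense depth map in meters (0 = no measurement)
    mask  : (H, W) boolean defect label (e.g., spall or delamination)
    fx,fy : camera focal lengths in pixels
    """
    d = np.where(mask & (depth > 0), depth, 0.0)
    return float(np.sum((d / fx) * (d / fy)))
```

This makes the role of depth completion concrete: pixels without a depth value contribute nothing to the sum, so a sparser map systematically under-measures defect area.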



Information & Authors

Published In

Lifelines 2022
Pages: 566–576

History

Published online: Nov 16, 2022


Authors

Affiliations

Jake McLaughlin
Structural Dynamics Identification and Controls Laboratory, Dept. of Mechanical and Mechatronics Engineering, Univ. of Waterloo, Waterloo, ON, Canada. Email: [email protected]

Alexander Thoms
Structural Dynamics Identification and Controls Laboratory, Dept. of Civil and Environmental Engineering, Univ. of Waterloo, Waterloo, ON, Canada. Email: [email protected]

Nicholas Charron
Structural Dynamics Identification and Controls Laboratory, Dept. of Civil and Environmental Engineering and Mechanical and Mechatronics Engineering, Univ. of Waterloo, Waterloo, ON, Canada. Email: [email protected]

Sriram Narasimhan, Ph.D., P.E., M.ASCE
Sensing and Robotics for Infrastructure Laboratory, Dept. of Civil and Environmental Engineering, Univ. of California, Los Angeles, CA. Email: [email protected]
