Abstract

The rail running band is the continuous, strip-shaped region on the rail surface produced by the rolling contact of train wheels. It maps directly to the wheel–rail interaction, which in turn strongly influences the safety and comfort of train operations; accurate detection of the running band is therefore crucial. Traditional running band detection relies on manual inspection, in which measurements are taken on the rail with a scale. This approach suffers from high labor costs, slow detection speeds, and a lack of systematic data preservation. This paper proposes R2Bnet, a lightweight semantic segmentation algorithm that achieves pixel-level detection of rail running bands. R2Bnet is an enhanced encoder-decoder architecture built upon ShuttleNet. Unlike ShuttleNet, R2Bnet optimizes the number of repeated encoder-decoder modules and redesigns the encoder's residual structure to match the unique characteristics of rail running bands, allowing the backbone network to capture long-range dependencies effectively. Furthermore, R2Bnet integrates an efficient channel attention mechanism to sharpen the focus on critical regions and improve feature representations. On 300 testing images, R2Bnet achieved an F-measure of 98.47% and a mean intersection over union (mIoU) of 0.9617. Notably, R2Bnet outperformed six state-of-the-art semantic segmentation models while running 39% faster than the average speed of those six networks.
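The F-measure and mIoU reported above are standard pixel-level segmentation metrics. As a hedged illustration (not the authors' evaluation code), they can be computed from the per-pixel confusion counts of a binary running-band mask as follows; function and variable names are illustrative:

```python
def segmentation_metrics(pred, gt):
    """Compute F-measure and mean IoU for binary segmentation masks.

    pred, gt: flat sequences of 0/1 labels (1 = running-band pixel).
    Returns (f_measure, miou).
    """
    # Per-pixel confusion counts.
    tp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 1)
    fp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 0)
    fn = sum(1 for p, g in zip(pred, gt) if p == 0 and g == 1)
    tn = sum(1 for p, g in zip(pred, gt) if p == 0 and g == 0)

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)

    # Mean IoU averaged over the two classes (background, running band).
    iou_fg = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    iou_bg = tn / (tn + fp + fn) if tn + fp + fn else 0.0
    miou = (iou_fg + iou_bg) / 2
    return f_measure, miou
```

A perfect prediction yields 1.0 for both metrics; the paper's 98.47% F-measure and 0.9617 mIoU indicate near-perfect overlap between predicted and ground-truth running-band masks.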

Data Availability Statement

All data, models, or code that support the findings of this paper are available from the corresponding author upon reasonable request.

Acknowledgments

The present work was supported by the Technology Research and Development Program of China National Railway Group Co., Ltd. (K2022G034), the National Natural Science Foundation of China (51908474), the Natural Science Foundation of Sichuan Province (2023NSFSC0398 and 2023NSFSC0884), and the Fundamental Research Funds for the Central Universities (2682022ZTPY067).
Author contributions: Xiancai Yang: network conception and design; experiment design and analysis of results; and manuscript preparation. Mingjing Yu: network conception and design. Allen A. Zhang: data preparation; and experiment design and analysis of results. Yao Qian: data preparation; and manuscript preparation. Zeyu Liu: data preparation. Jingmang Xu: data preparation. Ping Wang: data preparation.


Published In

Journal of Infrastructure Systems, Volume 30, Issue 3, September 2024

History

Received: Oct 13, 2023
Accepted: Feb 21, 2024
Published online: May 6, 2024
Published in print: Sep 1, 2024
Discussion open until: Oct 6, 2024


Authors

Affiliations

Xiancai Yang [email protected]
Master’s Student, Key Laboratory of High-Speed Railway Engineering, Ministry of Education, School of Civil Engineering, Southwest Jiaotong Univ., Chengdu, Sichuan 610031, China. Email: [email protected]
Mingjing Yue [email protected]
Doctoral Student, Key Laboratory of High-Speed Railway Engineering, Ministry of Education, School of Civil Engineering, Southwest Jiaotong Univ., Chengdu, Sichuan 610031, China. Email: [email protected]
Allen A. Zhang, Ph.D. [email protected]
Professor, Key Laboratory of High-Speed Railway Engineering, Ministry of Education, School of Civil Engineering, Southwest Jiaotong Univ., Chengdu, Sichuan 610031, China. Email: [email protected]
Yao Qian, Ph.D. [email protected]
Professor, Key Laboratory of High-Speed Railway Engineering, Ministry of Education, School of Civil Engineering, Southwest Jiaotong Univ., Chengdu, Sichuan 610031, China (corresponding author). Email: [email protected]
Jingmang Xu, Ph.D. [email protected]
Professor, Key Laboratory of High-Speed Railway Engineering, Ministry of Education, School of Civil Engineering, Southwest Jiaotong Univ., Chengdu, Sichuan 610031, China. Email: [email protected]
Ping Wang, Ph.D. [email protected]
Professor, Key Laboratory of High-Speed Railway Engineering, Ministry of Education, School of Civil Engineering, Southwest Jiaotong Univ., Chengdu, Sichuan 610031, China. Email: [email protected]
