Abstract

In metro stations around the world, passengers pass through ticket barrier gates without paying. This behavior, known as fare evasion, is troublesome and costly to prevent. Tailgating, a typical form of fare evasion, refers to following a fare-paying passenger through the gate; it can also be dangerous, because the tailgater risks being injured by the closing barrier. Existing surveillance cameras in stations can be leveraged to detect tailgating automatically, yielding a vision-based method that is both low-cost and efficient. However, occlusion by crowds during rush hours lowers the accuracy of conventional recognition methods based on convolutional neural networks. Moreover, the behavior of a tailgater can resemble that of fare-paying passengers if the positional relationship between passengers is not taken into account. We therefore propose a tailgating recognition method that takes video as input. First, human pose data are estimated in each frame, with incomplete skeletons retained. Second, the persons appearing in adjacent frames are matched, producing a sequence of skeleton data for each pedestrian. Third, a time series of the positional relationship between passengers and the ticket barrier gate is extracted, and the passing interval between passengers is defined as the indicator for detecting tailgating. Experiments showed that tailgaters can be distinguished effectively from fare-paying passengers and that the time-series representation copes with joints missing in a few frames because of occlusion or misidentification.
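
The passing-interval indicator described above can be illustrated with a short sketch. The code below is a minimal, hypothetical example of the final step only: given per-pedestrian skeleton tracks, it estimates the frame at which each pedestrian crosses the gate line and flags pairs whose passing interval falls below a threshold. The gate-line coordinate, joint indices, frame threshold, and all function names are assumptions made for illustration and are not taken from the article.

```python
# Minimal sketch of the passing-interval indicator outlined in the abstract.
# The gate-line coordinate, joint indices, and threshold below are illustrative
# assumptions, not values from the article.
import numpy as np

GATE_X = 320.0            # assumed image x-coordinate of the gate line
MIN_INTERVAL_FRAMES = 25  # assumed threshold: a shorter passing interval suggests tailgating


def hip_center(skeleton):
    """Mean of the left/right hip joints, tolerating missing joints stored as NaN."""
    hips = np.asarray(skeleton, dtype=float)[[8, 11]]  # OpenPose-style hip indices (assumed)
    return np.nanmean(hips, axis=0)


def crossing_frame(track):
    """First frame index at which a pedestrian's hip center moves past the gate line."""
    for frame_idx, skeleton in sorted(track.items()):
        x = hip_center(skeleton)[0]
        if not np.isnan(x) and x > GATE_X:
            return frame_idx
    return None


def detect_tailgating(tracks):
    """tracks: {pedestrian_id: {frame_idx: (n_joints, 2) joint array}} -> suspect id pairs."""
    crossings = [(pid, f) for pid, t in tracks.items()
                 if (f := crossing_frame(t)) is not None]
    crossings.sort(key=lambda item: item[1])          # order pedestrians by crossing time
    return [(a, b) for (a, fa), (b, fb) in zip(crossings, crossings[1:])
            if fb - fa < MIN_INTERVAL_FRAMES]         # two passages within one gate cycle
```

Averaging the two hip joints, rather than relying on a single joint, keeps the crossing estimate usable when an individual joint is missing in a frame, which matches the abstract's emphasis on retaining incomplete skeletons.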


Data Availability Statement

Some or all data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant No. 61703308) and the Sichuan Province Science and Technology Program (Grant No. 2019YFG0040). The authors gratefully acknowledge the invaluable contribution of the reviewers.


Information & Authors

Information

Published In

Journal of Transportation Engineering, Part A: Systems, Volume 148, Issue 7, July 2022

History

Received: Nov 9, 2021
Accepted: Jan 24, 2022
Published online: Apr 21, 2022
Published in print: Jul 1, 2022
Discussion open until: Sep 21, 2022


Authors

Affiliations

Associate Professor, Shanghai Key Laboratory of Rail Infrastructure Durability and System Safety, Tongji Univ., Shanghai 201804, China. ORCID: https://orcid.org/0000-0003-3217-7452. Email: [email protected]
Master’s Degree Candidate, The Key Laboratory of Road and Traffic Engineering, Ministry of Education, Tongji Univ., Shanghai 201804, China. ORCID: https://orcid.org/0000-0003-0232-3316. Email: [email protected]
CASCO Signal Ltd., No. 15 Bldg., Shibei Hi-tech Park, No. 299 Wenshui Rd., Jing’an District, Shanghai 200070, China (corresponding author). ORCID: https://orcid.org/0000-0003-0650-8174. Email: [email protected]
Master’s Degree Candidate, The Key Laboratory of Road and Traffic Engineering, Ministry of Education, Tongji Univ., Shanghai 201804, China. Email: [email protected]
Xiaowen Liu [email protected]
Master’s Degree Candidate, The Key Laboratory of Road and Traffic Engineering, Ministry of Education, Tongji Univ., Shanghai 201804, China. Email: [email protected]
Master’s Degree Candidate, The Key Laboratory of Road and Traffic Engineering, Ministry of Education, Tongji Univ., Shanghai 201804, China. ORCID: https://orcid.org/0000-0003-3462-4551. Email: [email protected]
Zhaoxin Zhang [email protected]
Ph.D. Candidate, The Key Laboratory of Road and Traffic Engineering, Ministry of Education, Tongji Univ., Shanghai 201804, China. Email: [email protected]


