Technical Papers
May 19, 2023

Digitalization of Traffic Scenes in Support of Intelligent Transportation Applications

Publication: Journal of Computing in Civil Engineering
Volume 37, Issue 5

Abstract

Digitalization of real-world traffic scenes is a fundamental task in the development of digital twins of road transportation. However, existing digitalization approaches are either expensive in equipment costs or unable to capture granular-level data of traffic scenes. This study proposed a vision-based method for real-time digitalization of traffic scenes that progressively models and merges the road infrastructure (static components) and the road users (dynamic components). Specifically, the former is reconstructed by leveraging unmanned aerial vehicles (UAVs) and structure from motion, and the latter is digitized from roadside surveillance videos through a new reconstruction process that applies deep learning and view geometry. Finally, the digital model of the traffic scene is built by merging the digital models of the static and dynamic components. A field experiment was performed to evaluate the performance of the proposed method. The results showed that the traffic scene can be successfully digitalized by the proposed method with promising accuracy, signifying the method’s potential for developing digital twins of road transportation in support of intelligent transportation applications.
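
The pipeline described above can be pictured with a short sketch. The following Python snippet is a minimal, hypothetical illustration of the final merging step, not the authors' implementation: a static scene model (assumed to be reconstructed offline from UAV imagery via structure from motion) is combined with road users localized from a roadside camera frame through a ground-plane homography. The detector stub, homography values, and file name are placeholders for illustration only.

```python
# Minimal sketch (not the authors' released code) of merging a static scene model
# with dynamically localized road users, assuming an image-to-ground homography.

import numpy as np

# Illustrative image-to-ground-plane homography; in practice this would come
# from camera calibration (view geometry), not from hard-coded values.
H_IMG_TO_GROUND = np.array([
    [0.02,   0.001,  -5.0],
    [0.0005, 0.03,  -12.0],
    [0.0,    0.0008,  1.0],
])


def image_point_to_ground(u, v, H):
    """Project an image pixel (u, v) onto the ground plane (X, Y) using homography H."""
    x = H @ np.array([u, v, 1.0])
    return x[:2] / x[2]


def detect_road_users(frame):
    """Stand-in for a deep-learning detector; returns 2D boxes as (u1, v1, u2, v2)."""
    return [(410, 300, 520, 380), (130, 220, 210, 280)]


def digitize_frame(frame, H, static_scene):
    """Localize detected road users on the ground plane and merge them with the static model."""
    dynamic_objects = []
    for (u1, v1, u2, v2) in detect_road_users(frame):
        # The bottom-center of a 2D box approximates the object's ground contact point.
        ground_xy = image_point_to_ground((u1 + u2) / 2.0, v2, H)
        dynamic_objects.append({"class": "vehicle", "ground_xy": ground_xy.round(2).tolist()})
    return {"static_scene": static_scene, "dynamic_objects": dynamic_objects}


if __name__ == "__main__":
    static_scene = {"mesh": "road_scene_sfm.obj"}      # hypothetical UAV/SfM reconstruction
    frame = np.zeros((720, 1280, 3), dtype=np.uint8)   # placeholder for one surveillance frame
    print(digitize_frame(frame, H_IMG_TO_GROUND, static_scene))
```

In a full system, the detection step would be performed by a trained deep-learning model and the homography estimated from camera calibration, consistent with the deep learning and view geometry components named in the abstract.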


Data Availability Statement

The annotations for the image data sets and the created models that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

This work was sponsored by a grant from the Center for Integrated Asset Management for Multimodal Transportation Infrastructure Systems (CIAMTIS), a US Department of Transportation University Transportation Center, under federal Grant No. 69A3551847103. The authors are grateful for the support. Any opinions, findings, conclusions, and recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the CIAMTIS.

Information & Authors

Information

Published In

Journal of Computing in Civil Engineering
Volume 37, Issue 5, September 2023

History

Received: Sep 29, 2022
Accepted: Feb 1, 2023
Published online: May 19, 2023
Published in print: Sep 1, 2023
Discussion open until: Oct 19, 2023

Authors

Affiliations

Linjun Lu, S.M.ASCE [email protected]
Graduate Research Assistant, Wadsworth Dept. of Civil and Environmental Engineering, West Virginia Univ., Morgantown, WV 26506. Email: [email protected]
Associate Professor, Wadsworth Dept. of Civil and Environmental Engineering, West Virginia Univ., Morgantown, WV 26506 (corresponding author). ORCID: https://orcid.org/0000-0002-8868-2821. Email: [email protected]
