Technical Papers
May 25, 2024

Improved Nighttime Vehicle Detection Using Cross-Domain Image Translation

Publication: Journal of Transportation Engineering, Part A: Systems
Volume 150, Issue 8

Abstract

Accurate nighttime vehicle detection is essential for transportation monitoring and management. However, annotating nighttime vehicle data is challenging, and vehicle features differ significantly between day and night, which degrades nighttime detection with models pretrained on daytime data. In this study, nighttime vehicle detection is improved by employing a patchwise contrastive learning technique that enhances the representation of informative features for various traffic instances. An object detection network with reduced computational complexity and fewer hyperparameters is used to detect vehicles at night. Extensive experiments were performed on images acquired from a section of Jingshi Road in Jinan, China, and the impacts of learning rate and crop size are discussed. Three commonly adopted indicators, mean average precision (mAP), precision, and recall, were used to evaluate the training performance of the adopted FreeAnchor detector. Experimental results indicate that with a crop size of 320 and a learning rate of 2e-4, the developed generative adversarial network (GAN) achieves the best image-translation performance. Moreover, with a 60%/40% mix of real and translated (fake) images in model training, the FreeAnchor detector achieves the highest mAP of 96.6%. Visualized results for both image translation and nighttime vehicle detection demonstrate the effectiveness of the proposed framework. This study paves the way for leveraging GAN-based networks to assist vehicle detection under nighttime conditions.
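The patchwise contrastive learning mentioned in the abstract is commonly implemented as a patchwise InfoNCE loss: a patch in the translated image should match the feature of the same patch in the source image (positive) and differ from other patches (negatives). The sketch below illustrates that idea only; the feature dimensions, temperature value, and function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def patch_nce_loss(feat_src, feat_tgt, tau=0.07):
    """Patchwise InfoNCE loss (illustrative, CUT-style).

    feat_src, feat_tgt : (N, D) arrays of patch features from the source
    and translated images (N patches, D dimensions each).  For each row
    of feat_tgt, the matching row of feat_src is the positive; all other
    rows are negatives.  Returns the mean cross-entropy over patches.
    """
    # L2-normalize so dot products are cosine similarities
    src = feat_src / np.linalg.norm(feat_src, axis=1, keepdims=True)
    tgt = feat_tgt / np.linalg.norm(feat_tgt, axis=1, keepdims=True)
    logits = tgt @ src.T / tau  # (N, N) similarity matrix
    # Positives sit on the diagonal; softmax cross-entropy with the
    # diagonal as the target class (log-sum-exp stabilized).
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
f = rng.normal(size=(64, 128))
# Identical features: every positive dominates, so the loss is near zero
low = patch_nce_loss(f, f)
# Unrelated features: the loss sits near chance level, roughly log(N)
high = patch_nce_loss(f, rng.normal(size=(64, 128)))
print(low, high)
```

Minimizing this loss encourages the translated (e.g., nighttime-styled) image to preserve the content of each source patch while the GAN's adversarial loss supplies the target-domain appearance.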


Data Availability Statement

Some or all data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors thank the research assistants for the image data collection and labeling work. This work was partially supported by the Natural Science Foundation of China (Grant No. 52308457) and the Natural Science Foundation of Shandong Province (Grant No. ZR2023QE220). This support is gratefully acknowledged.


Information & Authors

Information

Published In

Journal of Transportation Engineering, Part A: Systems
Volume 150, Issue 8, August 2024

History

Received: Oct 12, 2023
Accepted: Mar 6, 2024
Published online: May 25, 2024
Published in print: Aug 1, 2024
Discussion open until: Oct 25, 2024


Authors

Affiliations

Postdoctoral Fellow, School of Qilu Transportation, Shandong Univ., Jinan, Shandong Province 290061, China. ORCID: https://orcid.org/0000-0002-7414-087X. Email: [email protected]
Graduate Research Assistant, School of Civil Engineering and Transportation, South China Univ. of Technology, Guangzhou, Guangdong Province 510641, China. Email: [email protected]
Honglei Chang [email protected]
Associate Professor, School of Qilu Transportation, Shandong Univ., Jinan, Shandong Province 290061, China (corresponding author). Email: [email protected]
Associate Professor, School of Civil Engineering and Transportation, South China Univ. of Technology, Guangzhou, Guangdong Province 510641, China. Email: [email protected]


