Novel Method of Monocular Real-Time Feature Point Tracking for Tethered Space Robots
Publication: Journal of Aerospace Engineering
Volume 27, Issue 6
Abstract
This paper proposes a visual perception system for the automatic rendezvous of a tethered space robot (TSR) from 100 to 0.15 m. The work focuses on the core problem of tracking the entire contour of a noncooperative moving target in real time. To address the many challenges posed by a dynamic scene, a novel feature tracking algorithm is developed: the monocular real-time robust feature tracking algorithm (MRRFT). To generate a robust target model, improved speeded-up robust features (SURF) are used to extract features from a marked target box. The tracker then matches features with the pyramid Kanade-Lucas-Tomasi (P-KLT) algorithm and eliminates mismatched points by a statistical method. The greedy snake algorithm is applied to obtain the exact location of the target box and to update it automatically. A discrete feature filter and an adaptive feature-updating strategy further enhance robustness. A three-dimensional (3D) simulation and a semiphysical system are developed to evaluate the method. Extensive experiments demonstrate that the tracker stably tracks satellite models with simple structures, with higher accuracy and lower computation time than good features to track (GFTT)+P-KLT or scale-invariant feature transform (SIFT)+P-KLT.
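The statistical elimination of mismatched points after P-KLT matching can be illustrated with a short sketch. The paper does not specify the exact statistic used; the version below is a minimal robust-consensus filter of the author's-described kind, assuming a median/median-absolute-deviation (MAD) test on per-point displacement vectors, with the threshold `k` as an assumed parameter.

```python
import numpy as np

def filter_mismatches(prev_pts, next_pts, k=3.0):
    """Reject tracked points whose displacement deviates from the
    consensus motion (a hypothetical stand-in for the statistical
    mismatch-elimination step; k is an assumed tuning parameter).

    prev_pts, next_pts: (N, 2) arrays of matched point coordinates.
    Returns a boolean mask marking points considered correctly tracked.
    """
    disp = next_pts - prev_pts                 # per-point motion vectors
    med = np.median(disp, axis=0)              # consensus (median) motion
    dev = np.linalg.norm(disp - med, axis=1)   # deviation from consensus
    mad = np.median(dev)                       # median absolute deviation
    # Keep points within k robust standard deviations of the median motion
    # (1.4826 scales MAD to sigma under a Gaussian assumption).
    return dev <= k * 1.4826 * max(mad, 1e-9)

# Example: nine points move coherently by (5, 2); one mismatch jumps wildly.
prev_pts = np.tile(np.arange(10, dtype=float).reshape(-1, 1), (1, 2))
next_pts = prev_pts + np.array([5.0, 2.0])
next_pts[7] += np.array([40.0, -30.0])         # simulated mismatch
mask = filter_mismatches(prev_pts, next_pts)
print(int(mask.sum()))  # 9 of the 10 points survive the filter
```

In a full pipeline, the surviving points would feed the greedy snake stage that relocates and updates the target box each frame.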
Acknowledgments
The author would like to acknowledge Khvedchenya Eugene and Liu Yu for their support. This research was sponsored by the National Natural Science Foundation of China (Grant Nos. 61005062 and 11272256) and the Doctorate Foundation of Northwestern Polytechnical University (Grant No. CX201304).
References
Bay, H., Tuytelaars, T., and Van Gool, L. (2006). “Surf: Speeded up robust features.” Computer Vision—ECCV 2006, Springer, Berlin, 404–417.
Benfold, B., and Reid, I. (2011). “Stable multi-target tracking in real-time surveillance video.” Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conf., IEEE, New York, 3457–3464.
Boning, P., and Dubowsky, S. (2010). “Coordinated control of space robot teams for the on-orbit construction of large flexible space structures.” Adv. Rob., 24(3), 303–323.
Bradski, G., and Kaehler, A. (2008). Learning OpenCV: Computer vision with the OpenCV library, O’Reilly Media, 329–330.
Calonder, M., Lepetit, V., Strecha, C., and Fua, P. (2010). “Brief: Binary robust independent elementary features.” Computer Vision—ECCV 2010, Springer, Berlin, 778–792.
Delta Tau Data Systems. (2003). “Pewin32PRO software reference manual.” 〈http://www.irtfweb.ifa.hawaii.edu/~tcs3/tcs3/vendor_info/DeltaTau/ref/PEWIN32%20PRO.pdf〉 (May 2012).
Fleet, D. J. (1992). Measurement of image velocity, Univ. of Toronto, Toronto, ON, Canada.
Gong, J., Liu, F., Song, C., Cui, J., and Li, Z. (2012). “Research on the moving vehicle detection algorithm based on the motion vector.” Instrumentation, measurement, circuits and systems, AISC, Vol. 127, Springer, Berlin, 41–49.
Intel. (2012). “Open computer vision library.” 〈http://opencv.willowgarage.com/wiki/〉 (May 2012).
Khvedchenya, E. (2011). “Feature descriptor comparison report.” 〈http://computer-vision-talks.com/〉.
Leinz, M. R., et al. (2008). “Orbital express autonomous rendezvous and capture sensor system (ARCSS) flight test results.” SPIE Defense and Security Symp., International Society for Optics and Photonics, 69580A.
Lenz, P., Ziegler, J., Geiger, A., and Roser, M. (2011). “Sparse scene flow segmentation for moving object detection in urban environments.” Proc., Intelligent Vehicles Symp. (IV), IEEE, New York, 926–932.
Li, Y. C., Luo, Y. F., Li, Y. Z., and Hu, J. (2007). “Spacecraft visual simulation based on Vega Prime.” J. Projectiles, Rockets, Missiles and Guidance, 27(4), 222–225.
Liu, Y., Wang, J. D., and Li, P. (2011). “A feature point tracking method based on the combination of SIFT algorithm and KLT matching algorithm.” J. Astronaut. Sci., 32(7), 1618–1625 (in Chinese).
Lowe, D. G. (1999). “Object recognition from local scale-invariant features.” Computer Vision, 1999. Proc., 7th IEEE Int. Conf., Vol. 2, IEEE, New York, 1150–1157.
Lowe, D. G. (2004). “Distinctive image features from scale-invariant keypoints.” Int. J. Comput. Vision, 60(2), 91–110.
Lucas, B. D., and Kanade, T. (1981). “An iterative image registration technique with an application to stereo vision.” IJCAI, Vol. 81, 674–679.
Luo, J., and Oubong, G. (2009). “A comparison of SIFT, PCA-SIFT and SURF.” Int. J. Image Process., 3(4), 143–152.
Lyn, C., and Mooney, G. (2007). “Computer vision systems for robotic servicing of the Hubble Space Telescope.” Proc., AIAA SPACE 2007 Conf. and Exposition, AIAA, Reston, VA.
Maji, S. (2006). A comparison of feature descriptors, Univ. of California, Berkeley, Berkeley, CA.
Makhmalbaf, A., Park, M. W., Yang, J., Brilakis, I., and Vela, P. A. (2010). “2D vision tracking methods’ performance comparison for 3D tracking of construction resources.” Proc., Construction Research Congress, Vol. 1, 459–469.
Mian, A. S. (2008). “Realtime visual tracking of aircrafts.” Proc., 2008 Digital Image Computing: Techniques and Applications, IEEE, New York, 351–356.
Nishida, S. I., Kawamoto, S., Okawa, Y., Terui, F., and Kitamura, S. (2009). “Space debris removal system using a small satellite.” Acta Astronaut., 65(1), 95–102.
Nohmi, M. (2009). “Mission design of a tethered robot satellite ‘STARS’ for orbital experiment.” Control applications, (CCA) & intelligent control, (ISIC), IEEE, New York, 1075–1080.
Park, M. W., Koch, C., and Brilakis, I. (2012). “Three-dimensional tracking of construction resources using an on-site camera system.” J. Comput. Civ. Eng., 541–549.
Park, M. W., Palinginis, E., and Brilakis, I. (2012). “Detection of construction workers in video frames for automatic initialization of vision trackers.” Proc., Construction Research Congress 2012, 940–949.
Rabaud, V., and Belongie, S. (2006). “Counting crowded moving objects.” Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conf., Vol. 1, IEEE, New York, 705–711.
Rublee, E., Rabaud, V., Konolige, K., and Bradski, G. (2011). “ORB: An efficient alternative to SIFT or SURF.” Computer Vision (ICCV), 2011 IEEE Int. Conf. on, IEEE, New York, 2564–2571.
Shi, J. B., and Tomasi, C. (1994). “Good features to track.” 1994 Proc., IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR'94), IEEE Computer Society Press, Seattle, WA, 593–600.
SolidWorks API [Computer software]. Dassault Systèmes SolidWorks, Waltham, MA.
Song, L. H. (2011). The study of target positioning technology based on UAV image sequence. The Institute of Surveying and Mapping, PLA Information Engineering Univ. (in Chinese).
Sugimura, D., Kitani, K. M., Okabe, T., Sato, Y., and Sugimoto, A. (2009). “Tracking people in crowds based on clustering feature trajectories using gait features and local appearances.” Proc., MIRU 2009, 135–142.
Szeliski, R. (2010). Computer vision: Algorithms and applications, Springer, New York, 234–236.
Takacs, G., Chandrasekhar, V., Tsai, S., Chen, D., Grzeszczuk, R., and Girod, B. (2010). “Unified real-time tracking and recognition with rotation-invariant fast features.” 2010 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE, New York, 934–941.
Tao, J. S. H. (2003). “The study of imaging quality and resolution of CCD space camera based on model experiment and computer simulation.” Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Science, Changchun, China.
Thienel, J. K., Van Eepoel, J. M., and Sanner, R. M. (2006). “Accurate state estimation and tracking of a non-cooperative target vehicle.” Proc., AIAA Guidance, Navigation, and Control Conf., 5511–5522.
Tomasi, C., and Kanade, T. (1991). Detection and tracking of point features, School of Computer Science, Carnegie Mellon Univ., Pittsburgh.
Tsuduki, Y., and Fujiyoshi, H. (2009). “A method for visualizing pedestrian traffic flow using sift feature point tracking.” PSIVT '09 Proc., 3rd Pacific Rim Symp. on Advances in Image and Video Technology, Springer, Berlin, 25–36.
Tsuduki, Y., Fujiyoshi, H., and Kanade, T. (2007). “Mean shift-based point feature tracking using sift.” Proc., IPSJ: Computer Vision and Image Media (CVIM), Vol. 1, 101–108.
Vaudrey, T., Wedel, A., Rabe, C., Klappstein, J., Klette, R. (2008). “Evaluation of moving object segmentation comparing 6D-vision and monocular motion constraints.” Image and Vision Computing New Zealand, 2008. IVCNZ 2008. 23rd Int. Conf., IEEE, New York, 1–6.
Wang, S. H., Li, Z. H., Zhang, Z., and Tan, J. Y. (2012). “Video-based traffic parameter extraction with an improved vehicle tracking algorithm.” Proc., CICTP 2012, 856–866.
Williams, D. J., and Shah, M. (1992). “A fast algorithm for active contours and curvature estimation.” Comput. Vis. Graph Image Process., 55(1), 14–26.
Xiong, C. Z., Pang, Y. G., Li, Z. X., Liu, Y. L., and Li, Y. H. (2009). “Vehicle tracking from videos based on mean shift algorithm.” Proc., ICCTP 2009: Critical Issues in Transportation Systems Planning, Development, and Management, ASCE, Reston, VA, 1–8.
Yilmaz, A., Javed, O., and Shah, M. (2006). “Object tracking: A survey.” ACM J. Comput. Surveys, 38(4).
Zeng, D. X., and Du, X. P. (2011). “Influence of detector’s pixel size on performance of optical detection system.” Chinese Space Sci. Technol., 6(3), 51–55 (in Chinese).
Zhai, G., Qiu, Y., Liang, B., and Li, C. (2009). “On-orbit capture with flexible tether—net system.” Acta Astronaut., 65(5), 613–623.
Copyright
© 2014 American Society of Civil Engineers.
History
Received: Dec 12, 2012
Accepted: Jun 28, 2013
Published online: Jul 2, 2013
Discussion open until: Oct 27, 2014
Published in print: Nov 1, 2014