Expert Demonstration Collection of Long-Horizon Construction Tasks in Virtual Reality
Publication: Computing in Civil Engineering 2023
ABSTRACT
With the skilled labor shortage of recent years, there is a pressing need to use robots for repetitive and heavy construction tasks. Reinforcement learning (RL)-based robots have become a promising solution because of their robustness and adaptability to unseen scenarios. However, long training times and complex reward design for these robots remain challenging. An effective remedy is to collect expert demonstrations, either to better initialize the policies of RL agents or to train inverse reinforcement learning (IRL) agents that recover reward functions directly. This paper therefore proposes a comprehensive virtual reality (VR)-based platform for expert demonstration collection. To show the platform's effectiveness, a collaborative long-horizon construction task is implemented. We gathered 20 expert demonstrations as input to train a behavior cloning (BC) model. The learned policy achieved reasonable success rates in completing the task, indicating the effectiveness of our demonstration collection platform.
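The behavior-cloning step described above reduces to supervised learning on the recorded (observation, action) pairs. The following is a minimal sketch of that idea, not the paper's actual model: the demonstration data here is synthetic, the linear least-squares policy stands in for whatever BC network the authors trained, and all names are illustrative.

```python
import numpy as np

# Hypothetical demonstration data: each VR session yields a trajectory of
# (observation, action) pairs recorded from the expert. Here we fabricate
# them from a toy "expert" mapping for illustration only.
rng = np.random.default_rng(0)
expert_map = rng.normal(size=(6, 2))            # toy stand-in for the expert
obs = rng.normal(size=(500, 6))                 # 500 recorded observations
actions = obs @ expert_map + 0.01 * rng.normal(size=(500, 2))  # noisy expert actions

# Behavior cloning: fit a policy that regresses actions on observations.
# A linear least-squares fit is the simplest instance of this idea.
W, *_ = np.linalg.lstsq(obs, actions, rcond=None)

def policy(o):
    """Cloned policy: predict the expert's action for observation o."""
    return o @ W

# Evaluate how closely the cloned policy reproduces held-out expert actions.
test_obs = rng.normal(size=(100, 6))
mse = np.mean((policy(test_obs) - test_obs @ expert_map) ** 2)
print(f"held-out MSE: {mse:.4f}")
```

In practice the observations would come from the VR platform's logged trajectories and the policy would be a neural network, but the supervised training loop has the same shape.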
Published online: Jan 25, 2024