Abstract

The construction industry has long been plagued by low productivity and high injury and fatality rates. Robots have been envisioned to automate the construction process, thereby substantially improving construction productivity and safety. Despite this enormous potential, teaching robots to perform complex construction tasks is challenging. We present a generalizable framework that harnesses human teleoperation data to train construction robots to perform repetitive construction tasks. First, we develop a teleoperation method and interface to control robots on construction sites, serving as an intermediate solution toward full automation. Teleoperation data from human operators, along with context information from the job site, can be collected for robot learning. Second, we propose a new method for extracting keyframes from human operation data to reduce noise and redundancy in the training data, thereby improving robot learning efficacy, together with a hierarchical imitation learning method that incorporates the keyframes to train the robot to generate appropriate trajectories for construction tasks. Third, we model the robot’s visual observations of the working space in a compact latent space to improve learning performance and reduce computational load. To validate the proposed framework, we conduct experiments in which a robot learns to generate appropriate trajectories for excavation tasks from human operators’ teleoperations. The results suggest that the proposed method outperforms state-of-the-art approaches, demonstrating its significant potential for practical application.
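The abstract does not specify the keyframe-extraction criterion used in the paper. As a purely illustrative sketch of the general idea, one common heuristic keeps the frames where the tool pauses or sharply changes direction and discards the redundant frames in between; the function name, thresholds, and selection rule below are assumptions, not the authors' method.

```python
import numpy as np

def extract_keyframes(positions, vel_eps=1e-3, angle_eps=0.2):
    """Keep frames where the end effector pauses or sharply changes
    direction, plus the first and last frames (illustrative heuristic).

    positions: (T, D) sequence of end-effector positions.
    Returns sorted indices of the selected keyframes.
    """
    positions = np.asarray(positions, dtype=float)
    vel = np.diff(positions, axis=0)          # per-step displacement
    speed = np.linalg.norm(vel, axis=1)
    keep = {0, len(positions) - 1}            # always keep endpoints
    for t in range(1, len(positions) - 1):
        if speed[t - 1] > vel_eps and speed[t] <= vel_eps:
            keep.add(t)                       # motion pauses here
        elif speed[t - 1] > vel_eps and speed[t] > vel_eps:
            cos = np.dot(vel[t - 1], vel[t]) / (speed[t - 1] * speed[t])
            if np.arccos(np.clip(cos, -1.0, 1.0)) > angle_eps:
                keep.add(t)                   # sharp direction change
    return sorted(keep)
```

For example, a demonstration that moves straight along one axis and then turns a corner reduces to three keyframes: the start, the corner, and the end. A learned policy can then be trained on these keyframes rather than on every noisy sample of the raw teleoperation stream.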

Data Availability Statement

Some or all data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

This research was funded by the National Science Foundation (NSF) via Grant Nos. 2129003 and 2222810. The authors gratefully acknowledge NSF’s support. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of NSF or The University of Tennessee, Knoxville.

Information & Authors

Published In

Journal of Computing in Civil Engineering
Volume 38, Issue 6, November 2024

History

Received: Dec 14, 2023
Accepted: May 10, 2024
Published online: Aug 6, 2024
Published in print: Nov 1, 2024
Discussion open until: Jan 6, 2025

Authors

Affiliations

Postdoctoral Researcher, Dept. of Civil and Environmental Engineering, Univ. of Tennessee, Knoxville, TN 37996. Email: [email protected]
Ph.D. Candidate, Dept. of Civil and Environmental Engineering, Univ. of Tennessee, Knoxville, TN 37996. ORCID: https://orcid.org/0000-0003-3243-6805. Email: [email protected]
Mengjun Wang, S.M.ASCE [email protected]
Ph.D. Candidate, Dept. of Civil and Environmental Engineering, Univ. of Tennessee, Knoxville, TN 37996. Email: [email protected]
Associate Professor, Dept. of Civil and Environmental Engineering, Univ. of Tennessee, Knoxville, TN 37996 (corresponding author). ORCID: https://orcid.org/0000-0003-2869-9346. Email: [email protected]
Professor, Dept. of Mechanical, Aerospace, and Biomedical Engineering, Univ. of Tennessee, Knoxville, TN 37996. ORCID: https://orcid.org/0000-0003-0339-8811. Email: [email protected]
