Technical Papers
Feb 7, 2023

Suppression of Roll Oscillations of a Canard-Configuration Model Using Fluid Effector and Reinforcement Learning

Publication: Journal of Aerospace Engineering
Volume 36, Issue 3

Abstract

Uncommanded roll oscillations at high angles of attack are dangerous and pose significant challenges for flight control. This paper constructs a stability augmentation system to suppress roll oscillations with nonzero mean roll angles in a canard-configuration model. Because conventional ailerons are weakened by large-scale flow separation at high angles of attack, spanwise blowing was used as a fluid effector to generate lateral control moments. The control effect and control mechanism of spanwise blowing were analyzed through force measurements and particle image velocimetry (PIV) experiments, respectively. Spanwise blowing generates the control moment by changing the trajectory of the leading-edge vortex and delaying vortex breakdown. Subsequently, virtual flight experiment technology was used to train a policy for the stability augmentation system in the wind tunnel, applying deep reinforcement learning to real-world data. While the agent was tested, the transient flow fields around the model were captured synchronously using time-resolved PIV (TR-PIV). The test results show that the agent learned to keep the model's roll angle near zero by effectively controlling the flow field with the fluid effector.
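
To make the closed loop described in the abstract concrete, the following is a minimal sketch of a roll-suppression environment in the spirit of the experiment: the state is the roll angle and roll rate, the action is a differential spanwise-blowing command, and the reward penalizes deviation from zero roll. The dynamics, gains, and reward weights below are illustrative assumptions, not the paper's identified model.

```python
import numpy as np

class RollEnv:
    """Toy single-degree-of-freedom roll dynamics with a jet 'fluid effector'.

    Illustrative stand-in for the wind-tunnel rig: the state is
    (roll angle phi [rad], roll rate p [rad/s]), the action is a
    differential spanwise-blowing command in [-1, 1], and the reward
    penalizes deviation from zero roll. The negatively damped oscillator
    below is a classic wing-rock surrogate, assumed here for
    illustration; it is NOT the paper's identified model.
    """

    def __init__(self, dt=0.01, k_jet=2.0):
        self.dt = dt        # integration time step, s
        self.k_jet = k_jet  # assumed jet effectiveness, rad/s^2 per unit command
        self.reset()

    def reset(self):
        # Start each episode from a small random roll disturbance.
        self.phi = np.random.uniform(-0.3, 0.3)
        self.p = 0.0
        return np.array([self.phi, self.p])

    def step(self, action):
        a = float(np.clip(action, -1.0, 1.0))
        # Negative linear damping self-excites the roll; the cubic term
        # saturates it into a limit cycle; the last term is the jet moment.
        p_dot = 0.4 * self.p - 2.0 * self.p**3 - 4.0 * self.phi + self.k_jet * a
        self.p += p_dot * self.dt
        self.phi += self.p * self.dt
        # Quadratic penalty on roll angle, roll rate, and control effort.
        reward = -(self.phi**2 + 0.1 * self.p**2 + 0.01 * a**2)
        done = abs(self.phi) > 1.5  # terminate on roll departure
        return np.array([self.phi, self.p]), reward, done
```

With the jet off (action = 0), the negatively damped term lets small roll disturbances grow into a limit cycle, mimicking the self-excited oscillations the paper sets out to suppress.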

Practical Applications

The rapid development of artificial intelligence (AI) brings new ideas to many industries. Among AI technologies, deep reinforcement learning is a self-evolving technique well suited to complex control and decision-making problems. On the other hand, the complex dynamic characteristics of aircraft at high angles of attack lead to uncommanded motions that make flight dangerous. This paper therefore focuses on suppressing the uncommanded motion of a canard configuration at high angles of attack using deep reinforcement learning. Jet flow control was used to play the role of the ailerons. After sufficient training in the wind tunnel, the AI agent learned how to suppress the uncommanded motion of the aircraft and exhibited interesting behavioral logic. These results show that deep reinforcement learning can be applied to complex control problems in aerospace science; however, its practicality in real flight requires further verification.
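
The paper trains a deep reinforcement learning agent with continuous actions in the wind tunnel; as a self-contained, hedged illustration of that trial-and-error loop, the sketch below instead trains a tabular Q-learning agent over discretized states and blowing commands, reusing the RollEnv class from the sketch after the abstract. It shows only the generic loop (act, observe the reward, update value estimates), not the authors' algorithm, and none of the hyperparameters are taken from the paper.

```python
import numpy as np

# Tabular Q-learning on the RollEnv sketch above. All hyperparameters
# here are illustrative; none are taken from the paper.
N_PHI, N_P, N_A = 21, 21, 5
phi_bins = np.linspace(-1.5, 1.5, N_PHI)
p_bins = np.linspace(-3.0, 3.0, N_P)
actions = np.linspace(-1.0, 1.0, N_A)   # discrete blowing commands
Q = np.zeros((N_PHI, N_P, N_A))         # state-action value table

def discretize(state):
    """Map the continuous (phi, p) state to table indices."""
    phi, p = state
    i = int(np.clip(np.digitize(phi, phi_bins) - 1, 0, N_PHI - 1))
    j = int(np.clip(np.digitize(p, p_bins) - 1, 0, N_P - 1))
    return i, j

env = RollEnv()
alpha, gamma, eps = 0.1, 0.99, 0.2      # step size, discount, exploration rate
for episode in range(2000):
    s = discretize(env.reset())
    for t in range(500):                # 5 s episodes at dt = 0.01 s
        # Epsilon-greedy exploration over the discrete blowing commands.
        a = np.random.randint(N_A) if np.random.rand() < eps else int(np.argmax(Q[s]))
        next_state, r, done = env.step(actions[a])
        s2 = discretize(next_state)
        # One-step temporal-difference update toward r + gamma * max_a' Q(s', a').
        target = r + (0.0 if done else gamma * Q[s2].max())
        Q[s + (a,)] += alpha * (target - Q[s + (a,)])
        s = s2
        if done:
            break
```

With these illustrative settings, the agent typically learns within a few thousand toy episodes to command blowing that opposes the roll motion, which parallels (in miniature) the zero-roll-holding behavior the trained agent exhibited in the wind tunnel.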

Data Availability Statement

Some or all data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

This work was supported by the Natural Science Foundation of Jiangsu Province (Grant No. BK20200482) and the National Natural Science Foundation of China (Grant No. 12002166).

Information & Authors

Published In

Journal of Aerospace Engineering
Volume 36, Issue 3, May 2023

History

Received: Jan 4, 2022
Accepted: Nov 16, 2022
Published online: Feb 7, 2023
Published in print: May 1, 2023
Discussion open until: Jul 7, 2023


Authors

Affiliations

Yizhang Dong, Ph.D.
Ph.D. Student, College of Aerospace Engineering, Nanjing Univ. of Aeronautics and Astronautics, Nanjing 210000, China; National Key Lab of Computational Mathematics and Experimental Physics, Nandahongmen St. 1, Beijing 100076, China. Email: [email protected]
Professor, College of Aerospace Engineering, Nanjing Univ. of Aeronautics and Astronautics, Yudao St. 29, Nanjing, Jiangsu 210016, China. Email: [email protected]
Associate Professor, National Key Laboratory of Transient Physics, Nanjing Univ. of Science and Technology, Xiaolingwei St. 200, Xuanwu District, Nanjing, Jiangsu 210000, China (corresponding author). Email: [email protected]
Zengran Ge, Ph.D.
Ph.D. Student, College of Aerospace Engineering, Nanjing Univ. of Aeronautics and Astronautics, Yudao St. 29, Nanjing, Jiangsu 210016, China. Email: [email protected]


