Technical Papers
Oct 9, 2023

Stability Analysis for Incremental Adaptive Dynamic Programming with Approximation Errors

Publication: Journal of Aerospace Engineering
Volume 37, Issue 1

Abstract

This paper provides a convergence and stability analysis of the incremental value iteration algorithm under the influence of various errors. Incremental control is first used to linearize the continuous-time nonlinear system; recursive least squares (RLS) identification is then introduced to identify the incremental model online. Based on the incremental model, the value iteration algorithm is used to design an optimal adaptive controller with an analytical optimal control law. Moreover, the convergence of the developed incremental value iteration algorithm is proved, and the stability of the controller is analyzed using Lyapunov stability theory. Finally, a flight control simulation verifies the robustness of the controller to various initial conditions, as well as its adaptation to actuator faults.
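The online identification step mentioned in the abstract can be sketched as a standard recursive least squares update with a forgetting factor, fitting an incremental model of the form Δx[k+1] ≈ F·Δx[k] + G·Δu[k]. This is a minimal illustration of the general technique, not the paper's implementation; all names (`IncrementalRLS`, `theta`, `lam`, `P`) are illustrative rather than the authors' notation.

```python
import numpy as np

class IncrementalRLS:
    """Online RLS estimator for Delta_x_next ~= Theta^T @ [Delta_x; Delta_u].

    Theta stacks the incremental-model matrices F and G, so
    Theta^T == [F  G]. A forgetting factor lam < 1 discounts old data,
    letting the estimate track a slowly varying local linearization.
    """

    def __init__(self, n_in, n_out, lam=0.99, p0=1e3):
        self.theta = np.zeros((n_in, n_out))  # parameter estimate
        self.P = p0 * np.eye(n_in)            # inverse-correlation matrix
        self.lam = lam                        # forgetting factor

    def update(self, phi, y):
        """phi: regressor [Delta_x; Delta_u], y: measured Delta_x_next."""
        phi = np.asarray(phi, dtype=float).reshape(-1, 1)
        # Kalman-style gain for this sample
        k = self.P @ phi / (self.lam + phi.T @ self.P @ phi)
        # a priori prediction error
        err = np.asarray(y, dtype=float).ravel() - (self.theta.T @ phi).ravel()
        # parameter and covariance updates
        self.theta += k @ err.reshape(1, -1)
        self.P = (self.P - k @ phi.T @ self.P) / self.lam
        return err
```

Fed noiseless data from a fixed linear model, the estimate `theta.T` converges to `[F G]` once the regressors excite all directions; in the closed-loop setting the same update runs at every control step on measured state and input increments.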


Data Availability Statement

Some or all data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.


Information & Authors

History

Received: Feb 9, 2023
Accepted: Aug 3, 2023
Published online: Oct 9, 2023
Published in print: Jan 1, 2024
Discussion open until: Mar 9, 2024

Authors

Affiliations

Dept. of Control and Operation, Delft Univ. of Technology, Delft, South Holland 2629 HS, Netherlands (corresponding author). Email: [email protected]
Dept. of Control and Operation, Delft Univ. of Technology, Delft, South Holland 2629 HS, Netherlands. ORCID: https://orcid.org/0000-0002-5593-4471. Email: [email protected]


