TECHNICAL PAPERS
Oct 15, 2003

Neural Networks and Reinforcement Learning in Control of Water Systems

Publication: Journal of Water Resources Planning and Management
Volume 129, Issue 6

Abstract

In dynamic real-time control (RTC) of regional water systems, a multicriteria optimization problem must be solved to determine the optimal control strategy. Nonlinear and/or dynamic programming based on simulation models can be used to find the solution, an approach implemented in the Aquarius decision support system (DSS) developed in The Netherlands. However, the computation time required for complex models is often prohibitively long, so such models cannot be applied directly in RTC of water systems. In this study, the Aquarius DSS is used as a reference model for building a controller with machine-learning techniques, namely artificial neural networks (ANN) and reinforcement learning (RL), where RL is used to decrease the error of the ANN-based component. The model was tested on complex water systems in The Netherlands, and very good results were obtained. The general conclusion is that a controller that has learned to replicate the optimal control strategy can be used in RTC operations.
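The two-stage idea in the abstract (train a fast surrogate controller to imitate a slow simulation-based optimizer, then use an RL-style update to shrink the surrogate's residual error) can be sketched roughly as follows. All names and numbers here are illustrative assumptions: the reference policy, the linear surrogate, and the reward function are stand-ins, not the paper's actual Aquarius model or network architecture.

```python
import random

# Hypothetical reference controller (a stand-in for the slow,
# simulation-based optimizer): pump flow as a fraction of capacity,
# driven by the excess of water level over a 1.0 m setpoint.
def reference_policy(level):
    return max(0.0, min(1.0, 0.8 * (level - 1.0)))

# --- Imitation step: fit a linear surrogate u = w*level + b by SGD ---
random.seed(0)
w, b = 0.0, 0.0
lr = 0.05
for _ in range(20000):
    h = random.uniform(1.0, 2.2)     # region where the policy is linear
    target = reference_policy(h)     # "optimal" action from the optimizer
    err = (w * h + b) - target
    w -= lr * err * h                # gradient step on squared error
    b -= lr * err

# --- RL-style correction: direct policy search on an offset term ---
# The reward penalizes deviation from the reference action on a few
# held-out states; perturbations to the offset are kept if they improve it.
def reward(offset):
    states = [1.1, 1.4, 1.7, 2.0]
    return -sum((w * h + b + offset - reference_policy(h)) ** 2
                for h in states)

offset, best = 0.0, reward(0.0)
for _ in range(200):
    cand = offset + random.gauss(0.0, 0.01)
    r = reward(cand)
    if r > best:
        offset, best = cand, r

def surrogate(level):
    """Fast controller: imitation model plus RL-tuned correction."""
    return w * level + b + offset
```

Once trained, the surrogate evaluates in microseconds, which is what makes replicating the optimizer's strategy attractive for real-time operation.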




Published In

Journal of Water Resources Planning and Management
Volume 129, Issue 6, November 2003
Pages: 458–465

History

Received: Dec 5, 2001
Accepted: Dec 16, 2002
Published online: Oct 15, 2003
Published in print: Nov 2003


Authors

Affiliations

B. Bhattacharya
Research Fellow, Hydroinformatics, International Institute for Infrastructural, Hydraulic and Environmental Engineering, P.O. Box 3015, 2601 DA Delft, The Netherlands.
A. H. Lobbrecht
Senior Lecturer in Hydroinformatics, International Institute for Infrastructural, Hydraulic and Environmental Engineering, P.O. Box 3015, 2601 DA Delft, The Netherlands and HydroLogic BV, P.O. Box 2177, 3800 CD Amersfoort, The Netherlands.
D. P. Solomatine
Associate Professor in Hydroinformatics, International Institute for Infrastructural, Hydraulic and Environmental Engineering, P.O. Box 3015, 2601 DA Delft, The Netherlands.

