Model Performance Sensitivity to Objective Function during Automated Calibrations
Publication: Journal of Hydrologic Engineering
Volume 17, Issue 6
Abstract
Previous studies have reported limitations of the efficiency criteria commonly used in hydrology to describe the goodness of model simulations. This study examined the sensitivity of model performance to the objective function used during automated calibration. Nine widely used efficiency criteria were evaluated for their effectiveness as objective functions, and the goodness of the model predictions was examined using 13 criteria. Two cases (Case I: using observed streamflow data; Case II: using simulated streamflow) were considered to accomplish the objectives of the study using a widely used watershed model (SWAT) and good-quality field data from a well-monitored experimental watershed. Major findings of the study include the following: (1) automated calibration results are sensitive to the objective function group [a group based on minimization of the absolute deviations (Group I), a group based on minimization of the squared residuals (Group II), and a group that uses the logarithms of the observed and simulated streamflow values (Group III)] but not to the choice of objective function within a group; (2) efficiency criteria that belong to Group I were the most effective objective functions for accurate simulation of both low flows and high flows; (3) Group I and Group II objective functions complement each other's performance; (4) with regard to the capability to describe the goodness of model simulations, efficiency criteria that belong to Group I showed superior robustness; (5) for the study watershed, use of the long-term interannual calendar-day mean as the baseline model did not improve the capability of an efficiency criterion to describe model performance; and (6) even under ideal conditions in which uncertainty in input data and model structure is fully accounted for, identifying the so-called global parameter values through calibration can be daunting, because parameter values that diverged significantly from the predetermined values produced model simulations that can be considered near perfect even when judged using multiple efficiency criteria.
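The three objective-function groups described above can be illustrated with one representative criterion each. The sketch below uses the mean absolute error for Group I (absolute deviations), the Nash-Sutcliffe efficiency for Group II (squared residuals), and NSE computed on log-transformed flows for Group III. These are illustrative formulations only, not necessarily among the nine criteria evaluated in the study, and the daily-flow values are hypothetical.

```python
import numpy as np

def mae(obs, sim):
    """Group I: mean absolute error (minimizes absolute deviations)."""
    return np.mean(np.abs(obs - sim))

def nse(obs, sim):
    """Group II: Nash-Sutcliffe efficiency (based on squared residuals).
    1.0 is a perfect fit; values below 0 are worse than the observed mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def log_nse(obs, sim, eps=1e-6):
    """Group III: NSE on log-transformed flows (emphasizes low flows).
    A small eps guards against log(0) on zero-flow days."""
    lo, ls = np.log(obs + eps), np.log(sim + eps)
    return 1.0 - np.sum((lo - ls) ** 2) / np.sum((lo - lo.mean()) ** 2)

# Hypothetical observed and simulated daily streamflow (m^3/s)
obs = np.array([0.5, 1.2, 8.0, 3.4, 0.9, 0.4, 15.0, 6.1])
sim = np.array([0.7, 1.0, 7.2, 3.9, 0.8, 0.6, 13.5, 6.8])

print(f"MAE     (Group I):   {mae(obs, sim):.3f}")
print(f"NSE     (Group II):  {nse(obs, sim):.3f}")
print(f"log-NSE (Group III): {log_nse(obs, sim):.3f}")
```

Because the squared residuals in NSE are dominated by the largest flows while the log transform compresses them, a calibration driven by a Group II criterion tends to favor high-flow accuracy and a Group III criterion low-flow accuracy, which is the behavior contrasted in the findings above.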
Acknowledgments
Source code for DDS (Tolson and Shoemaker 2007) was obtained from the first writer of the program.
References
Arnold, J. G., Williams, J. R., Srinivasan, R., and King, K. W. (1999). SWAT: Soil and water assessment tool, U.S. Dept. of Agriculture, Agricultural Research Service, Temple, TX.
ASCE Task Committee on Definition of Criteria for Evaluation of Watershed Models of the Watershed Management, Irrigation, and Drainage Division. (1993). “Criteria for evaluation of watershed models.” J. Irrig. Drain. Eng., 119(3), 429–442.
Bekele, E. G., and Nicklow, J. W. (2007). “Multi-objective automated calibration of SWAT using NSGA-II.” J. Hydrol. (Amsterdam), 341(3–4), 165–176.
Beven, K. J., and Freer, J. (2001). “Equifinality, data assimilation, and uncertainty estimation in mechanistic modeling of complex environmental systems using the GLUE methodology.” J. Hydrol. (Amsterdam), 249(1–4), 11–29.
Bosch, D. D., et al. (2007). “Little River experimental watershed database.” Water Resour. Res., 43(9), W09470.
Bosch, D. D., Sullivan, D. G., and Sheridan, J. M. (2006). “Hydrologic impacts of land-use changes in coastal plain watersheds.” Trans. ASABE, 49(2), 423–432.
Criss, R. E., and Winston, W. E. (2008). “Do Nash values have value? Discussion and alternate proposals.” Hydrol. Processes, 22(14), 2723–2725.
Efstratiadis, A., and Koutsoyiannis, D. (2010). “One decade of multi-objective calibration approaches in hydrological modelling: A review.” Hydrol. Sci. J., 55(1), 58–78.
Gassman, P. W., Reyes, M. R., Green, C. H., and Arnold, J. G. (2007). “The soil and water assessment tool: Historical development, applications, and future research directions.” Trans. ASABE, 50(4), 1211–1250.
Gupta, H. V., Kling, H., Yilmaz, K. K., and Martinez, G. F. (2009). “Decomposition of the mean squared error and NSE performance criteria: Implications for improving hydrological modeling.” J. Hydrol. (Amsterdam), 377(1–2), 80–91.
Gupta, H. V., Sorooshian, S., and Yapo, P. O. (1998). “Toward improved calibration of hydrologic models: Multiple and noncommensurable measures of information.” Water Resour. Res., 34(4), 751–763.
Jain, S. K., and Sudheer, K. P. (2008). “Fitting of hydrologic models: A close look at the Nash-Sutcliffe index.” J. Hydrol. Eng., 13(10), 981–986.
Krause, P., Boyle, D. P., and Bäse, F. (2005). “Comparison of different efficiency criteria for hydrological model assessment.” Adv. Geosci., 5, 89–97.
Kuczera, G., Kavetski, D., Franks, S., and Thyer, M. (2006). “Towards a Bayesian total error analysis of conceptual rainfall-runoff models: Characterizing model error using storm-dependent parameters.” J. Hydrol. (Amsterdam), 331(1–2), 161–177.
Legates, D. R., and McCabe, G. J. (1999). “Evaluating the use of ‘goodness-of-fit’ measures in hydrologic and hydroclimatic model validation.” Water Resour. Res., 35(1), 233–241.
Li, H., Sivapalan, M., and Tian, F. (2012). “Comparative diagnostic analysis of runoff generation processes in Oklahoma DMIP2 basins: The Blue River and the Illinois River.” J. Hydrol. (Amsterdam), 418–419, 90–109.
Madsen, H. (2000). “Automated calibration of a conceptual rainfall-runoff model using multiple objectives.” J. Hydrol. (Amsterdam), 235(3–4), 276–288.
McCuen, R. H., Knight, Z., and Gillian, C. A. (2006). “Evaluation of the Nash-Sutcliffe efficiency index.” J. Hydrol. Eng., 11(6), 597–602.
Moriasi, D. N., Arnold, J. G., Van Liew, M. W., Bingner, R. L., Harmel, R. D., and Veith, T. L. (2007). “Model evaluation guidelines for systematic quantification of accuracy in watershed simulations.” Trans. ASABE, 50(3), 885–900.
Muleta, M. K. (2012). “Improving model performance using season based evaluation.” J. Hydrol. Eng., 17(1), 191–200.
Muleta, M. K., and Nicklow, J. W. (2005). “Sensitivity and uncertainty analysis coupled with automated calibration for a distributed watershed model.” J. Hydrol. (Amsterdam), 306(1–4), 127–145.
Nash, J. E., and Sutcliffe, J. V. (1970). “River flow forecasting through conceptual models. Part I—A discussion of principles.” J. Hydrol. (Amsterdam), 10(3), 282–290.
Neitsch, S. L., Arnold, J. G., and Williams, J. R. (2005). Soil and water assessment tool theoretical documentation—Version 2005, Grassland, Soil and Water Research Laboratory, Temple, TX.
Schaefli, B., and Gupta, H. V. (2007). “Do Nash values have value?” Hydrol. Processes, 21(15), 2075–2080.
Schaefli, B., Hingray, B., and Musy, A. (2005). “A conceptual glaciohydrological model for high mountainous catchments.” Hydrol. Earth Syst. Sci., 9(1–2), 95–109.
Seibert, J. (2001). “On the need for benchmarks in hydrological modeling.” Hydrol. Processes, 15(6), 1063–1064.
Sheshukov, A., Daggupati, P., Lee, M. C., and Douglas-Mankin, K. (2009). “ArcMap tool for pre-processing SSURGO soil database for ArcSWAT.” Proc., 5th Int. SWAT Conf., Texas A&M University, College Station, TX.
Sobol’, I. M. (1993). “Sensitivity estimates for non-linear mathematical models.” Math. Model. Comput. Exp., 1(4), 407–414.
Sorooshian, S., Gupta, V. K., and Fulton, J. L. (1983). “Evaluation of maximum likelihood parameter estimation techniques for conceptual rainfall-runoff models: Influence of calibration data variability and length on model credibility.” Water Resour. Res., 19(1), 251–259.
Southeast Watershed Research Laboratory (SEWRL). (2010). “Little River experimental watershed data.” 〈ftp://www.tiftonars.org/〉 (July 2010).
Tang, Y., Reed, P. M., van Werkhoven, K., and Wagener, T. (2007). “Advancing the identification and evaluation of distributed rainfall-runoff models using global sensitivity analysis.” Water Resour. Res., 43(6), W06415.
Tang, Y., Reed, P. M., and Wagener, T. (2006). “How effective and efficient are multiobjective evolutionary algorithms at hydrologic model calibration?” Hydrol. Earth Syst. Sci., 10(2), 289–307.
Tian, F., Li, H., and Sivapalan, M. (2012). “Model diagnostic analysis of seasonal switching of runoff generation mechanisms in the Blue River basin, Oklahoma.” J. Hydrol. (Amsterdam), 418–419, 136–149.
Tolson, B. A., and Shoemaker, C. A. (2007). “Dynamically dimensioned search algorithm for computationally efficient watershed model calibration.” Water Resour. Res., 43(1), W01413.
U.S. Environmental Protection Agency (U.S. EPA). (2002). “Guidance for quality assurance project plans for modeling.” EPA QA/G-5M. Rep. EPA/240/R-02/007, Washington, DC.
van Werkhoven, K., Wagener, T., Reed, P., and Tang, Y. (2008). “Rainfall characteristics define the value of streamflow observations for distributed watershed model identification.” Geophys. Res. Lett., 35(11), L11403.
Willmott, C. J. (1981). “On the validation of models.” Phys. Geogr., 2, 184–194.
Winchell, M., Srinivasan, R., Di Luzio, M., and Arnold, J. (2008). ArcSWAT 2.1 interface for SWAT 2005: User’s guide, Blackland Research Center, Temple, TX.
Yan, J., and Haan, C. T. (1991). “Multiobjective parameter estimation for hydrologic models—Weighting of errors.” Trans. ASAE, 34(1), 135–141.
Yapo, P. O., Gupta, H. V., and Sorooshian, S. (1998). “Multi-objective global optimization for hydrologic models.” J. Hydrol. (Amsterdam), 204(1–4), 83–97.
Copyright
© 2012. American Society of Civil Engineers.
History
Received: Apr 21, 2011
Accepted: Sep 1, 2011
Published online: Sep 3, 2011
Published in print: Jun 1, 2012
Published ahead of production: Jun 15, 2012