
Metamodel-Based Seismic Fragility Analysis of Concrete Gravity Dams

Publication: Journal of Structural Engineering
Volume 146, Issue 7

Abstract

Probabilistic methods, such as fragility analysis, have been developed as a promising alternative for the seismic assessment of dam-type structures. However, given the costly reevaluation of the numerical model simulations, the effect of the model parameters likely to affect the seismic fragility of the system is frequently overlooked. Acknowledging the lack of thorough exploration of different machine learning techniques to develop surrogates, or metamodels, that efficiently approximate the seismic response of dams, this study provides insight on viable metamodels for the seismic assessment of gravity dams for use in fragility analysis. The proposed methodology to generate multivariate fragility functions offers efficiency while accounting for the most critical model parameter variation influencing the dam seismic fragility. From the analysis of these models, practical design recommendations can be formulated. The procedure presented herein is applied to a case study dam in northeastern Canada, where the polynomial response surface of order 4 (PRS O4) emerged as the most viable metamodel among those considered. Its fragility is assessed through comparison with the current safety guidelines to establish a range of usable model parameter values in terms of the concrete-rock angle of friction, drain efficiency, and concrete-rock cohesion.

Introduction

Methods for the seismic analysis of dams have improved extensively in the last few decades, and the growth of computing power has expedited this improvement. Advanced numerical models have become more feasible and, thus, constitute the basis of improved procedures for design and assessment. Moreover, a probabilistic framework is required to manage the various sources of uncertainty that may impact the system performance and decisions related thereto (Ellingwood and Tekie 2001). Fragility analysis, which depicts the conditional probability that a system reaches a structural limit state, is a central tool in this probabilistic framework. Traditional vulnerability assessment methods develop fragility functions by using a single parameter to relate the level of shaking to the expected damage, which consequently makes the robustness of the predictions highly dependent on the selected parameter. However, the estimation of the fragility of the system can be potentially improved by increasing the number of parameters; in this way, a more complete description of the properties of ground motions can be obtained (Alembagheri 2018). Furthermore, the effect of the variation of the material properties in the seismic fragility analysis of structures with complex numerical models, such as dams, is frequently overlooked due to the costly and time-consuming reevaluation of the numerical model. The seismic response and vulnerability assessment of key infrastructure elements often require a large number of nonlinear dynamic analyses of complex finite-element models (FEMs). The substantial computational time may be reduced by using machine learning techniques to develop a surrogate or metamodel, which is an engineering method used when an outcome of interest cannot be easily measured directly; thus, a model of the outcome is used instead (Forrester et al. 2008). In addition, if the outcome of interest comes from nonlinear FEM analysis that reflects the dynamic behavior of the structure under seismic loading, the metamodel will emulate this behavior. To this end, these algorithms include several features in their mathematical formulation to help capture this highly nonlinear behavior. These features include the use of higher-order sparse polynomials, the partitioning of the sample space to fit a series of models that are then combined into an ensemble with better overall performance, and the mapping of the inputs into high-dimensional feature spaces, among others.
Such a challenge is particularly relevant to the case of large-scale infrastructures such as dams subjected to seismic loads. Thus, the main goal of this study is to explore the applicability of metamodels for the seismic assessment of gravity dams and present a methodology to develop parameterized multivariate fragility functions through the use of the latter. The secondary goal is to explicitly account for the effect of the model parameter variation in the seismic fragility analysis. The proposed methodology has the added asset of properly depicting the seismic scenario likely to occur at a specific site, enhancing the accuracy of the seismic fragility analysis. The proposed methodology is applied to a case study gravity dam located in northeastern Canada.

Literature Review

In the past two decades, univariate fragility functions, which depict the potential for limit state exceedance conditional on a ground motion intensity, have been readily adopted and developed for the seismic assessment of structures, as can be seen in several state-of-the-art reviews (Ghosh et al. 2017; Hariri Ardebili and Saouma 2016b; Muntasir Billah and Shahria 2015). However, given the recognized limitations of such univariate fragility curves, multivariate fragility functions are progressively being adopted to assess the vulnerability of a given structure or portfolio of structures (Brandenberg et al. 2011; Koutsourelakis 2010; Pan et al. 2010; Gehl et al. 2009; Grigoriu and Mostafa 2002). Noted advantages of these parameterized or multivariate fragility functions include the potential for efficient posterior uncertainty propagation, exploring sensitivities or the influence of design parameter variation, and enabling application across a portfolio of structures. Nevertheless, given the large number of simulations required, the development of fragility surfaces or multivariate fragility functions that leverage numerical models imposes a high computational burden.
The combination of numerical models, probabilistic approaches, and machine learning has gained considerable interest in the literature in recent years for engineering design and structural reliability (Seo and Rogers 2017; Yu et al. 2014; Sudret 2012; Wang and Shan 2007; Simpson et al. 2001). This combination is justified by the significant randomness that characterizes not only the earthquake excitation but also the structural system itself (e.g., stochastic variations in the material properties, degradation due to aging, and temperature fluctuation, etc.). Surrogate modeling techniques within a seismic fragility framework have found recent applications for the safety assessment of buildings and bridges, among other structures (Mangalathu and Jeon 2018; Sichani et al. 2017; Kameshwar and Padgett 2014; Ghosh et al. 2013; Seo and Linzell 2013; Seo et al. 2012). Even though many of these studies considered several seismic intensity measures (IMs) and model parameters (MPs) for building the metamodels to predict the response of the structure, most of them do not clearly depict the influence of all the considered parameters in the form of multivariate fragility functions from the respective metamodels. For the specific case of dam-type structures, an extensive comparison between machine-learning data-based predictive models for monitoring the dam behavior can be found in Salazar et al. (2015a, b). More recently, Hariri-Ardebili and Pourkamali-Anaraki (2017a, b), and Hariri-Ardebili (2018) have used machine learning techniques to perform reliability analysis applied to gravity dams against flooding, earthquakes, and aging, considering, in some cases, explicit limit state functions and simplified FEM in others.
Most of the prior studies on the seismic assessment of concrete gravity dams via machine learning techniques are limited to the consideration of a single metamodel, simplified FEMs, and univariate fragility functions. Therefore, they do not explore the most suitable metamodel for fragility analysis of this type of structure, nor do they explore the influence of the variation of the model parameters on the seismic fragility analysis. Moreover, none of these studies discusses the proper definition of the seismic scenario likely to occur at a specific site in a probabilistic manner.

Originality and Contribution

To address the aforementioned gaps, this paper aims to identify the most viable metamodel, from the subset of machine learning methods considered, for the seismic fragility assessment of gravity dams, provide an overview on the importance of the parameters influencing the dam performance, and formulate design recommendations from the analysis. The major contributions of this paper, in order of presentation, can be listed as follows: (1) perform a comparative analysis to determine the best performing regression metamodel to predict the base sliding of gravity dams from six metamodels with different basis function configurations, yielding a total of 14 different regression techniques, for the first time; (2) present a methodology to fit parameterized fragility surfaces, as a function of IMs and MPs, from the metamodels; (3) consider the correlation between the seismic IMs from a probabilistic seismic hazard analysis (PSHA) to generate the samples to characterize the seismic scenario where the metamodel will be evaluated; (4) gain insight into the influence of the model parameters affecting the dam performance and explicitly quantify their effect with the generation of multivariate fragility functions; and (5) formulate model parameter design recommendations from the analysis, e.g., appropriate range of parameters to achieve target risk.

Metamodel-Based Multivariate Fragility Procedure

Dam seismic assessment is a complex task due to the uniqueness of each of such structures and to the interaction between the different components of the system. Similarly, the seismic response of dam-type structures involves nonlinear dynamic analysis of complex FEMs, often requiring prohibitively high computation times. Machine learning describes a series of methods that allow learning from data, i.e., what relationships exist between the quantities of interest. When performing probabilistic studies related to structural reliability, it may be interesting to replace the FEM by a regression model built on a set of simulated responses, for the purpose of computational efficiency (Goulet 2018). Accordingly, the motivation of applying statistical algorithms to develop a seismic probabilistic demand model or metamodel is to expedite this safety assessment process. Such a model of a model relies on machine learning techniques that allow the algorithm to learn from the data; because the observations are the output of a simulation, the observation model does not include any observation error. Within this context, a probabilistic seismic demand model expresses an approximate relationship between an uncertain seismic response, e.g., a dam's maximum relative base sliding, and a set of parameters that influence the response. The basic idea is for the surrogate or metamodel to act as a curve fit to the available data so that the results may be predicted without requiring costly simulation. In the general case, the metamodel can be described as follows:
$\underbrace{y_i}_{\text{response}} = \underbrace{g(\mathbf{x}_i)}_{\text{metamodel}} + v, \qquad \mathbf{x}_i = \underbrace{\{x_1, x_2, \ldots, x_n\}_i}_{\text{covariates}}$
(1)
where the surrogate model g(·) statistically predicts the response of the structure, yi, for a given set of intensity measures and model parameters, xi; and v = error due to the lack of fit of the surrogate model.
The metamodels considered herein are within an adaptive scheme, i.e., the functions in the metamodels can change according to the input data to reduce the burden of manually selecting several parameters in the metamodel. Therefore, three adaptive metamodels and an interpolation scheme will be considered. In addition, the performance of two other statistical learning algorithms based on kernels and decision trees will also be addressed. Among the different regression techniques, this paper focuses on the following: (1) polynomial response surface models of order 2 to 4 with stepwise regression (PRS O2–O4); (2) multivariate adaptive regression splines (MARS) with linear and cubic splines; (3) adaptive basis function construction (ABFC); (4) radial basis function (RBF) interpolation with multiquadratic, thin plate spline, and Gaussian basis functions; (5) support vector machines for regression (SVMR) with linear, quadratic, cubic, and radial basis function kernels; and (6) random forest for regression (RFR).
Within the extent of this study, the first comparative analysis of metamodels in the context of seismic assessment of dams is performed. Using the results from the finite-element simulations, this study develops metamodels for approximating the seismic response of gravity dam-type structures using the three steps outlined in Fig. 1. The subsequent sections will detail such steps.
Fig. 1. Procedure for the generation of metamodels.

Design of Experiments

To minimize the associated cost of running dynamic nonlinear FEM of dams under seismic excitation while analyzing an adequate number of loading conditions and structural system configurations, an appropriate experimental design method should be used. Structural and material properties likely to affect the seismic response of the structure should be considered, and their associated ranges should be based on experimental data or values found in the literature. The Latin hypercube sampling (LHS) experimental design method is adopted to generate nf sample points representing the different configurations of the dam under study. This sampling technique was selected because of its ability to divide the desired range of values for each parameter into n-equiprobable intervals and then select a sample once from each interval.
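For illustration, a minimal Python sketch of an LHS design using SciPy's quasi-Monte Carlo module is given below; the choice of two parameters, their bounds (taken from Table 1 later in the paper), and the sample size are only examples, not the full experimental design of the study.

```python
import numpy as np
from scipy.stats import qmc

# Illustrative LHS design for two of the dam model parameters
n_samples = 250                       # number of FEM configurations, nf
l_bounds = [42.0, 0.0]                # concrete-rock friction angle (deg), drain efficiency
u_bounds = [55.0, 0.66]

sampler = qmc.LatinHypercube(d=len(l_bounds), seed=1)
unit_samples = sampler.random(n=n_samples)            # samples in [0, 1)^d, one per stratum
design = qmc.scale(unit_samples, l_bounds, u_bounds)  # rescale to the physical parameter ranges

print(design[:5])
```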

Metamodeling Techniques

The following subsections provide a brief overview of the different metamodels tested in this study for their ability to offer viable metamodels of the seismic response of dams. Only the most relevant features of each technique will be presented, given that exhaustive mathematical formulations can be found in the literature. Instead, relevant details concerning the model fitting and the algorithm settings are provided.

Polynomial Response Surface: PRS

The polynomial response surface is an m-dimensional surface that predicts desired responses using a computationally efficient closed-form polynomial function developed from a set number of input variables (Murphy 2012). PRS was implemented together with stepwise regression to select the best explanatory or basis functions. The sparse polynomial response surface can be represented as follows:
$\hat{y} = \mathbf{a}^{\mathsf{T}} \boldsymbol{\Theta}$
(2)
where ŷ represents the value predicted with the metamodel; a is a column vector of coefficients; and Θ is a column vector of basis functions. In this study, the response variable was considered normally distributed, and the metamodel was trained using the Statistics and Machine Learning Toolbox in MATLAB.
Polynomials up to 2nd order should suffice for responses characterized by low curvatures, while 3rd- and 4th-degree polynomials including two-factor interactions are more appropriate for significant curvatures (Murphy 2012). This approach is considered because past studies have shown it to be efficient and accurate for concrete gravity dam seismic performance as well as for metamodels of other complex structures (Hariri-Ardebili 2018; Seo and Park 2017; Sichani et al. 2017).
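A minimal sketch of a polynomial response surface of this kind is shown below, using scikit-learn in place of the MATLAB toolbox used in the study; the data and variable names are synthetic and purely illustrative, and the stepwise term selection used in the study is omitted here for brevity.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# X: (nf x p) matrix of IMs and model parameters, y: response (synthetic example data)
rng = np.random.default_rng(0)
X = rng.uniform(size=(250, 3))
y = 1.0 + 2.0 * X[:, 0] ** 2 + X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=250)

# 4th-order polynomial basis (all terms up to degree 4) followed by a least-squares fit
prs_o4 = make_pipeline(PolynomialFeatures(degree=4, include_bias=True),
                       LinearRegression())
prs_o4.fit(X, y)
print(prs_o4.score(X, y))   # coefficient of determination on the training data
```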

Adaptive Basis Function Construction: ABFC

ABFC is a sparse polynomial regression model building approach that enables adaptive model building without restrictions on the model's degree, accomplished in polynomial time instead of exponential time, and without the requirement to repeat the model building process (Jekabsons 2010a). The required basis functions are automatically and iteratively constructed using a heuristic search, specifically for the particular data. The ABFC metamodel can be represented by Eq. (2), where the orders of the polynomial basis functions are adaptively determined. When working with relatively small datasets, basis function selection bias and instability should be prevented. To this end, the ensemble of floating adaptive basis function construction (EF-ABFC) method proposed by Jekabsons (2010a) was used, together with the corrected Akaike's information criterion (AICC) as the penalization criterion for model evaluation. The software VariReg version 0.10.2 (Jekabsons 2010b), implemented through MATLAB, was used to train this metamodel. Similar to the PRS, ABFC has also been proven to be a valuable technique for the evaluation of the seismic performance of complex structures (Kameshwar and Padgett 2014).

Multivariate Adaptive Regression Splines: MARS

MARS is a form of regression analysis introduced by Friedman (1991). It is a nonparametric regression technique and can be considered an extension of linear models that automatically take into account nonlinearities and interactions between variables using a tensor product basis of regression splines to represent the multidimensional regression functions. The MARS metamodel can be represented as follows:
$\hat{y} = \sum_{i=1}^{n} c_i\, b_i(\mathbf{x})$
(3)
where ci = constant coefficients; and bi(x) = basis functions. Due to its adaptive nature, the MARS metamodel partitions the sample space and fits a series of models, each of which has a lower error, and then combines them into an ensemble with an overall better performance (Ghosh et al. 2013). Two MARS models were trained, one with cubic splines and the other with linear splines. The MATLAB toolbox developed by Jekabsons (2016) was used for this purpose. The algorithm builds a model in two phases: forward selection and backward deletion. From the backward deletion phase, the best models of each size are selected, and the one with the lowest generalized crossvalidation (GCV) estimate is output as the final model. As suggested by Jekabsons (2016), most attention was paid to the maximum number of basis functions included in the model, the maximum degree of interaction between input variables, and the GCV penalty per knot. Regarding the maximum number of basis functions, the recommended value for this parameter is about two times the expected number of basis functions in the final model (Friedman 1991), and in the context of this study it was limited to 30. The maximum degree of interaction was set to 3 because it was found to be a fair trade-off between the metamodel predictive capabilities and the use of computational resources. Concerning the GCV penalty per knot, larger values will lead to fewer knots (i.e., the final model will have fewer basis functions). Simulation studies suggest values in the range of about 2–4 (Jekabsons 2016). As recommended by Friedman (1991), and taking into account the maximum number of interactions, this value was set equal to 3. MARS models are well suited to nonlinear problems, are easily interpretable, and have achieved great accuracy in predicting structural response due to their adaptive nature while being computationally efficient for seismic assessment (Salazar et al. 2015a; Kameshwar and Padgett 2014; Ghosh et al. 2013).
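To illustrate the type of basis expansion MARS relies on, the simplified Python sketch below builds hinge (linear spline) basis functions at fixed knots and fits them by least squares; the adaptive forward selection, backward deletion, and GCV pruning of the actual MARS algorithm are deliberately omitted, and all data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def hinge_basis(x, knots):
    """Pairs of hinge functions max(0, x - t) and max(0, t - x) for each knot t."""
    cols = []
    for t in knots:
        cols.append(np.maximum(0.0, x - t))
        cols.append(np.maximum(0.0, t - x))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=250)           # a single illustrative covariate
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.normal(size=250)

knots = np.quantile(x, [0.25, 0.5, 0.75])     # fixed knots instead of adaptively selected ones
B = hinge_basis(x, knots)
model = LinearRegression().fit(B, y)
print(model.score(B, y))
```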

Radial Basis Functions: RBF

Radial basis function interpolation uses basis functions whose response changes monotonically as the distance from the central point increases. It was first introduced by Hardy (1971) for scattered multivariate data interpolation, using linear combinations of radially symmetric functions based on Euclidean distances or similar metrics to approximate response functions. The RBF can be expressed in the following functional form:
$\hat{y} = a_0 + \sum_{i=1}^{m} a_i\, \Phi_{RBF_i}[w_i(\mathbf{x})]$
(4)
where ΦRBFi[wi(x)] = nonlinear mapping from the input layer to the hidden layer; a0 = bias; and a1, …, am = connection weights between the hidden layer and the output layer, typically determined using iterative procedures. Multiquadratic, thin plate spline, and Gaussian radial basis functions are some of the basis functions typically considered for interpolation (Murphy 2012). For the multiquadratic and thin plate spline basis functions, the shape parameter was kept constant as 1/nf, while for the Gaussian basis functions, the smoothing parameter was found using leave-one-out crossvalidation (LOOCV). The RBF metamodel was also implemented with the software VariReg (Jekabsons 2010a). Despite the lack of transparency due to the hidden layer, RBFs have been proven to generate excellent approximations to a wide range of structural responses (Kameshwar and Padgett 2014; Ghosh et al. 2013; Wang and Shan 2007) and are thus considered in this study.
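A minimal sketch of RBF interpolation, here with SciPy's RBFInterpolator and a thin-plate-spline kernel on synthetic data (not the VariReg implementation used in the study), is given below for illustration.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
X = rng.uniform(size=(250, 2))             # training points (e.g., IMs and model parameters)
y = np.sin(4.0 * X[:, 0]) + X[:, 1] ** 2   # synthetic responses, for illustration only

# Thin-plate-spline RBF interpolant; multiquadric or Gaussian kernels would
# additionally require a shape (epsilon) parameter, as discussed above.
rbf = RBFInterpolator(X, y, kernel='thin_plate_spline')

X_new = rng.uniform(size=(5, 2))
print(rbf(X_new))                          # interpolated responses at new points
```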

Support Vector Machine for Regression: SVMR

Support vector machines are a modern class of statistical learning algorithms with a sparse solution; thus, predictions only depend on a subset of the training data, known as support vectors (Murphy 2012). This technique was originally designed for binary classification but can be extended to regression. Moreover, it contains all the main features that characterize the maximum margin algorithm: a nonlinear function is learned by a linear learning machine mapping into a high-dimensional kernel-induced feature space. The capacity of the system is controlled by parameters that do not depend on the dimensionality of the feature space. In SVM regression, the input X is first mapped onto an m-dimensional feature space using a nonlinear mapping, and then a linear model is constructed in this feature space. The Statistics and Machine Learning Toolbox in MATLAB was used to train the metamodel with four different types of kernels (linear, quadratic, cubic, and radial basis functions). A set of hyperparameter values must be set before the learning process, which for SVMR includes the soft margin constant (cost function, C), the parameters of the kernel function (width of the RBF kernel or degree of a polynomial kernel), and the tolerance width margin, ε. Regarding the soft margin constant, the value C = iqr(y)/1.349, where iqr(y) is the interquartile range of the response variable, was used for the RBF kernel, and C = 1 for all other kernels, as recommended by Fan et al. (2005). The width of the RBF kernel was set equal to 1 and, as already mentioned, the degree of the polynomial kernel was set equal to 1, 2, and 3. Concerning the tolerance width margin, ε was set equal to iqr(y)/13.49, as recommended by Fan et al. (2005).
SVMR has started to be used progressively for the fragility assessment of bridges (Ataei 2013; Ghosh et al. 2013; Kameshwar and Padgett 2014) and more recently for dams (Salazar et al. 2015a; Hariri-Ardebili and Pourkamali-Anaraki 2017b), showing satisfactory results.
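For illustration, an SVMR with an RBF kernel and the hyperparameter heuristics quoted above could be set up as follows; scikit-learn is used here in place of the MATLAB toolbox, the data are synthetic, and the mapping of the "kernel width equal to 1" to scikit-learn's gamma parameter is an assumption of the sketch.

```python
import numpy as np
from scipy.stats import iqr
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(size=(250, 3))
y = X[:, 0] ** 2 + 0.5 * X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=250)

# Heuristics from Fan et al. (2005): C = iqr(y)/1.349 for the RBF kernel and
# tolerance margin epsilon = iqr(y)/13.49
C = iqr(y) / 1.349
eps = iqr(y) / 13.49

svr_rbf = SVR(kernel='rbf', C=C, epsilon=eps, gamma=1.0)  # gamma=1 assumed for "width = 1"
svr_rbf.fit(X, y)
print(len(svr_rbf.support_), 'support vectors')           # sparsity of the fitted model
```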

Random Forest for Regression: RFR

A random forest is a metaestimator that fits a number of decision trees on various subsamples of the dataset and uses averaging to improve the predictive accuracy and control overfitting. The random forest model is an additive-type model that makes predictions by combining decisions from a sequence of base models. More formally, this class of models can be written as follows:
$\hat{y} = \frac{1}{N} \sum_{i=1}^{N} f_i(\mathbf{x})$
(5)
where the final model is the average of simple base models fi(x). Here, each base model is a simple decision tree. In random forests, all the base models are constructed independently using a different subsample of the data (Murphy 2012). A regression ensemble was built with 100 decision trees using bootstrap aggregation as the ensemble aggregation method due to its ability to reduce overfitting of the model, handle high dimensionality well, and require less careful tuning of the different hyperparameters than the boosting ensemble method (Efron and Tibshirani 1993). Moreover, random forests typically offer a good estimate of the prediction accuracy for external data based on the out-of-bag (OOB) accuracy. For the bagged decision trees, the maximum number of decision splits was set at nf − 1, and the number of predictors selected at random for each split was one third of the number of predictors. The algorithm was implemented with the Statistics and Machine Learning Toolbox in MATLAB.
RFR metamodels are easy to train and implement, and in contrast to other methods, random forests are not sensitive to outliers. For these reasons, together with the fairly good performance of this technique in the literature (Ghosh et al. 2013; Kameshwar and Padgett 2014; Salazar et al. 2015a), RFR is implemented in this study.
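A minimal random forest regression sketch with the settings described above (100 bagged trees, one third of the predictors per split) is shown below; scikit-learn is used here instead of MATLAB, and the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(250, 6))
y = X[:, 0] + X[:, 1] ** 2 + 0.1 * rng.normal(size=250)

rf = RandomForestRegressor(
    n_estimators=100,        # 100 bootstrap-aggregated (bagged) trees
    max_features=1 / 3,      # one third of the predictors considered at each split
    oob_score=True,          # out-of-bag estimate of prediction accuracy
    random_state=1,
)
rf.fit(X, y)
print(rf.oob_score_)         # OOB R^2, a built-in check on generalization
```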

Crossvalidated Goodness-of-Fit Estimates

The goodness-of-fit estimates depict the discrepancy between the observed values from the FEM simulation and the values estimated with the metamodel in question. The root mean square error (RMSE) provides a measure of the global error, quantifying the difference between the responses predicted by the metamodel and the actual data, and is computed as follows:
$\text{RMSE} = \sqrt{\dfrac{\sum_{i=1}^{n_f} (y_i - \hat{y}_i)^2}{n_f}}$
(6)
where yi = response values in the dataset; ŷi = predicted values; and nf = total number of points in the dataset (FE simulations). Additionally, the coefficient of determination R2 can be calculated as follows:
$R^2 = 1 - \dfrac{\sum_{i=1}^{n_f} (y_i - \hat{y}_i)^2}{n_f\, \sigma^2}$
(7)
where σ2 = variance of the response in the dam response dataset. Similarly, the relative maximum absolute error (RMAE) measures the extent of the local fitting error and is the ratio of the maximum absolute difference between the metamodel and test data responses to the standard deviation of the actual response:
$\text{RMAE} = \dfrac{\max\left|y_i - \hat{y}_i\right|}{\sigma}$
(8)
In the present study, the predictive capability of the metamodels will be assessed through 5-fold crossvalidation (5-CV). The dataset is randomly divided into 5 sets, and the metamodel is trained using 4 of the sets, with the remaining set used as test data. This procedure is repeated 5 times; thus, 5-CV provides an estimate of the predictive accuracy of the model for unknown data. The average R2 value resulting from 5-CV will be used along with the RMSE and RMAE to compare the different metamodels.
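A sketch of the 5-fold crossvalidated goodness-of-fit estimates of Eqs. (6)–(8), assuming a generic scikit-learn-style metamodel and with σ evaluated over each test fold, could look as follows.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.base import clone

def cv_goodness_of_fit(model, X, y, n_splits=5, seed=1):
    """Average RMSE, R^2, and RMAE over a 5-fold crossvalidation (Eqs. 6-8),
    with the response standard deviation taken over each test fold."""
    rmse, r2, rmae = [], [], []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=seed).split(X):
        m = clone(model).fit(X[train], y[train])
        resid = y[test] - m.predict(X[test])
        rmse.append(np.sqrt(np.mean(resid ** 2)))
        r2.append(1.0 - np.mean(resid ** 2) / np.var(y[test]))
        rmae.append(np.max(np.abs(resid)) / np.std(y[test]))
    return np.mean(rmse), np.mean(r2), np.mean(rmae)

# Example usage with any scikit-learn regressor and numpy arrays X, y:
# rmse, r2, rmae = cv_goodness_of_fit(LinearRegression(), X, y)
```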

Multivariate Fragility Analysis

As stated by Ghosh et al. (2013), single-parameter demand models suffer from two potential drawbacks: (1) the inability to assess the impact of structural model parameter variation on structure performance during earthquakes without costly reanalysis for each new set of parameter combinations, and (2) the lack of flexibility to incorporate field instrumentation data from monitoring of existing structures to enable the updating of seismic fragility estimates. Consequently, the use of multivariate fragility functions enables the efficient uncertainty propagation of the random variables and allows for the exploration of the effects of design parameter variation on the vulnerability of the structure. Thus, the goal is to identify the role of the most influential ground motion IMs and MPs on the induced damage in the structure to build multivariate fragility functions from the metamodel output to provide a more complete and accurate view of the vulnerability of the structure.
Similar to fragility curves, multivariate fragility functions offer the conditional probability of exceeding different limit states given the occurrence of an earthquake of a certain intensity. The only difference is that the specific limit state is characterized with n parameters p1, p2, …, pn instead of one parameter, as is the case with fragility curves. Hence, the probability of limit state exceedance is conditioned on the resulting set of critical parameters. The fragility function corresponding to the limit state l is defined as follows:
$F_l(x_1, x_2, \ldots, x_n) = P_f(LS > LS_l \mid p_1 = x_1, p_2 = x_2, \ldots, p_n = x_n)$
(9)
where LS = limit state damage index; and LSl = value corresponding to the lth limit state.

Sample Generation and Fragility Point Estimates

This paper adopts a sampling strategy for generating point estimates of the fragility functions that draws upon the approach presented in multiple stripe analysis (MSA) (Baker 2013). However, in this study, both the ground motion and model intensity ranges of parameters are stratified, and rather than conduct nonlinear dynamic analyses, the metamodels are used for approximating the seismic response.
To this end, the selected IMs and MPs to generate the fragility surface are divided into N and M intensity levels, respectively, and samples are generated as shown in Fig. 2. While keeping one parameter constant, the other is varied among the different levels, and its response is approximated with the metamodel. The fragility point estimate is calculated as the number of samples with a specific IM and MP intensity level that exceed a given limit state over the total number of samples generated with those specific IM and MP values.
Fig. 2. Multiple stripe analysis for meta-model fragility point estimate generation.
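The stripe-based point estimates of Fig. 2 can be sketched in Python as below; the binning scheme and names are illustrative, and the `response` array would hold the metamodel predictions for each generated sample.

```python
import numpy as np

def fragility_point_estimates(im, mp, response, threshold, n_im=100, n_mp=100):
    """Fraction of samples exceeding a limit-state threshold in each (IM, MP) bin."""
    im_edges = np.linspace(im.min(), im.max(), n_im + 1)
    mp_edges = np.linspace(mp.min(), mp.max(), n_mp + 1)
    i = np.clip(np.digitize(im, im_edges) - 1, 0, n_im - 1)
    j = np.clip(np.digitize(mp, mp_edges) - 1, 0, n_mp - 1)

    exceed = np.zeros((n_im, n_mp))
    total = np.zeros((n_im, n_mp))
    np.add.at(total, (i, j), 1)
    np.add.at(exceed, (i, j), (response > threshold).astype(float))
    with np.errstate(invalid='ignore'):
        return exceed / total        # NaN where a bin holds no samples
```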

Parametric Fragility Surfaces

While fragility curves are usually represented by well-known and readily parameterizable probability distributions such as the log-normal one (Sudret et al. 2015), the problem is more complex for surfaces, where bivariate distributions must be computed. To fit an analytical function to the fragility point estimates, within this MSA approach, the methodology proposed by Baker (2015) to generate fragility curves and the methodology proposed by Brandenberg et al. (2011) to generate fragility surfaces are combined for the first time. This joint procedure is further modified to make it suitable for the generation of fragility surfaces from the metamodel results, as a function of seismic IMs and model parameters, as shown in Fig. 3. The steps involved in the construction of the parametric fragility surfaces can be summarized as follows:
1.
For each limit state, l, fit fragility curves (Fc) according to Eq. (10), as a function of the seismic IM, for each MP intensity level, mpk
$F_c(IM, MP = mp_k) = \Phi_l(IM, \theta_k, \beta_k)$
(10)
where Φl is a cumulative density function (CDF); and θk and βk = parameters characterizing the associated CDF.
2.
Plot the values of parameters θk and βk for each limit state and for each value of mpk, and fit a functional form to the discrete data
$\hat{\theta} = \kappa(p_\theta, MP)$
(11)
$\hat{\beta} = \kappa(p_\beta, MP)$
(12)
where θ^ and β^ = analytical expressions of the parameters characterizing the CDF as a function of MP; pθ and pβ = regression coefficients; and κ(·) = fit-type function (polynomial, exponential, etc.) selected for each MP.
3.
Substitute Eqs. (11)–(12) into Eq. (10) to obtain the analytical expression of the fragility surface (Fs) as a function of IM and MP for each limit state, as shown in Eq. (13)
$F_s(IM, MP) = \Phi_l(IM, \hat{\theta}, \hat{\beta}) = \Phi_l\big(IM, \kappa(p_\theta, MP), \kappa(p_\beta, MP)\big)$
(13)
Fig. 3. Parametric fragility surface construction.
Three different CDFs [(i) normal, (ii) log-normal, and (iii) Weibull] were tested, and the one with the best performance in each case was selected. The parameters of Eq. (10) were estimated with the maximum likelihood estimation (MLE) method and a Newton-Raphson optimization technique, as recommended by Baker (2015) when working with the MSA method. In the following sections, an application of the proposed methodology to assess the seismic vulnerability of a concrete gravity dam case study is presented, where the effect of the different model parameters influencing the seismic response of the dam is explicitly considered in the fragility analysis.
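As an illustration of step 1, a lognormal fragility curve can be fitted to the stripe point estimates by maximum likelihood following Baker (2015); the sketch below uses SciPy's generic minimizer rather than the Newton-Raphson scheme mentioned above and assumes arrays `n_exceed[i]` exceedances out of `n_total[i]` metamodel evaluations at IM level `im_levels[i]`.

```python
import numpy as np
from scipy import stats, optimize

def fit_lognormal_fragility(im_levels, n_exceed, n_total):
    """MLE fit of a lognormal fragility curve P(LS > ls | IM) = Phi(ln(IM/theta)/beta)."""
    def neg_log_likelihood(params):
        theta, beta = params
        p = stats.norm.cdf(np.log(im_levels / theta) / beta)
        p = np.clip(p, 1e-10, 1 - 1e-10)
        # Binomial likelihood of the observed exceedance counts at each stripe
        return -np.sum(n_exceed * np.log(p) + (n_total - n_exceed) * np.log(1 - p))

    res = optimize.minimize(neg_log_likelihood, x0=[np.median(im_levels), 0.4],
                            bounds=[(1e-6, None), (1e-6, None)])
    return res.x   # theta (median) and beta (logarithmic standard deviation)
```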

Case Study: Description and Modeling

The proposed methodology in this study is applied to a case study gravity dam in Quebec, Canada. It possesses 19 unkeyed monoliths, a maximum crest height of 78 m, and a crest length of 300 m. The dam was chosen for its simple and almost symmetric geometry, its well documented dynamic behavior, and the availability of forced vibration test results used to calibrate the dynamic properties of the numerical model (Proulx and Paultre 1997).

Finite-Element Model

The tallest monolith of the dam is selected as representative of the structure and was modeled with the explicit finite-element software LS-DYNA version R9.1.0, as shown in Fig. 4, following the recommendations of the United States Bureau of Reclamation (USBR) (Mills-Bria et al. 2013). Only one load case combination was considered, which included self-weight, hydrostatic thrust, uplift, hydrodynamic effects, and seismic load. The proposed model takes into account the different interactions among the structure, reservoir, and foundation. The reservoir is modeled with compressible fluid elements, whereas the concrete dam and the rock foundation are modeled with linear elastic materials to which a viscous damping is associated. Given that the model should remain stationary after the static loads are applied, two loading phases were considered: (1) a dynamic relaxation phase for the static loads, and (2) a dynamic phase for the seismic loads, each with different boundary conditions. In the first loading phase, a symmetric boundary condition was applied where the normal displacements are zero. For the dynamic phase, nonreflective boundaries were included to prevent artificial amplification of the seismic waves because of the finite length of the foundation and reservoir. Concerning the contact surfaces between the different components of the system, sliding contact with zero friction was used to model the dam-reservoir interface. For the reservoir-foundation interface, tied contact was applied, except near the upstream face of the dam, where sliding contact with zero friction was used to maintain the reservoir load during the sliding of the dam. Preliminary linear analyses identified the concrete-rock interface at the base of the dam and the concrete-concrete interface at the crest of the dam as areas of high tensile stresses and, therefore, where cracking and sliding could occur. Consequently, the model nonlinearity was constrained to these two areas only, using tiebreak contact elements with a tension-shear failure criterion. Further details of the modeling assumptions and the validation of the numerical model can be found in Bernier et al. (2016).
Fig. 4. Finite element model of the case study dam.

Model Parameter Uncertainty

Table 1 presents the parameters that were considered random variables in the numerical analysis of the dam response and for which the uncertainty was formally included through their probability distribution functions (PDFs). All of the remaining input parameters were kept constant and represented by their best estimate values. For the case study dam, due to limited availability of material investigations, the probability distributions were defined using empirical data of similar dams. The uniform distribution was used for most parameters except for damping, for which a log-normal distribution was adopted as proposed by Ghanaat et al. (2012). By posing the resulting metamodels and fragilities as functions of MPs that have a significant impact on the behavior of the dam, future studies may integrate these models with emerging estimates of these parameters or updated PDFs.
Table 1. Parameter distributions considered for statistical design of experiments
Parameter | PDF | Distribution parameters
Concrete-to-rock tensile strength (MPa) | Uniform | L = 0.2, U = 1.5
Concrete-to-concrete tensile strength (MPa) | Uniform | L = 0.3, U = 2.0
Concrete-to-rock cohesion (MPa) | Uniform | L = 0.3, U = 2.0
Concrete-to-concrete cohesion (MPa) | Uniform | L = 0.9, U = 2.5
Concrete-to-rock angle of friction (degrees) | Uniform | L = 42, U = 55
Concrete-to-concrete angle of friction (degrees) | Uniform | L = 42, U = 55
Drain efficiency (%) | Uniform | L = 0.0, U = 66
Concrete damping (%) | Log-normal | λ = 2.99, ζ = 0.35

Damage Limit States

In recent years, typical damage modes that could lead to potential collapse in dams after a seismic event have been identified, and seismic damage levels can be established. Preliminary analyses have confirmed sliding as the critical failure mode for the case study dam (Bernier et al. 2016), and other failure modes would only occur after sliding had already been observed. As a result, the base sliding damage limit states at the concrete-to-rock interface proposed in Tekie and Ellingwood (2003) are considered in this study and presented in Table 2. The incipient sliding limit state should only cause minor damage because well-dimensioned dams should be able to undergo slight deformations or displacements while remaining stable. The moderate damage limit state can be considered as the onset of nonlinear behavior where material cracking occurs, deformations may become permanent, and the drainage system begins to be affected. Displacements greater than 50 mm could cause differential movements between the blocks and potentially lead to the loss of control of the reservoir and very extensive damage, while displacements greater than 150 mm represent a complete damage state and a high probability of collapse of the structure.
Table 2. Limit states considered for the case study dam
Limit state | Base sliding, δmax (mm)
LS0: Slight/minor | 5
LS1: Moderate | 25
LS2: Extensive | 50
LS3: Complete | 150

Seismic Hazard and Ground Motion Selection Method

A probabilistic seismic hazard analysis (PSHA) was performed at the dam site with the computer software OpenQuake Engine version 2.8 to characterize the possible earthquake scenarios at different intensity levels. The hazard levels were defined in terms of the horizontal spectral acceleration at the fundamental period of the structure [SaH(T1) ranging from 0.1g to 1.0g] to conveniently cover the range of spectral accelerations corresponding to return periods from 700 to 30,000 years.
To proceed with the selection of a representative set of ground-motion time series (GMTS), the generalized conditional intensity measure (GCIM) approach (Bradley 2010) was adopted. The purpose of using the GCIM approach is to include the most influential seismic IMs with respect to the structural response. For the case of gravity dam-type structures, peak ground velocity (PGV) was found to be one of the best performing structure-independent scalar ground-motion IMs for correlating with damage (Hariri Ardebili and Saouma 2016a). Similarly, the vertical spectral acceleration SaV is also expected to be relevant in heavy structures of this sort. As a result, the set of considered IMs in the GCIM are SaH(T), SaV(T), and PGV, where SaH(T) and SaV(T) are computed at 20 vibration periods in the range T = 0.2T1 to 2T1, as proposed by Baker (2011), leading to a total of 41 IMs to be considered in addition to the conditioning IM, SaH(T1). The GCIM distribution computed with the abovementioned IMs was then used to simulate and select 250 ground motions. The records were selected from the PEER NGA-West2 database (Ancheta et al. 2013) due to the limited availability of strong ground motion records in the PEER NGA-East database (Goulet et al. 2014). Further details on the PSHA and the record selection procedure can be found in Segura et al. (2018).

Most Viable Metamodel for Maximum Relative Base Sliding Prediction

By selecting different configurations of the model parameters in Table 1, nf = 250 samples of the FEM were generated with LHS and paired with the selected ground motions. The maximum relative sliding at the base (δmax) was computed from the nonlinear simulations, and 14 regression metamodels were fitted to the structural response. An initial prescreening of the covariates or predictors was made before selecting the algorithm, based on visual inspection of the scatter plots of the possible predictors with respect to the response of the structure and taking into account the parameters affecting the dynamic behavior of the structure. This initial set of predictors (input) was used to train all the considered metamodels to perform a comparative analysis. In addition to the model parameters listed in Table 1, several seismic intensity measures were considered in the starting set of variables, such as the spectral acceleration at the fundamental period [SaH(T1)], the spectral velocity at the fundamental period [SvH(T1)], the peak ground displacement, velocity, and acceleration (PGD, PGV, PGA), the spectrum intensity (SI), the earthquake angular frequency (ωeqk), the significant duration (D5-95), the Arias intensity (Ia), and the peak ground acceleration and spectral acceleration at the fundamental period in the vertical direction [PGAV, SaV(T1)]. Given that some algorithms already perform an internal selection of the predictors (with forward and backward iterations), the selected final set of predictors (output) in each metamodel is a subset of the initial set of predictors.

Comparison of Metamodel Predictive Capabilities

The performance of the metamodels, namely, their ability to predict the sliding response of the dam, is judged based on the goodness-of-fit estimates shown in Table 3. In general, good performance of the adaptive-type metamodels and relatively poor performance of the kernel-based metamodels were observed, which could be explained by the small training set. Given that the number of samples considered to train the different algorithms can have an effect on their performance, Fig. 5 presents the variation of the prediction performance of the considered metamodeling techniques with respect to the size of the training set. In general, it can be seen that the prediction accuracy from 5-fold crossvalidation improves as the number of training samples increases and that all considered metamodels perform poorly with small training samples (<50). However, it should be noted that for some metamodels, after a certain number of samples, the goodness-of-fit estimates remain almost constant or improve at a relatively low rate. Indeed, for these metamodels, increasing the size of the training set beyond a certain number will not translate into a significant improvement of the predictive capabilities of the algorithm. This is the case for ABFC, PRS O2–O4, RFR, and, to a lesser extent, MARS and RBF, where beyond 175 training samples the improvement of the prediction accuracy is relatively low. On the contrary, for SVMR with the different kernel functions, it can be observed that the performance of the algorithm keeps progressively increasing as the number of samples increases. Moreover, although the local and global performance of SVMR with linear and quadratic kernels is reasonable, it should be mentioned that approximately 72% of the samples in the dataset were used as support vectors in the multidimensional feature space to find the hyperplane that fits the given samples. This is the result of a highly nonlinear feature space, and a larger sample should be used to validate this type of model.
Table 3. Metamodel comparison for base sliding
Metamodel | Configuration | RMSE | R2 | RMAE | 5-CV RMSE | 5-CV R2 | 5-CV RMAE
PRS (a) | Order 2 | 0.367 | 0.857 | 1.245 | 0.381 | 0.846 | 0.945
PRS (a) | Order 3 | 0.324 | 0.889 | 1.156 | 0.342 | 0.876 | 0.923
PRS (a) | Order 4 | 0.319 | 0.907 | 1.024 | 0.321 | 0.887 | 0.898
ABFC (a) | | 0.308 | 0.900 | 1.149 | 0.360 | 0.860 | 1.040
MARS (a) | Linear | 0.281 | 0.916 | 0.920 | 0.408 | 0.810 |
MARS (a) | Cubic | 0.298 | 0.905 | 1.040 | 0.383 | 0.841 |
RBF (b) | Multiquadratic | 0.355 | 0.866 | 1.137 | | |
RBF (b) | Thin plate | 0.373 | 0.846 | 1.165 | | |
RBF (b) | Gaussian | 0.468 | 0.767 | 1.310 | | |
SVMR (c) | Linear | 0.361 | 0.862 | 1.243 | 0.379 | 0.843 | 0.956
SVMR (c) | Quadratic | 0.302 | 0.904 | 1.184 | 0.405 | 0.824 | 1.136
SVMR (c) | Cubic | 0.267 | 0.924 | 1.056 | 0.859 | 0.213 | 3.582
SVMR (c) | RBF | 0.194 | 0.959 | 0.775 | 0.651 | 0.554 | 1.483
RFR (d) | | 0.240 | 0.939 | 0.987 | 0.406 | 0.824 | 1.113
(a) Adaptive algorithm. (b) Interpolation scheme. (c) Kernel-based algorithm. (d) Decision tree-based algorithm.
Fig. 5. Prediction capabilities comparison versus size of the training set: (a) 5-CV RMSE; (b) 5-CV R2; and (c) 5-CV RMAE.
Goodness-of-fit estimates were calculated to evaluate how closely the metamodel's predicted values match the FEM simulation (true) values, considering the whole dataset for training and testing. Despite the fact that some algorithms showed superior performance, it is also known that some of them tend to overfit the data. To this end, the metamodels were trained and validated using 5-CV, and the performance was evaluated by calculating the mean of the goodness-of-fit estimates obtained from each fold. Consequently, by comparing the estimates from the algorithm trained and validated with the whole dataset with the ones from crossvalidation, it is possible to identify model overfitting. Fig. 6 presents the comparison of the goodness-of-fit estimates of the metamodels trained with the entire training set and the average result from 5-CV. These indicators should be as close as possible to ensure that the metamodel can predict accurately for the cases it was trained with as well as for unknown cases, i.e., to prevent overfitting. From Fig. 6, it can be seen that in terms of RMSE and R2, the PRS metamodels present similar values for both cases. Similarly, in terms of RMAE, the ABFC and the MARS surrogates, for which the estimates were calculated with the entire training set, are very close to those from 5-CV. On the other hand, the SVMR behavior suggests overfitting of the metamodel, given its large capacity to describe the training dataset while failing to predict beyond it to unseen cases. The RFR metamodel presents fair results regarding the R2 and RMAE, but the difference between the RMSE for the training data and the average 5-CV RMSE reflects some limitations in accurately predicting the response of the system. From Figs. 5 and 6 and Table 3, the best metamodel in terms of predictive capabilities is the 4th-order PRS (PRS O4), which is evident from the 5-CV results.
Fig. 6. Goodness-of-fit–training data versus crossvalidation average: (a) RMSE; (b) R2; and (c) RMAE.

Polynomial Response Surface Metamodel

From the metamodel comparison, the selected surrogate is a polynomial response surface of 4th-order (PRS O4) as a function of three model parameters, three seismic intensity measures, and their transformations and pairwise products
$\delta_{max} = g(\mathbf{X}) = g(\underbrace{CRF, DR, CRC}_{\text{model parameters}},\; \underbrace{PGV, I_a, PGA_V}_{\text{seismic IMs}}) + v$
(14)
$v \sim N(v; 0, \sigma_v^2)$
(15)
where CRF, DR, and CRC = model parameters corresponding to the concrete-rock angle of friction, drain efficiency, and concrete-rock cohesion, respectively; PGV = peak ground velocity; PGAV = peak ground acceleration in the vertical direction; and Ia = Arias intensity. A normally distributed model error term v with zero mean and standard deviation equal to the RMSE is added to the selected surrogate model to reflect the lack of fit (Barnston 1992; Kameshwar and Padgett 2014; Sichani et al. 2017).
The stepwise regression algorithm in MATLAB R2019b was implemented together with PRS to avoid the burden of manually selecting the predictors. This first filter was used because the PRS metamodel considers all the covariates present in the dataset. The algorithm starts with a constant term to predict the response. In the next step, one predictor is added to the model, and the performance of the model is evaluated based on the Bayesian information criterion (BIC). If the model performance improves, the added term is kept; otherwise, it is removed, and this process is repeated until all the proposed predictors are tested. As a result of the stepwise regression, the best predictors from each 5-CV fold were selected. A polynomial regression model considering the interaction between the predictors was fitted to the dataset. In addition to the 5-CV method, which shows the overall accuracy of the model, the p-values associated with the explanatory functions were controlled to be smaller than 0.05, ensuring that the final terms included in the model are not selected by chance. Fig. 7(a) shows that the predicted values obtained with the selected metamodel are in agreement with the simulated dataset, while Figs. 7(b and c) show that the residuals follow a normal distribution and that the assumption of independent observation errors is respected.
Fig. 7. PRS O4: (a) Predicted versus simulated values (millimeters); (b) residuals histogram; and (c) fitted value residuals.
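A simplified sketch of the forward stepwise selection driven by BIC described above is given below; it performs only a greedy forward pass (without the removal of terms) and is written in Python rather than MATLAB's stepwise routine, so it should be read as an illustration of the idea, not the exact procedure used in the study.

```python
import numpy as np

def forward_stepwise_bic(B, y, max_terms=30):
    """Greedy forward selection of basis-function columns of B that lower the BIC."""
    n = len(y)
    selected, best_bic = [], np.inf
    while len(selected) < max_terms:
        best_candidate = None
        for j in range(B.shape[1]):
            if j in selected:
                continue
            cols = selected + [j]
            X = np.column_stack([np.ones(n), B[:, cols]])
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = np.sum((y - X @ coef) ** 2)
            bic = n * np.log(rss / n) + (len(cols) + 1) * np.log(n)
            if bic < best_bic:
                best_bic, best_candidate = bic, j
        if best_candidate is None:      # no remaining term improves the BIC
            break
        selected.append(best_candidate)
    return selected
```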

Dam Sample Generation and Multivariate Fragility Functions

Regarding the generation of the samples where the metamodel will be evaluated to predict the dam's response and derive point estimates of fragility, independence among all the MPs was considered to generate 5 × 10^5 samples with LHS. Nevertheless, it should be noted that for a specific site, the seismic IMs are correlated. For the case study dam, and considering the 250 GMTS used to train the metamodels, a linear correlation is to be expected, as seen in Fig. 8. Fig. 9 shows the histograms of the logarithms of the seismic IMs, which follow an approximately normal distribution. Based on this, the samples of these parameters are taken from a jointly log-normal distribution with their respective correlation coefficients. In addition, to consider a range of values corresponding to return periods from 700 to 30,000 years at the dam site, the possible values of PGV, Ia, and PGAV were bounded, respectively, as 0.8 cm/s ≤ PGV ≤ 25 cm/s, 0.0 m/s ≤ Ia ≤ 2.5 m/s, and 0.01g ≤ PGAV ≤ 0.25g.
Fig. 8. Linear correlation between the seismic IMs from the GMTS.
Fig. 9. Probability distribution of seismic predictors.
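A sketch of the jointly lognormal sampling of the correlated IMs is shown below; the bounds are those quoted above, but the mean, standard deviation, and correlation values are placeholders, not the statistics actually computed from the 250 selected ground motions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Placeholder lognormal parameters (mean and std of ln IM) and correlation matrix;
# in the study these are estimated from the 250 selected GMTS.
mu_ln = np.array([np.log(5.0), np.log(0.3), np.log(0.05)])   # ln PGV, ln Ia, ln PGAV
sd_ln = np.array([0.8, 1.0, 0.7])
corr = np.array([[1.0, 0.8, 0.6],
                 [0.8, 1.0, 0.7],
                 [0.6, 0.7, 1.0]])

cov = corr * np.outer(sd_ln, sd_ln)
ims = np.exp(rng.multivariate_normal(mu_ln, cov, size=n))    # jointly lognormal samples

# Bound the samples to the 700- to 30,000-year range used in the study
lo = np.array([0.8, 0.0, 0.01])       # PGV (cm/s), Ia (m/s), PGAV (g)
hi = np.array([25.0, 2.5, 0.25])
ims = ims[np.all((ims >= lo) & (ims <= hi), axis=1)]
```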
The fragility analysis was performed following the MSA methodology depicted in Fig. 2. The range of each of the parameters of the fragility surface was divided into 100 intervals. As a result, 10^4 fragility point estimates were generated for each limit state. Moreover, to display how the variation of the model parameters alters the seismic fragility, fragility surfaces as a function of PGV and each of the model parameters considered in the metamodel (CRF, DR, and CRC) were generated. It is noteworthy that PGV was selected as the IM to be displayed in the fragility surfaces not only because of its practicality in terms of database availability and value accessibility but also because, as previously mentioned, it was found to be one of the best performing ground-motion IMs for correlating with the proposed damage states (Hariri Ardebili and Saouma 2016a).
Parametric fragility surfaces for each limit state and for each MP were generated, implementing the methodology explained in Fig. 3. The resulting fragility surfaces are depicted in Figs. 10–12, and the goodness of fit between the proposed parameterized fragility surfaces and the fragility point estimates is presented in Table 4.
Fig. 10. Fragility surfaces: Fs (PGV,CRF) for (a) LS0; (b) LS1; (c) LS2; and (d) LS3.
Fig. 11. Fragility surfaces: Fs (PGV, DR) for (a) LS0; (b) LS1; (c) LS2; and (d) LS3.
Fig. 12. Fragility surfaces: Fs (PGV, CRC) for (a) LS0; (b) LS1; (c) LS2; and (d) LS3.
Table 4. Fragility surfaces’ goodness of fit
Parameters | Limit state | R2 | RMSE | RMAE | 5-CV R2 | 5-CV RMSE | 5-CV RMAE
PGV-CRF | LS0 | 0.997 | 0.013 | 0.207 | 0.991 | 0.014 | 0.215
PGV-CRF | LS1 | 0.997 | 0.015 | 0.253 | 0.992 | 0.016 | 0.277
PGV-CRF | LS2 | 0.996 | 0.014 | 0.314 | 0.991 | 0.015 | 0.364
PGV-CRF | LS3 | 0.991 | 0.010 | 0.449 | 0.991 | 0.014 | 0.480
PGV-DR | LS0 | 0.997 | 0.013 | 0.217 | 0.990 | 0.014 | 0.291
PGV-DR | LS1 | 0.997 | 0.014 | 0.263 | 0.995 | 0.015 | 0.293
PGV-DR | LS2 | 0.996 | 0.014 | 0.252 | 0.994 | 0.016 | 0.285
PGV-DR | LS3 | 0.987 | 0.016 | 0.752 | 0.948 | 0.018 | 0.649
PGV-CRC | LS0 | 0.994 | 0.023 | 0.809 | 0.991 | 0.024 | 0.724
PGV-CRC | LS1 | 0.997 | 0.017 | 0.288 | 0.993 | 0.018 | 0.295
PGV-CRC | LS2 | 0.995 | 0.017 | 0.348 | 0.992 | 0.017 | 0.390
PGV-CRC | LS3 | 0.987 | 0.015 | 0.803 | 0.973 | 0.016 | 0.882

Effect of Model Parameter Variation in the Fragility Analysis

Concrete-Rock Angle of Friction

Fragility curves as a function of PGV were calculated for each CRF intensity level, according to Eq. (10), where Φl in this case is the Weibull CDF. The parameters characterizing the CDF were plotted for each CRF value and for each limit state, as shown in Fig. 13. A linear function was fitted to each parameter whose regression coefficients are shown in Table 5. Finally, the parametric fragility surface is depicted by Eq. (16)
$F_s(PGV, CRF) = \mathrm{CDF}_W\big(PGV,\; p_{\theta 1}\,CRF + p_{\theta 2},\; p_{\beta 1}\,CRF + p_{\beta 2}\big)$
(16)
where the CDF is also a Weibull distribution to be consistent with the fragility curves. As seen from Fig. 10 and Table 4, the parametric fragility surfaces fit fairly well with the fragility point estimates.
Fig. 13. CRF fragility curves parameters regression: (a) θ^CRF—LS0; (b) β^CRF—LS0; (c) θ^CRF—LS1; (d) β^CRF—LS1; (e) θ^CRF—LS2; (f) β^CRF—LS2; (g) θ^CRF—LS3; and (h) β^CRF—LS3.
Table 5. CRF regression coefficients
Limit state | pθ1 | pθ2 | pβ1 | pβ2
LS0 | 2.619 | 3.548 | 0.471 | 0.799
LS1 | 4.928 | 7.753 | 0.727 | 0.820
LS2 | 6.358 | 10.795 | 0.920 | 0.741
LS3 | 6.836 | 21.918 | 1.412 | 0.439
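For illustration, Eq. (16) with the LS3 coefficients from Table 5 could be evaluated as in the sketch below; note that the mapping of θ and β to the Weibull scale and shape parameters, and the units or scaling in which CRF enters the fitted regressions, are assumptions of this sketch, so the commented call is only a placeholder.

```python
import numpy as np
from scipy.stats import weibull_min

# LS3 regression coefficients from Table 5 (used here for illustration only)
p_t1, p_t2 = 6.836, 21.918
p_b1, p_b2 = 1.412, 0.439

def fragility_ls3_crf(pgv, crf):
    """Eq. (16): Weibull fragility surface for LS3 as a function of PGV and CRF.
    theta is taken as the Weibull scale and beta as the shape parameter; this
    mapping, and the scaling of the CRF argument, are assumptions of the sketch."""
    theta = p_t1 * crf + p_t2
    beta = p_b1 * crf + p_b2
    return weibull_min.cdf(pgv, c=beta, scale=theta)

# Placeholder call; the actual CRF scaling follows the fitted regressions in Fig. 13
# print(fragility_ls3_crf(pgv=15.0, crf=0.9))
```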

Drain Efficiency

The same procedure described in the above section was used to develop fragility surfaces as a function of PGV and DR; however, a quadratic fit-type function was used instead of a linear one, as seen in Fig. 14. The parameterized formulation is described by Eq. (17)
$F_s(PGV, DR) = \mathrm{CDF}_W\big(PGV,\; p_{\theta 1}\,DR + p_{\theta 2}\,DR^2 + p_{\theta 3},\; p_{\beta 1}\,DR + p_{\beta 2}\,DR^2 + p_{\beta 3}\big)$
(17)
where pθ1,2,3 and pβ1,2,3 = regression coefficients shown in Table 6. Fig. 11 shows the resulting fragility surfaces.
Fig. 14. DR fragility curve parameter regression: (a) θ^DR—LS0; (b) β^DR—LS0; (c) θ^DR—LS1; (d) β^DR—LS1; (e) θ^DR—LS2; (f) β^DR—LS2; (g) θ^DR—LS3; and (h) β^DR—LS3.
Table 6. DR regression coefficients
Limit state | pθ1 | pθ2 | pθ3 | pβ1 | pβ2 | pβ3
LS0 | 4.483 | 3.964 | 4.697 | 0.027 | 0.274 | 1.314
LS1 | 16.701 | 5.163 | 9.621 | 0.230 | 0.393 | 1.637
LS2 | 34.515 | 3.821 | 12.780 | 0.433 | 0.454 | 1.782
LS3 | 123.314 | 10.199 | 20.957 | 0.713 | 0.432 | 2.033

Concrete-Rock Cohesion

Similarly, the fragility surfaces as a function of PGV and CRC were generated. In this case, as displayed in Fig. 15, an exponential function was fitted to each parameter defining the fragility curves, and a log-normal CDF was used for the fragility functions. The parametric fragility surfaces result in the following
$F_s(PGV, CRC) = \mathrm{CDF}_{\ln N}\big(PGV,\; p_{\theta 1}\,CRC^{\,p_{\theta 2}} + p_{\theta 3},\; p_{\beta 1}\,CRC^{\,p_{\beta 2}} + p_{\beta 3}\big)$
(18)
where pθ1,2,3 and pβ1,2,3 = regression coefficients presented in Table 7. The fragility surfaces are shown in Fig. 12.
Fig. 15. CRC fragility curve parameter regression: (a) θ^CRC—LS0; (b) β^CRC—LS0; (c) θ^CRC—LS1; (d) β^CRC—LS1; (e) θ^CRC—LS2; (f) β^CRC—LS2; (g) θ^CRC—LS3; and (h) β^CRC—LS3.
Table 7. CRC regression coefficients
Limit state | pθ1 | pθ2 | pθ3 | pβ1 | pβ2 | pβ3
LS0 | 1.074 | 1.079 | 2.619 | 0.378 | 0.996 | 0.274
LS1 | 0.928 | 0.758 | 3.247 | 0.306 | 1.049 | 0.310
LS2 | 0.510 | 0.977 | 3.177 | 0.341 | 0.989 | 0.288
LS3 | 0.025 | 2.931 | 3.285 | 0.415 | 0.934 | 0.249

Application to Seismic Assessment of Dams

The resulting fragility functions can also be used to assess the seismic performance of the dam by formulating recommendations with respect to the model parameters. To achieve a desired seismic performance, boundaries of the model parameters for an adequate performance under extreme limit states can be formulated. The usable range of values is determined by ensuring that the probability of exceedance, given an extreme event, is in line with the current guidelines for the minimum provisions for life safety. To establish the admissible values of the model parameters, a return period of 10,000 years, which corresponds to the maximum considered earthquake (MCE) (ASCE 2016; FEMA 2015; CDA 2007), was used. The peak ground velocity associated with the MCE (PGVMCE) was determined from the PSHA, and from the fragility surfaces corresponding to the complete damage limit state (LS3), the fragility curves as a function of the model parameters were extracted for the PGVMCE value.
According to ASCE 7 (ASCE 2016), for a risk category III (dam-type structures) and total or partial structural collapse, the maximum probability of failure should be less than 6%. Consequently, and as shown in Fig. 16, the boundaries of the model parameters were established to provide a probability of exceedance less than 6% for an MCE seismic scenario. As a result, a range of values of 50° ≤ CRF ≤ 55°, 0.35 ≤ DR ≤ 0.66, and 0.87 MPa ≤ CRC ≤ 2.0 MPa should be considered to ensure that the performance of the dam is in line with the minimum provisions for life safety. Nonetheless, it should be acknowledged that these parameter ranges are derived from a one-parameter-at-a-time evaluation, and the joint interaction of the model parameters should be further examined.
Fig. 16. Admissible model parameter range of values for MCE events: (a) concrete-rock angle of friction; (b) drain efficiency; and (c) concrete-rock cohesion.
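A sketch of how an admissible parameter range of this kind could be extracted from a fitted fragility surface is given below; the callable `fragility_surface` and the PGV_MCE value are placeholders for the fitted LS3 surface and the PSHA result of the study.

```python
import numpy as np

def admissible_range(fragility_surface, pgv_mce, mp_grid, target=0.06):
    """Model parameter values whose LS3 exceedance probability at the MCE demand
    stays below the target (6% per ASCE 7 for risk category III)."""
    pf = np.array([fragility_surface(pgv_mce, mp) for mp in mp_grid])
    return mp_grid[pf <= target]

# Hypothetical usage with a fitted surface (e.g., the earlier Weibull sketch) and a
# placeholder PGV_MCE value taken from the PSHA:
# crf_ok = admissible_range(fragility_ls3_crf, pgv_mce=15.0,
#                           mp_grid=np.linspace(42.0, 55.0, 500))
```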

Conclusions

The main objective of this study was to identify the most viable metamodel among those considered for the seismic fragility assessment of gravity dams, present a methodology for the development of multivariate fragility functions displaying the effect of the model parameter variation on the dam seismic performance, and formulate design recommendations from the analysis.
The methodology was applied to a case study dam located in northeastern Canada. PSHA was performed to characterize the seismic hazard at the dam site and to select 250 ground motions, consistent with the latter, using the GCIM approach. The sets of selected ground motion records were paired with the 250 samples of numerical models of the dam generated with LHS, representing different material and loading configurations of the system. The dataset used to train the metamodels was generated by performing nonlinear dynamic analysis of these samples, with an FEM that considered the fluid-structure-foundation interaction and by extracting the maximum relative sliding at the base of the dam. Six different types of metamodels with different configurations (basis and interpolation functions) were fitted to the seismic response of the case study dam to predict the sliding limit state at the base of the dam. The 4th-order PRS emerged as the best performing metamodel based on local and global goodness-of-fit estimates from 5-CV, and it was used to generate fragility surfaces as a function of PGV and each of the model parameters. It is observed that the variability of the concrete-rock cohesion model parameter has the most influence on the fragility analysis estimates, followed by the drain efficiency and to a lesser extent, the concrete-rock angle of friction. Finally, a seismic assessment was performed to determine the model parameter boundaries for adequate performance under extreme events, such as the MCE, to respect the minimum safety margins proposed by the current guidelines.
It should be noted that machine learning techniques are indispensable when assessing the vulnerability of structures with computationally expensive FEMs, such as gravity dams subjected to seismic loading. Similarly, the use of surrogate models allows the impact of different parameters on the fragility to be explored without the costly reevaluation of the FEM simulations. Regarding the fragility functions, as evidenced by the case presented herein and the associated goodness-of-fit values, the fragility estimates are well captured by the methodology suggested in this study to fit parametric fragility surfaces.
It is expected that the results of this study can lead to more accurate planning and retrofitting policies, expedite the safety assessment of dams under seismic loads while supporting the decision-making process, and guide the preliminary design of future gravity dams. Future studies should provide additional insight into the correlation between the parameters defining the model configurations and should consider other relevant limit states for gravity dams. Additionally, the model parameter variations in the fragility analysis should be explored further to provide parametric fragility functions that include the joint interaction of these parameters using classification metamodeling techniques.

Notation

The following symbols are used in this paper:
a = constant coefficient vector for PRS metamodel;
a0 = bias constant term for RBF metamodel;
a1, …, am = connection weights between hidden layers for RBF metamodel;
b(·) = basis functions of MARS metamodel;
C = SVM cost function;
c = constant coefficients for MARS metamodel;
D5–95 = earthquake significant duration;
f(·) = decision tree base models for RFR;
g(·) = surrogate model function;
Ia = Arias intensity;
L = uniform PDF lower bound;
N(·) = normal distribution;
NFLS = normal failure stress;
nf = number of FE simulations;
On = nth order of the polynomial response surface;
PGAV = vertical peak ground acceleration;
PGVMCE = peak ground velocity associated with the MCE;
p1, …, pn = parameters characterizing a limit state;
pθ, pβ = regression coefficients;
R2 = coefficient of determination;
SaH(T) = horizontal spectral acceleration corresponding to period T;
SaV(T) = vertical spectral acceleration corresponding to period T;
SvH(T1) = horizontal spectral velocity corresponding to period T1;
SFLS = shear failure stress;
T = period of vibration;
T1 = fundamental mode period;
U = uniform PDF upper bound;
v = normally distributed model error vector;
v = model error;
W(·) = Weibull distribution;
w(·) = radial function for RBF metamodel;
X = covariate matrix;
x = row vector of the covariate matrix;
y = structural response;
ŷ = predicted structural response;
δmax = maximum relative sliding at the base;
ϵ = SVM tolerance width margin;
ζ = standard deviation of the damping log-normal PDF;
Θ = vector of basis functions for PRS metamodel;
θ, β = parameters characterizing a cumulative density function;
θ̂, β̂ = analytical formulation of the cumulative density function parameters;
κ(·) = fit-type regression function;
λ = mean of the damping log-normal PDF;
ρ = linear correlation coefficient between IMs;
σ = standard deviation of the response in the dataset;
σn = normal stress;
σs = shear stress;
σv = standard deviation of the normally distributed model error;
Φ(·) = cumulative density function;
ΦRBFi(·) = nonlinear mapping function for RBF metamodel;
ϕ = concrete-rock angle of friction; and
ωeqk = earthquake angular frequency.

Acknowledgments

The authors acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Fonds de recherche du Québec Nature et technologies (FRQNT), the Centre d’études interuniversitaire des structures sous charges extrêmes (CEISCE), and the Centre de recherche en génie parasismique et en dynamique des structures (CRGP). Computational resources for this project were provided by Compute Canada and Calcul Quebec. The authors also thank Carl Bernier and all the students in Padgett Research Group at Rice University for their collaboration and valuable remarks on various aspects of this study. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.

References

Alembagheri, M. 2018. “Investigating efficiency of vector-valued intensity measures in seismic demand assessment of concrete dams.” Adv. Civ. Eng. 2018 (12): 1. https://doi.org/10.1155/2018/5675032.
Ancheta, T. D., et al. 2013. PEER NGA-West2 database. Berkeley, CA: Pacific Earthquake Engineering Research Center.
ASCE. 2016. Minimum design loads for buildings and other structures. Reston, VA: ASCE.
Ataei, N. 2013. “Vulnerability assessment of coastal bridges subjected to hurricane events.” Ph.D. thesis, Dept. of Civil and Environmental Engineering, Rice Univ.
Baker, J. W. 2011. “Conditional mean spectrum: Tool for ground-motion selection.” J. Struct. Eng. 137 (3): 322–331. https://doi.org/10.1061/(ASCE)ST.1943-541X.0000215.
Baker, J. W. 2013. “Trade-offs in ground motion selection techniques for collapse assessment of structures.” In Proc., Congress on Recent Advances in Earthquake Engineering and Structural Dynamics, edited by C. Adam, R. Heuer, W. Lenhardt, and C. Schranz. Wien, Austria: Vienna Univ. of Technology.
Baker, J. W. 2015. “Efficient analytical fragility function fitting using dynamic structural analysis.” Earthquake Spectra 31 (1): 579–599. https://doi.org/10.1193/021113EQS025M.
Barnston, A. G. 1992. “Correspondence among the correlation, RMSE, and Heidke verification measures; refinement of the Heidke score.” Weather Forecasting 7 (4): 699–709. https://doi.org/10.1175/1520-0434(1992)007%3C0699:CATCRA%3E2.0.CO;2.
Bernier, C., J. Padgett, J. Proulx, and P. Paultre. 2016. “Seismic fragility of concrete gravity dams with modeling parameter uncertainty and spatial variation.” J. Struct. Eng. 142 (5): 05015002. https://doi.org/10.1061/(ASCE)ST.1943-541X.0001441.
Bradley, B. A. 2010. “A generalized conditional intensity measure approach and holistic ground-motion selection.” Earthquake Eng. Struct. Dyn. 39 (12): 1321–1342. https://doi.org/10.1002/eqe.995.
Brandenberg, S. J., J. Zhang, P. Kashighandi, Y. Huo, and M. Zao. 2011. Demand fragility surfaces for bridges in liquefied and laterally spreading ground. Berkeley, CA: Pacific Earthquake Engineering Research Center.
CDA (Canadian Dam Association). 2007. Dam safety guidelines. Edmonton, AB, Canada: CDA.
Efron, B., and R. J. Tibshirani. 1993. An introduction to the bootstrap: Number 57 in Monographs on Statistics and Applied Probability. Boca Raton, FL: Chapman & Hall/CRC.
Ellingwood, B., and P. B. Tekie. 2001. “Fragility analysis of concrete gravity dams.” J. Infrastruct. Syst. 7 (2): 41–48. https://doi.org/10.1061/(ASCE)1076-0342(2001)7:2(41).
Fan, R., P. Chen, and C. Lin. 2005. “Working set selection using second order information for training support vector machines.” J. Mach. Learn. Res. 6 (Dec): 1889–1918. https://doi.org/10.5555/1046920.1194907.
FEMA. 2015. Federal guidelines for dam safety risk management. FEMA P-1025. Washington, DC: FEMA.
Forrester, A. I. J., A. Sóbester, and A. J. Keane. 2008. Engineering design via surrogate modelling: A practical guide. Chichester: Wiley.
Friedman, J. 1991. “Multivariate adaptive regression splines.” Ann. Stat. 19 (1): 1–67.
Gehl, P., D. Seyedi, J. Douglas, and M. Khiar. 2009. “Introduction of fragility surfaces for a more accurate modeling of the seismic vulnerability of reinforced concrete structures.” In Proc., Conf. on Computational Methods in Structural Dynamics and Earthquake Engineering. Glasgow, UK: European Community on Computational Methods in Applied Science.
Ghanaat, Y., R. C. Patev, and A. K. Chudgar. 2012. “Seismic fragility analysis of concrete gravity dams.” In Proc., 15th World Conf. on Earthquake Engineering. Tokyo: International Association for Earthquake Engineering.
Ghosh, J., J. E. Padgett, and L. Dueñas-Osorio. 2013. “Surrogate modeling and failure surface visualization for efficient seismic vulnerability assessment of highway bridges.” Probab. Eng. Mech. 34 (Oct): 189–199. https://doi.org/10.1016/j.probengmech.2013.09.003.
Ghosh, S., S. Ghosh, and S. Chakraborty. 2017. “Seismic fragility analysis in the probabilistic performance-based earthquake engineering framework: An overview.” Int. J. Adv. Eng. Sci. Appl. Math. 1–14. https://doi.org/10.1007/s12572-017-0200-y.
Goulet, C. A., et al. 2014. PEER NGA-east database. Berkeley, CA: Pacific Earthquake Engineering Research Center.
Goulet, J. A. 2018. Probabilistic machine learning for civil engineers. Cambridge, MA: MIT Press.
Grigoriu, M., and E. Mostafa. 2002. “Fragility surfaces as a measure of seismic performance.” In Proc., 7th US National Conf. on Earthquake Engineering. Oakland, CA: Earthquake Engineering Research Institute.
Hardy, R. 1971. “Multiquadric equations of topography and other irregular surfaces.” J. Geophys. Res. 76 (8): 1905–1915. https://doi.org/10.1029/JB076i008p01905.
Hariri-Ardebili, M., and V. Saouma. 2016a. “Probabilistic seismic demand model and optimal intensity measure for concrete dams.” Struct. Saf. 59 (Mar): 67–85. https://doi.org/10.1016/j.strusafe.2015.12.001.
Hariri-Ardebili, M., and V. Saouma. 2016b. “Seismic fragility analysis of concrete dams: A state-of-the-art review.” Eng. Struct. 128 (Dec): 374–399. https://doi.org/10.1016/j.engstruct.2016.09.034.
Hariri-Ardebili, M. 2018. “MCS-based response surface metamodels and optimal design of experiments for gravity dams.” Struct. Infrastruct. Eng. 14 (12): 1641–1663. https://doi.org/10.1080/15732479.2018.1469650.
Hariri-Ardebili, M., and F. Pourkamali-Anaraki. 2017a. “Simplified reliability analysis of multi hazard risk in gravity dams via machine learning techniques.” Arch. Civ. Mech. Eng. 18 (2): 592–610. https://doi.org/10.1016/j.acme.2017.09.003.
Hariri-Ardebili, M., and F. Pourkamali-Anaraki. 2017b. “Support vector machine based reliability analysis of concrete dams.” Soil Dyn. Earthquake Eng. 104 (Jan): 276–295. https://doi.org/10.1016/j.soildyn.2017.09.016.
Jekabsons, G. 2010a. “Adaptive basis function construction: an approach for adaptive building of sparse polynomial regression models.” In Machine learning, edited by Y. Zhang, 127–155. London: IntechOpen.
Jekabsons, G. 2010b. “VariReg: A software tool for regression modelling using various modelling methods.” Accessed April 24, 2017. http://www.cs.rtu.lv/jekabsons/.
Jekabsons, G. 2016. “ARESLab: Adaptive regression splines toolbox for Matlab/Octave.” Accessed April 24, 2017. http://www.cs.rtu.lv/jekabsons/.
Kameshwar, S., and J. E. Padgett. 2014. “Multi-hazard risk assessment of highway bridges subjected to earthquake and hurricane hazards.” Eng. Struct. 78 (Nov): 154–166. https://doi.org/10.1016/j.engstruct.2014.05.016.
Koutsourelakis, P. S. 2010. “Assessing structural vulnerability against earthquakes using multi-dimensional fragility surfaces: A Bayesian framework.” Probab. Eng. Mech. 25 (1): 49–60. https://doi.org/10.1016/j.probengmech.2009.05.005.
Mangalathu, S., and J. Jeon. 2018. “Stripe-based fragility analysis of concrete bridge classes using machine learning techniques.” Preprint, submitted July 25, 2018. http://arxiv.org/abs/1807.09761.
Mills-Bria, B., R. Koltuniuk, and P. Percell. 2013. State-of-practice for the nonlinear analysis of concrete dams 2013. Denver: US Dept. of the Interior, Bureau of Reclamation.
Muntasir Billah, A. H. M., and M. S. Alam. 2015. “Seismic fragility assessment of highway bridges: A state-of-the-art review.” Struct. Infrastruct. Eng. 11 (6): 804–832. https://doi.org/10.1080/15732479.2014.912243.
Murphy, K. P. 2012. Machine learning a probabilistic perspective. Cambridge, MA: MIT Press.
Pan, Y., A. K. Agrawal, M. Ghosn, and S. Alampalli. 2010. “Seismic fragility of multispan simply supported steel highway bridges in New York State. II: Fragility analysis, fragility curves, and fragility surfaces.” J. Bridge Eng. 15 (5): 462–472. https://doi.org/10.1061/(ASCE)BE.1943-5592.0000055.
Proulx, J., and P. Paultre. 1997. “Experimental and numerical investigation of dam-reservoir-foundation interaction for a large gravity dam.” Can. J. Civ. Eng. 24 (1): 90–105. https://doi.org/10.1139/l96-086.
Salazar, F., M. A. Toledo, T. Oñate, and R. Moran. 2015a. “An empirical comparison of machine learning techniques for dam behaviour modelling.” Struct. Saf. 56 (Sep): 9–17. https://doi.org/10.1016/j.strusafe.2015.05.001.
Salazar, F., M. A. Toledo, T. Oñate, and R. Moran. 2015b. “Data-based models for the prediction of dam behaviour: A review and some methodological considerations.” Arch. Comput. Methods Eng. 24 (1): 1–21. https://doi.org/10.1007/s11831-015-9157-9.
Segura, R., C. Bernier, R. Monteiro, and P. Paultre. 2018. “On the seismic fragility assessment of concrete gravity dams in eastern Canada.” Earthquake Spectra. 35 (1): 211–231. https://doi.org/10.1193/012418EQS024M.
Seo, J., L. Dueñas-Osorio, J. Craig, and B. Goodno. 2012. “Metamodel-based regional vulnerability estimate of irregular steel moment-frame structures subjected to earthquake events.” Eng. Struct. 45 (Dec): 585–597. https://doi.org/10.1016/j.engstruct.2012.07.003.
Seo, J., and D. Linzell. 2013. “Use of response surface metamodels to generate system level fragilities for existing curved steel bridges.” Eng. Struct. 52 (Jul): 642–653. https://doi.org/10.1016/j.engstruct.2013.03.023.
Seo, J., and H. Park. 2017. “Probabilistic seismic restoration cost estimation for transportation infrastructure portfolios with an emphasis on curved steel I-girder bridges.” Struct. Saf. 65 (Mar): 27–34. https://doi.org/10.1016/j.strusafe.2016.12.002.
Seo, J., and L. Rogers. 2017. “Comparison of curved prestressed concrete bridge population response between area and spine modeling approaches toward efficient seismic vulnerability analysis.” Eng. Struct. 150 (Nov): 176–189. https://doi.org/10.1016/j.engstruct.2017.07.033.
Sichani, E., J. Padgett, and V. Bisadi. 2017. “Probabilistic seismic analysis of concrete dry cask structures.” Struct. Saf. 73 (Jul): 87–98. https://doi.org/10.1016/j.strusafe.2018.03.001.
Simpson, W., J. Peplinski, P. Koch, and J. Allen. 2001. “Metamodels for computer-based engineering design: Survey and recommendations.” Eng. Comput. 17 (Jul): 129–150. https://doi.org/10.1007/PL00007198.
Sudret, B. 2012. “Meta-models for structural reliability and uncertainty quantification.” In Proc., 5th Asian-Pacific Symp. Structural Reliability (APSSRA 2012). Wuhan, China: Scientific Research Publishing.
Sudret, B., C. Mai, and K. Konakli. 2015. “Assessment of the lognormality assumption of seismic fragility curves using non-parametric representations.” Accessed May 4, 2015. https://arxiv.org/abs/1403.5481.
Tekie, P. B., and B. R. Ellingwood. 2003. “Seismic fragility assessment of concrete gravity dams.” Earthquake Eng. Struct. Dyn. 32 (14): 2221–2240. https://doi.org/10.1002/eqe.325.
Wang, G. G., and S. Shan. 2007. “Review of metamodeling techniques in support of engineering design optimization.” J. Mech. Des. 129 (4): 370–380. https://doi.org/10.1115/1.2429697.
Yu, J., X. Qin, and O. Larsen. 2014. “Uncertainty analysis of flood inundation modelling using GLUE with surrogate models in stochastic sampling.” Hydrol. Processes 29 (6): 1267–1279. https://doi.org/10.1002/hyp.10249.

Information & Authors


Published In

Journal of Structural Engineering
Volume 146, Issue 7, July 2020

History

Received: Feb 15, 2019
Accepted: Oct 29, 2019
Published online: Apr 23, 2020
Published in print: Jul 1, 2020
Discussion open until: Sep 23, 2020

Authors

Affiliations

Ph.D. Student, Dept. of Civil Engineering, Univ. of Sherbrooke, Sherbrooke, QC, Canada J1K 2R1. ORCID: https://orcid.org/0000-0003-4463-7735
Jamie E. Padgett, A.M.ASCE
Associate Professor, Dept. of Civil Engineering, Rice Univ., Houston, TX 77005.
Professor, Dept. of Civil Engineering, Univ. of Sherbrooke, Sherbrooke, QC, Canada J1K 2R1 (corresponding author). ORCID: https://orcid.org/0000-0001-8111-8614. Email: [email protected]
