Open access
Technical Papers
Sep 29, 2017

Fragility Curves for Assessing the Resilience of Electricity Networks Constructed from an Extensive Fault Database

Publication: Natural Hazards Review
Volume 19, Issue 1

Abstract

Robust infrastructure networks are vital to ensure community resilience; their failure leads to severe societal disruption and they have important postdisaster functions. However, as these networks consist of interconnected, but geographically distributed, components, system resilience is difficult to assess. In this paper the authors propose that the catastrophe (CAT) risk modeling approach, which is primarily used to perform risk assessments of independent assets, be extended and adopted for these interdependent systems. To help achieve this, fragility curves, a crucial element of CAT models, are developed for overhead electrical lines using an empirical approach to ascribe likely failures due to wind storm hazard. To generate the empirical fragility curves, a dataset of over 12,000 electrical failures is coupled to the European reanalysis (ERA)-Interim dataset. The authors consider how the spatial resolution of the electrical fault data affects these curves, generating a fragility curve with an R² value of 0.9271 from low resolution fault data and improving this to an R² value of 0.9889 using higher spatial resolution data. Recommendations for deriving similar fragility curves for other infrastructure systems and/or hazards using the same methodological approach are also made. The authors argue that the developed fragility curves are applicable to other regions with similar electrical infrastructure and wind speeds, although some additional calibration may be required.

Introduction

Modern society’s reliance on critical infrastructure networks has become so great that large-scale failure of these systems can have catastrophic consequences, and even short-term disruptions can be serious (Murray and Grubesic 2007). Electricity networks usually operate as national- or even continental-scale systems, and so their failure can be felt over a wide geographic area (Andersson et al. 2005; Sforna and Delfanti 2006) and the consequences can be severe both for communities and for other networks that depend on them (Wilkinson et al. 2012). If the direct and indirect costs of damage due to a natural disaster are considered as a proxy for the impact that the disaster has, then the consequence of natural disasters for infrastructure has been steadily increasing. This is due to both increasing event frequency and our increasing reliance on the services provided by our infrastructure (Kreimer and Arnold 2000; Chang 2003). Therefore, methodologies to quantitatively assess the vulnerability of these systems are needed, allowing an accurate assessment of the impact of infrastructure failure. The current scarcity of risk assessment and impact tools makes it difficult for infrastructure owners and governments to make rational investments to increase the resilience of their infrastructure systems in the most efficient and effective manner.
Several models have been developed that attempt to quantitatively assess the vulnerability posed by natural hazards to infrastructure systems, such as those of Reed et al. (2009) and Salman and Li (2016). However, the most notable are the Hazard U.S. (HAZUS) methodologies (Schneider and Schauer 2006). These models have been developed for flood, hurricane, and earthquake hazards and are designed to produce loss estimates for use by federal, state, regional, and local governments in planning and preparing for these natural hazards (Kircher et al. 2006; Scawthorn et al. 2006; Vickery et al. 2006). The HAZUS methodology incorporates nearly all aspects of the built environment, from general building stock to critical infrastructure systems, and covers a wide range of loss types, including direct economic and social losses. To produce these estimates, extensive databases are embedded within the HAZUS framework, including information on the demographics of the population, square footage for different building occupancies, and the numbers and locations of bridges (HAZUS 2015a). However, by its own admission, the estimates made by this framework are likely to have uncertainties of “at best a factor of two or more” (HAZUS 2015a, p. 29). A number of factors contribute to this uncertainty, a major one being the damage models used to calculate expected damages. These have been produced primarily by expert judgement and so are likely to be subject to considerable bias and are unsuitable for application to storms outside the historical experience of the experts.
The risk of failure of infrastructure systems is difficult to assess because they are essentially a series of interconnected and interacting systems of assets, geographically distributed over regional, national, or even vast continental scales. However, a similar problem of risk assessment of geographically dispersed assets has been faced by the insurance industry, whose managers require methods to assess the risk to a geographically distributed portfolio of assets (e.g., building stock). The most common methodology for quantifying this risk is the catastrophe (CAT) risk modeling approach. CAT models assess risk by combining an exposure database with a hazard model, a vulnerability model, and an economic loss model (Fig. 1) (Grossi and Kunreuther 2005). To apply this methodology to an infrastructure system, a number of challenges must first be overcome. First, there are few vulnerability models for infrastructure systems. Vulnerability functions express the likelihood that assets at risk will sustain varying degrees of loss (e.g., in terms of direct damage) over a range of hazard intensities. In some cases, developing vulnerability relationships requires the use of: (1) fragility functions, expressing the likelihood of different levels of damage (i.e., damage states) sustained by a given building class over a range of hazard intensities; and (2) damage-to-loss functions, which convert the damage estimates to loss estimates. In the insurance industry, vulnerability functions are usually formed empirically by correlating insurance claim information from previous events with the estimated hazard intensity at the location of the asset. Examples of infrastructure fragility functions are sparse because infrastructure is typically self-insured and therefore there is no insurance claim information with which to calibrate such functions. Furthermore, as infrastructure is not insured, there are few incentives for risk analysts to produce fragility and vulnerability functions by other means (e.g., analytically or through expert judgement). Second, economic models must be developed to quantify loss. While this is relatively straightforward for a portfolio of independent assets (such as buildings), it is far more difficult for an infrastructure system where failure of one asset has implications for the whole system (and in some cases for interdependent infrastructure systems). It is not the cost of the damaged asset that is of primary importance, but rather the value of the service (e.g., electrical or water supply) that these systems provide.
Fig. 1. Typical catastrophe risk (CAT) modeling framework, consisting of an exposure database (exposure information), hazard model (event generation and intensity calculation), vulnerability model (damage estimation), and an economic loss model (consequence calculation) (adapted from Grossi and Kunreuther 2005; Risk Management Solutions 2008)
While applying a CAT-model-type approach to infrastructure has its difficulties, it also has its attractions because, unlike in traditional CAT models, the exposure database and the economic loss model have less uncertainty associated with them. For example, in the case of electricity networks, the utility operator usually has a very accurate asset database and understands exactly the exposure (loss of income or potential fines) it faces for failing to supply consumers. For the hazard model, the same event-based models used by the insurance industry can be adopted (although, as infrastructure networks may be geographically distributed on national, or even continental, scales, the area of the hazard considered may need to be increased). This leaves the vulnerability model as the last missing piece of a CAT model that can assess the resilience of infrastructure. In assessing the resilience of infrastructure networks, it is the value of the service that the network provides that is of primary concern, rather than the capital cost of the infrastructure itself. This means that resilience is a combination of the fragility of the assets and the time to repair failed assets. In this paper only fragility is considered: the time of repair is primarily an operational decision involving reserves of labor and material, which vary considerably between operators, whereas fragility depends on the specification of the asset and its maintenance program. While these also vary between operators, operators tend to adopt similar specifications for their assets.
This paper argues the case for developing catastrophe-type models for electrical distribution networks. A methodology is developed to produce empirical fragility curves for individual infrastructure assets using the example of overhead line (OHL) components of an electrical distribution system subjected to wind storm hazard. To achieve this, a large empirical database of fault information, namely the National Fault and Interruption Reporting Scheme (NaFIRS) database collected by United Kingdom electrical distribution network operators, is used to derive the fragility curves. The accuracy of these fragility curves is also explored by considering how it increases as the precision of the information used to derive them improves.

Vulnerability and Fragility Curve Methodologies and Data Requirements

Fragility curves define the relationship between the probability of exceeding a particular damage state (or performance level) of an infrastructure component and the intensity of the applied hazard, whereas vulnerability curves define the relationship between expected losses (such as economic losses) and this same intensity. Exceeding a particular damage state is often termed failure and is usually defined to occur when “the capacity of a structure to provide a designated level of service has been exceeded” (Schultz et al. 2010, p. 4); it does not necessarily imply the total, or catastrophic, failure of the structure. Fragility functions and vulnerability functions are therefore closely related and can usually be converted using an economic or other model. In this paper, the damage state of interest is damage to an electricity component sufficient to cause an interruption to electricity supply, regardless of duration or number of customers impacted. These damages can be expressed as a fragility function by expressing failures as a probability or, more usefully (as the authors have done), as a number of interruptions normalized by the length of overhead line in the asset database. As the components in an electricity network are the lines between electricity poles, the faults/km presented in this paper can easily be converted to probabilities by multiplying them by the average length of overhead line between poles (in this case approximately 80 m), i.e., by dividing the per-kilometer rate among the spans in each kilometer.
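As a minimal illustration of this conversion (a sketch assuming the approximately 80-m average span quoted above; the function name and example values are hypothetical):

```python
AVERAGE_SPAN_KM = 0.080  # approximate pole-to-pole span length (80 m)

def faults_per_km_to_span_probability(rate_per_km: float) -> float:
    """Convert a normalized fault rate (faults/km of overhead line) into an
    approximate failure probability for a single pole-to-pole span."""
    return rate_per_km * AVERAGE_SPAN_KM

# e.g., a rate of 0.05 faults/km corresponds to roughly
# 0.05 * 0.080 = 0.004 failures per ~80-m span
print(faults_per_km_to_span_probability(0.05))
```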
Fragility curves were initially introduced and developed for conducting seismic risk assessments at nuclear power plants (Kennedy et al. 1980; Kaplan et al. 1983), and the majority of publications developing (or using) fragility curves are still in the area of seismic risk assessment. These include studies by Basoz and Kiremidjian (1997), who developed fragility curves using observations of bridge damage following the Northridge earthquake that struck Los Angeles in 1994; Lin (2008), who developed fragility curves for frame structures exposed to seismic loads; and, more generally, Porter et al. (2007). While fragility curves for insured assets such as housing stock have been developed for the insurance industry for other hazards, including flooding and hurricanes [e.g., Ellingwood et al. (2004), who developed fragility curves for lightweight wood-frame structures exposed to hurricane winds], examples of vulnerability models for infrastructure systems are limited.
Fragility curves for individual assets within electrical power networks do exist, but tend to be very specific in their application and do not consider how the resolution of the data used to derive them affects the resultant curve. For example, Davidson et al. (2003) used empirical data to construct fragility curves for transformers subjected to hurricane events, and Reed et al. (2010) correlated wind speed data from Hurricane Rita with electricity outages. Winkler et al. (2010) and Ouyang and Dueñas-Osorio (2014) defined fragility curves for hurricane events, and Ryan et al. (2014) and Bjarnadottir et al. (2013) derived analytical fragility curves for wooden power poles subjected to wind storm events.
Fragility curves have also been developed in the HAZUS methodology for numerous infrastructure assets (including electrical substations and generation facilities), but these are limited to earthquake hazard only (HAZUS 2015b, c). While HAZUS presents a complete framework for assessing infrastructure risk, the majority of the relationships between hazard intensity and impact were obtained using expert judgement, which may be subject to significant biases. For example, the judgement of experts may not be transferable to other regions and may not extrapolate to events more severe than those they have witnessed.
There are four broad approaches to developing fragility curves, each with their own advantages and disadvantages (Jeong and Elnashai 2007).

Judgmental Approaches

This approach develops fragility curves based on some form of expert opinion, or judgement, and tends to be used only as a last resort when other approaches are not possible (e.g., because of a lack of obtainable observational data or models). The primary advantage of this approach is that it is not limited by the quantity and quality of available data; conversely, its primary disadvantage is the lack of data with which to validate results. This approach can be biased by the opinion of the expert; e.g., an expert’s opinion may be influenced by their individual experience, which itself is influenced by factors specific to the location where they work (Jeong and Elnashai 2007). If these factors are known it may be possible to correct the results; however, this can be difficult because of the number of influencing factors, which may differ for each expert, meaning that many biases go undetected.

Empirical Approaches

Using this approach, fragility curves are developed using observational or empirical data. These observations may be obtained through controlled experiments or may be collected in an ad hoc fashion (e.g., through the survey of structures after a catastrophic event). Databases containing this information may be difficult to analyze statistically. Observational data may be highly specific to the source situation, or location, and only contain data for events captured over the monitoring period. This is the most commonly used approach for evaluating fragility curves for mechanical, electrical, and electronic parts, as it is relatively easy to replicate parts and test them to failure (Schultz et al. 2010); however, it is generally limited to situations where a sufficient quantity of data can be collected.

Analytical Approaches

The analytical approach uses structural models that characterize the performance limits of the structure. In assessing the performance of the structure, several basic variables are used to determine the capacity of the structure to withstand the applied load and also the demand placed on the structure. This usually includes the material properties, the dimensions of the structure (or individual components), and may also include environmental variables, such as temperature and/or humidity, which may impact on the capacity of the structure. These variables are then used to construct the limit-state equation, which is expressed as the difference between capacity and demand. However, the structural characteristics, load-structure interactions, and structure-environment interactions can be difficult to model analytically and these models are difficult to validate.

Hybrid Approaches

A hybrid approach to developing fragility curves uses a combination of two or more of the previous three approaches in an attempt to overcome their limitations (Jeong and Elnashai 2007). Empirical approaches tend to be limited by the availability of observational data, while judgmental approaches can be biased by expert assessments and analytical approaches can be limited by modeling deficiencies or be computationally inefficient. To overcome these limitations, judgmental or analytical approaches can be combined with observational data from the empirical approach to improve the robustness of the fragility curve and produce confidence bounds on estimates of the probability of failure [as achieved by Singhal and Kiremidjian (1998)].
In this paper, an empirical approach is used to develop a series of fragility curves for overhead line components of an electrical distribution system. In this case, the observations are the recorded electrical faults held in a national fault reporting database (the NaFIRS database).

Structure of the United Kingdom Electricity System and NaFIRS Empirical Fault Data

In the United Kingdom, and many other regions of the world, there are two primary infrastructure systems that facilitate the transfer of electricity from where it is generated (e.g., coal, gas, or oil power plants, hydroelectric plants, or wind farms) to where it is required by industrial, commercial, or domestic consumers. The first is the electrical transmission network, owned by National Grid in England and Wales, which carries electricity from the generators to grid supply points situated at numerous locations around the country at high voltages of 275 and 400 kV (Fig. 2). This is achieved primarily through overhead lines supported by steel transmission towers, or through underground cables. To give a sense of the scale of this network, it consists of around 7,000 km of overhead lines and 22,000 transmission towers (National Grid 2014).
Fig. 2. Typical electricity supply chain for the United Kingdom (adapted from Energy Networks Association 2011)
From these grid supply points, the electricity is transferred to the distribution network, which is formed of nine regional grids responsible for delivering power to industrial, commercial, and domestic users. These networks again consist of a series of overhead lines, supported by either steel towers or timber poles, underground cables, and a range of substations at different voltages. They are owned and operated by the distribution network operators (DNOs), who are also responsible for the operation and maintenance of the individual assets (e.g., towers, cables, and substations) that form their networks. In the United Kingdom the DNOs are not responsible for selling electricity to consumers; this is done by the electricity suppliers.
United Kingdom electricity companies, which are private organizations, collect data on faults in the NaFIRS database as part of the regulatory requirements set by the government. This database contains details of over 200,000 wind-related faults on the United Kingdom electrical distribution system, including date, time, and number of consumers affected, among others. Data from the NaFIRS database from April 1, 2003 to December 31, 2010 is used to consider the impact of wind storm hazards on the overhead line components operated by one DNO in the United Kingdom; during this time there were approximately 12,000 faults. The data in this period has been audited and is therefore likely to be more accurate. Faults to the 132–11 kV overhead line components (i.e., from the grid supply point to the distribution substation level; see Fig. 2) are considered, rather than the lower voltage supplies to individual domestic and commercial properties, as the robustness of this section of the network is of high importance to the DNOs. Individual faults in this section have the potential to impact a large number of consumers, whereas faults occurring below the distribution substation are likely to impact a very small number of consumers.
While the NaFIRS database provides a comprehensive overview of the fault statistics from an energy operator’s perspective, it is missing two key components needed for accurate vulnerability analysis. First, the database does not record the intensity of the hazard that caused the fault (e.g., wind speed), and second, the exact location of the fault is not recorded. The fault location is recorded in terms of the DNO region (covering approximately 60,000 km²), which is further split into smaller geographic zones, referred to in this paper as areas (covering approximately 2,000 km²). Therefore, several assumptions are made in this paper in order to obtain the likely wind speed that caused each individual fault and to assess the extra precision that accurate fault location information can provide.

Wind Hazard Model

Following the traditional approach used in probabilistic CAT modeling, hazard modeling consists of: (1) simulating representative, or stochastic, catastrophic events (e.g., earthquakes, storms, precipitation) in time and space, i.e., a database of scenarios (event generation); and (2) assessing the resulting hazard intensity (e.g., intensity of ground motion, flood depth and velocity, wind speed, etc.) across a geographical area at risk by propagating a given event across the affected region (local intensity calculation). Each event is usually defined by a specific magnitude (i.e., its severity), location, and probability of occurrence (or event rate) based on historical data and on scientific understanding of the highly complex physics of natural hazards (to supplement the scarcity of historical data). To produce fragility curves it is then necessary to convert the magnitude of an event with a particular probability of occurrence into intensities over the area of interest. For this study, as the wind is always blowing (although the wind speed may be negligible), it is necessary to define the existence and duration of a storm event and a method of producing relevant wind intensities. The high spatial variability of wind, and the number of factors that drive this variability, make it difficult to produce a simple equation for the wind intensity at different locations due to individual storms; instead, an alternative approach is used.
Hourly observations of wind are recorded at a number of locations by the U.K. Met Office. These records comprise both hourly mean wind and hourly maximum gust data, where the gust is averaged over a 3-s interval. However, not all of these records are simultaneous or continuous, and there are often long periods with no data. There are also large areas of the United Kingdom that are not covered by an observation station (e.g., the west of Scotland and the majority of Wales).
Therefore, the observed data is supplemented with a downscaling of the European reanalysis (ERA)-Interim dataset (Dee et al. 2011) (Fig. 3). Reanalysis data are produced by forecast models and data assimilation systems that reanalyze archived observations; they therefore describe the recent history of the atmosphere, land surface, and oceans on a regular grid (in the case of ERA-Interim, approximately 80 km). This has the added advantage of producing proxy data where there may be no observations (i.e., for each grid square). The downscaling used in this paper takes the required atmospheric lateral boundary conditions, sea surface temperatures, and sea ice cover over ocean surfaces to drive an evaluation run of a high-resolution (12 km) atmospheric model, RACMO22 (Meijgaard et al. 2012), providing the 10-m maximum gust (over 3 s) at 6-h intervals. The ERA-Interim boundary conditions can be considered to be of very high quality (Dee et al. 2011), particularly in the northern hemisphere extratropics where reanalysis uncertainty is negligible (Brands et al. 2013). The downscaling is achieved using RACMO22 forced by ERA-Interim data from 1979 to 2011, from the CORDEX model ensemble run over Europe (Jacob et al. 2013; Kotlarski et al. 2014). This provides local estimates of wind speed suitable for impact studies [see, e.g., Fowler et al. (2007) for a discussion of downscaling], particularly where observations are not available. Wilkinson et al. (2014) established that the bias between observations and the RACMO22 output is small: below 3 m/s for all quantiles of the distribution. The locations of the RACMO22 and ERA-Interim grid points and of the observation stations are shown in Fig. 3. For further information the reader is directed to Wilkinson et al. (2014).
Fig. 3. Locations of United Kingdom wind observation stations (dark red dots) and grid cell corners for ERA-Interim reanalysis (black crosses) and the 12-km RACMO22 Regional Climate Model (black dots) (adapted from Wilkinson et al. 2014)

Development of Vulnerability Model

The wind hazard model is now brought together with the exposure database and fault data to develop fragility curves for overhead line components of electrical distribution infrastructure. The authors have chosen this as an example, but could equally have chosen another infrastructure system or hazard to demonstrate the new methodology. An assessment is also made of how the accuracy of the fragility curves improves as the precision (or resolution) of the data used to derive them increases. Unlike traditional fragility curves, which present the impact to infrastructure assets in terms of the probability of failure, the curves developed here present the number of assets failed per kilometer due to the hazard. This is a more useable and understandable measure for infrastructure owners and operators and, as noted earlier, can be converted to probabilities by multiplying by the average length of overhead line between poles (in this case approximately 80 m).
The data is aggregated at three different resolutions, but approximately the same methodology is used for each. In all methods, individual wind storms are first identified based on a threshold value of wind speed. Numerous studies have considered the identification of threshold values for severe weather events, including wind storms. The majority have focused on the analysis of indices of climate extremes based solely on observational or reanalysis data, including those of Klawa and Ulbrich (2003) and Bonazzi et al. (2012). Other studies have defined and classified severe weather events by investigating both the magnitude of the event and the damage caused to infrastructure. For example, Vajda et al. (2013) defined three different threshold parameters for severe weather events on the basis of the observed level of damage in previously recorded events (e.g., minor, mild, and severe damage). In this paper, the lower damage threshold defined by Vajda et al. (2013) is used to identify wind storm events: a wind storm occurs when the wind speed exceeds 17 m/s. Within each individual wind storm, the number of faults that occurred is identified, and these faults are deemed to have been caused by the maximum wind speed in that storm event (Fig. 4).
Fig. 4. Example of the identification of individual wind storms and the faults occurring within their time frame; in this example there are two wind storms, the first shown by the blue line (occurring between 33 and 147 h) and the second by the green line (occurring between 253 and 280 h), identified when the wind speed crosses the 17 m/s threshold line (dotted line); the first wind storm causes a total of 50 faults (shown by the blue triangles), which are assumed to have been caused by the maximum 29 m/s wind speed recorded in this storm; in the second wind storm (green), nine faults are caused by a wind speed of 19 m/s
The authors acknowledge that not all of the faults during a storm event will be caused by the maximum wind speed. However, by analyzing a number of storms in the dataset, it has been found that the majority of faults occur within the same 6-h time period as the maximum wind speed of the storm or within its short-term aftermath (when the affected components are likely to have been significantly weakened by the high wind speeds earlier in the storm event).
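The storm identification and fault attribution step described above can be sketched as follows (a minimal illustration, assuming the wind and fault records are held as pandas Series on a common 6-h time index; the function and variable names are hypothetical and do not represent the authors’ actual code):

```python
import pandas as pd

def identify_storms(wind: pd.Series, faults: pd.Series,
                    threshold: float = 17.0) -> pd.DataFrame:
    """Segment a wind time series into storms and attribute faults.

    wind   -- gust speed (m/s) at regular (e.g., 6-h) intervals
    faults -- number of faults recorded in each interval (same index)
    A storm is a contiguous run of intervals with wind > threshold; all
    faults within the storm are attributed to its maximum wind speed.
    """
    above = wind > threshold
    # Label each contiguous above-threshold run with a storm number
    storm_id = (above & ~above.shift(fill_value=False)).cumsum().where(above)

    storms = []
    for sid in storm_id.dropna().unique():
        idx = storm_id.index[storm_id == sid]
        storms.append({"peak_wind": wind.loc[idx].max(),
                       "n_faults": int(faults.loc[idx].sum())})
    return pd.DataFrame(storms)
```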
In the first method, low resolution, publicly available (although potentially at a cost) wind data is used to identify the maximum wind speed occurring in the DNO region during a wind storm event. In the second method, the 12-km resolution RACMO22 wind data is used to estimate the failure wind speed for faults in individual areas. This is achieved by creating an individual time series for each of the areas and selecting the maximum wind speed in each storm event. Finally, in the third method, the highest resolution data is used to obtain the wind speeds associated with faults at the primary substation circuit where each fault occurred. This information is also used to split the faults into those occurring in urban and those in rural areas, to determine whether basic land use information can improve the precision of the fragility curves. For all three approaches, an overview of the detailed methodology and the data sets used to derive the fragility curves is given. All fragility curves are presented as plots of the mean number of faults/km against the intensity of the hazard (e.g., wind speed). For an introduction to fragility curves the reader is directed to Schultz et al. (2010), and for detailed guidance on developing them to Porter et al. (2007) or Rossetto et al. (2014).

Method 1: Low Resolution Approach

In the first method, fragility curves are calculated using information that would be readily obtainable by most organizations. All meteorological stations owned by the U.K. Met Office within the DNO region are identified (a total of 31 stations) and a time series of wind maxima is constructed using the maximum wind speed recorded at each time interval. The 12-km RACMO22 atmospheric model output at the grid cell nearest to the relevant station is used to fill in any missing data in the observations. For the fault information, all of the fault data within the NaFIRS database for the DNO in question from April 1, 2003 to December 31, 2010 is used.
To construct the fragility curve, the method previously discussed (to identify individual wind storms and the number of faults associated with each) is used, and the maximum wind speed in each wind storm is binned to the nearest integer, enabling the mean number of faults caused by each wind speed value to be calculated. Using this method, a total of 766 individual wind storms is obtained; the average for each integer value of wind speed is plotted, and a trend line is fitted to obtain the shape of the fragility curve, as shown in Fig. 5. Note that wind storms that did not result in any recorded faults are included in this average and in subsequent calculations.
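The binning and curve fitting step can be sketched as follows (assuming a table of storms such as that produced by the earlier sketch; fitting the power law in log-log space mirrors the behavior of a standard spreadsheet power-law trend line, although the authors’ exact fitting procedure is not stated):

```python
import numpy as np

def fit_power_law_fragility(peak_wind, faults_per_km):
    """Bin storm peak wind speeds to the nearest integer, average the
    normalized fault counts per bin (zero-fault storms included), and fit
    a power law y = a * x**b through the bin means via a log-log fit."""
    peak_wind = np.asarray(peak_wind)
    faults_per_km = np.asarray(faults_per_km, dtype=float)

    bins = np.rint(peak_wind).astype(int)
    speeds = np.unique(bins)
    means = np.array([faults_per_km[bins == s].mean() for s in speeds])

    ok = means > 0                      # a log-log fit needs positive means
    b, log_a = np.polyfit(np.log(speeds[ok]), np.log(means[ok]), 1)
    return np.exp(log_a), b             # a and b in y = a * x**b
```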
The primary advantage of this method is that only two statistics for each wind storm hazard are required in order to calculate the fragility curve, namely the maximum wind speed of each storm and the number of faults that the storm caused. This information is usually publicly available and relatively easy to obtain.
The fragility curve for this method is shown in Fig. 5, in terms of the number of faults recorded normalized by the length of overhead line cables (in km). The figure shows that overhead lines are generally resilient to wind storm hazards, experiencing a very small number of faults (and a small number of affected customers) when subjected to wind speeds of less than 20 m/s. However, they become more susceptible to damage as the wind speed increases beyond 30 m/s, showing an increase in the number of faults.
Fig. 5. Fragility curves using Method 1: (a) the maximum wind speed in each individual wind storm, binned to the nearest integer, against the number of faults recorded in each storm (gray dots) and the average number of faults occurring for storms of each given intensity (black dots); a power-law trend line has been fitted through the average number of faults (black line) and the R² value is shown; (b) for clarity, the individual data points (gray dots) have been removed; in both graphs the number of faults recorded in each wind storm has been normalized by the total length of overhead line components in the DNO (in kilometers)
This information can be combined with the fault data to generate the graph shown in Fig. 6, which plots the proportion of storms causing a fault for each integer value of wind speed. The figure shows that storms with a maximum wind speed greater than 21 m/s are almost certain to cause at least one fault to overhead line infrastructure. Note that there are only a few wind storms with a maximum wind speed greater than 30 m/s in the period from 2003 to 2011.
Fig. 6. Plot of the proportion of wind storms when a fault occurred; also shown are the frequency of wind storms for each maximum value of storm wind speed
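The proportion plotted in Fig. 6 can be computed from the same storm table (a sketch under the same assumptions and hypothetical names as above):

```python
import numpy as np

def storms_causing_faults(peak_wind, n_faults):
    """For each integer wind-speed bin, return the proportion of storms
    with at least one recorded fault and the number of storms in the bin."""
    bins = np.rint(np.asarray(peak_wind)).astype(int)
    n_faults = np.asarray(n_faults)
    return {int(s): ((n_faults[bins == s] > 0).mean(),  # proportion with faults
                     int((bins == s).sum()))            # storm count in bin
            for s in np.unique(bins)}
```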
A small proportion of faults (10%) occurs below the 17 m/s wind storm threshold; these are therefore not included in the calculation of the fragility curves. Such faults could be attributable to errors in the database, human error at the time of recording the fault, or the RACMO22 model not capturing localized wind speeds, among other causes.
The standard deviation of the number of faults occurring in each wind storm is also considered (Fig. 7). This measure indicates a large scatter in the results, particularly for wind storms with a wind speed greater than 25 m/s. Thus, although on average more faults occur at higher wind speeds, there is also more uncertainty around the exact number of faults recorded. Note that the standard deviation for the wind storms with a higher mean number of faults may be skewed because of their higher mean values and also the smaller number of data points (as indicated in Fig. 6).
Fig. 7. Plot of the standard deviation for the number of faults occurring for each wind storm when the maximum value of wind speed is binned to the nearest integer

Method 2: High Resolution Approach

In this approach, an investigation is undertaken to determine how more precise weather and fault information may be used to improve the fragility curves. While the fault information in the NaFIRS database is accurate, it has relatively low spatial precision: a fault is recorded as occurring somewhere within an area of the DNO region (approximately 2,000 km²). While this smaller area is recorded in the NaFIRS database, the location and boundary of the area must be obtained from the DNOs. This location data has been obtained for the case study DNO (consisting of 32 areas) and is now used to provide more accurate identification of the maximum wind speed associated with each fault.
In this instance, a time series is created for each of the 32 areas within the study DNO, and the meteorological station (owned by the U.K. Met Office) closest to the spatial center of each area is identified. The closest grid cell from the RACMO22 output is used to create the time series (because the meteorological station datasets are incomplete). The maximum wind speed and number of faults for each storm are then identified, before being normalized by the length of overhead lines in each area. This enables the data to be combined into one fragility curve for the whole DNO area. The authors do this as the aim is to create one fragility curve for all overhead line components, not a specific curve for the components in each small area. As in the previous method, the wind data is binned to the nearest integer to enable the mean number of faults for each wind speed value to be calculated. This average is then plotted, and a trend line is fitted to obtain the fragility curve, shown in Fig. 8. Using this method, 9,168 wind storms are recorded, significantly more than the 766 recorded using Method 1. This is because a time series is used for each area and not for the whole DNO region; consequently, the same storm may be counted more than once if it occurred over more than one area. Note that the maximum wind speed recorded using Method 1 was 38 m/s and the maximum recorded using Method 2 was 37 m/s: there was one instance of a 38 m/s storm in Method 1, recorded at one RACMO22 grid cell, but this grid cell is not used in Method 2 as it is not the closest to the center of an area.
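A sketch of the Method 2 pooling step, reusing the identify_storms sketch given earlier (the dictionary-based inputs are illustrative assumptions, not the authors’ data structures):

```python
import pandas as pd

def pooled_fragility_data(area_wind, area_faults, area_ohl_km):
    """Build one pooled storm table across all areas (Method 2 sketch).

    area_wind   -- {area: pd.Series of wind speed on a 6-h time index}
    area_faults -- {area: pd.Series of fault counts, same index}
    area_ohl_km -- {area: overhead-line length (km) in that area}
    Each area contributes its own storms, so one meteorological event may
    be counted once per area it crosses (hence 9,168 storms vs. 766).
    """
    tables = []
    for area, wind in area_wind.items():
        storms = identify_storms(wind, area_faults[area])  # earlier sketch
        storms["faults_per_km"] = storms["n_faults"] / area_ohl_km[area]
        tables.append(storms)
    return pd.concat(tables, ignore_index=True)
```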
Fig. 8. Plot of the average number of faults occurring for wind storms against the maximum wind speed recorded in each storm, binned to the nearest integer, and calculated using Method 1 (open dots) and Method 2 (filled dots); the faults have been calculated in terms of faults per km of overhead lines; a power-law trend line has been fitted through both data sets and the R² values are shown
The fragility curve calculated using this method, plotted in Fig. 8, is similar to that calculated using Method 1. On average, a low number of faults is recorded for overhead line components at wind speeds lower than 20 m/s, but the number of faults increases as wind speeds increase, particularly above 30 m/s. However, there are clearly identifiable differences between the fragility curves obtained using each method. Method 1 appears to significantly underestimate the fragility of overhead line components (as there are fewer failures using this method for the lower wind speed storms). This is because Method 1 overestimates the failure wind speeds of the components: it assumes that each fault is caused by the maximum wind speed in the whole DNO region, and so does not account for the range of wind speeds occurring throughout the region (most of which will be lower). More precise spatial information corrects for this, and from the results it can be concluded that such information is required to produce accurate fragility curves.
For this method, the standard deviation of the number of faults occurring in each wind storm is also considered, shown in Fig. 9. As for Method 1, the standard deviation increases as the maximum wind speed increases. The standard deviations calculated for Method 2 are higher than those obtained for Method 1. This counterintuitive result is likely because Method 1 averages the results over a much larger area and therefore smooths out local effects, such as variations in the wind climate resulting from localized topography, variations in the age of the infrastructure, and/or variations in land usage. The important point is that although Method 2 has a greater dispersion about the mean, the mean is a much better fit to the data and therefore results in better estimates of failures. Note that, as in Method 1, there are only a few instances of high wind speeds recorded, and therefore the standard deviations for these wind storms may be skewed because of the smaller number of data points.
Fig. 9. Plot of the standard deviation for the number of faults occurring for each wind storm when the maximum value of wind speed is binned to the nearest integer, calculated using Method 1 (open dots) and Method 2 (filled dots)

Method 3: Land Usage Approach

One final approach is considered to determine what improvement, if any, incorporating simple land usage information makes to the precision of the fragility curves for overhead line components. To achieve this, more accurate spatial information is needed in order to categorize each overhead line component as being in either an urban or a rural environment. Individual DNOs may keep more accurate fault/spatial information; in this case, data that links each individual fault to its primary substation (of which there are approximately 900 in the whole DNO region) has been obtained. The land use for each substation is categorized by plotting its location in ArcGIS and using CORINE land usage data to visually identify whether it is located in an urban or rural environment. The same time series as in Method 2 is used (and therefore the same storm information), but the faults occurring in each storm are divided into those in urban and those in rural environments. Accurate electrical circuit information is not obtainable, and therefore the length of overhead line components supported by each primary substation cannot be calculated directly. Instead, Thiessen polygons are used to identify the closest primary substation to each overhead line component, and this is used to approximate the length of overhead line components supported by each substation (it is likely that substations support the overhead lines closest to them). This information is used to normalize the number of faults for each wind storm (for the two land use categories) before combining the data into two fragility curves (i.e., for urban and rural land usage). Again the wind speed is binned to the nearest integer and the mean is calculated, which is plotted in Fig. 10.
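The Thiessen polygon step can be approximated with a nearest-neighbor query, as sketched below (assuming substation and line-segment coordinates in a projected coordinate system such as the British National Grid; the names are hypothetical, and assigning each segment by its midpoint is a simplification of the full polygon intersection):

```python
import numpy as np
from scipy.spatial import cKDTree

def ohl_km_per_substation(substation_xy, segment_midpoints_xy, segment_km):
    """Approximate the overhead-line length served by each primary substation.

    Assigning each line segment to its nearest substation (by midpoint) is
    equivalent to intersecting the network with Thiessen (Voronoi) polygons
    constructed around the substations.
    """
    tree = cKDTree(substation_xy)                  # substations as (x, y) pairs
    _, nearest = tree.query(segment_midpoints_xy)  # closest substation per segment
    lengths = np.zeros(len(substation_xy))
    np.add.at(lengths, nearest, segment_km)        # accumulate lengths per substation
    return lengths
```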
Fig. 10. Plot of the average number of faults occurring for wind storms against the maximum wind speed recorded in each storm, binned to the nearest integer, calculated using Method 2 (open dots) and Method 3 (filled dots); a power-law trend line has been fitted through both data sets and the R² values are shown
Fig. 10 shows that there is little difference between the fragility curves obtained using this method (splitting the faults by land use) and Method 2 (using the higher resolution wind speed data). This indicates that the location of an overhead line component in an urban or rural environment (as defined by CORINE) makes little difference to its fragility under an applied wind speed. This may seem surprising, as it could be expected that more faults would occur in a rural environment, with secondary impacts (e.g., falling trees) exacerbating the number of faults; however, this is not supported by the evidence when using CORINE land use data. Similarly, the dispersion in the results for both Method 3 datasets is similar to that for Method 2, and therefore a pooled Method 3 dataset offers no improvement over Method 2. This is likely because of the very localized nature of wind and of tree cover (which is the primary cause of faults), or because the dispersion in the assets themselves is of the same or greater order than the variability in the environmental hazard.
Again, the standard deviation of the number of faults occurring in each wind storm is considered, plotted in Fig. 11. The figure shows more scatter for Method 3, particularly for higher values of wind speed. This difference is likely because Method 3 is more sensitive to local effects, including localized topography, than Method 2.
Fig. 11. Plot of the standard deviation for the number of faults occurring for each wind storm when the maximum value of wind speed is binned to the nearest integer, calculated using Method 2 (open dots) and Method 3 (filled dots)

Conclusions

Hazard U.S. (HAZUS) has developed a multihazard loss estimation methodology for assessing potential losses to infrastructure and other parts of the built environment (e.g., building stock). By its own admission, the estimates made by this framework are likely to have an uncertainty of “at best a factor of two or more” (HAZUS 2015a). A number of factors contribute to this uncertainty, a major one being the damage models used to calculate expected damages. These have been produced primarily through expert judgement and so are subject to considerable bias and are unsuitable for application to storms that are not in the historical record of the expert.
In this paper the authors have developed a new methodology for calculating fragility curves for overhead line components of electrical distribution networks subjected to wind storm hazard. While this paper is restricted to this asset, the method presented could equally have been applied to another infrastructure system subjected to the same hazard. The fragility curves developed in this paper could be incorporated into future catastrophe risk models of electricity infrastructure (e.g., Grossi and Kunreuther 2005) to give an indication of the response of a system to an applied wind storm hazard.
The method uses a dataset of over 12,000 wind-related faults in electricity distribution infrastructure, together with the associated date, time, number of consumers involved, and duration of each fault. These data have been correlated with a high-resolution reanalysis model to obtain fragility curves for electricity distribution infrastructure subjected to wind storms.
These fragility curves are developed using three methodologies to assess how their precision improves as the resolution of the data used to derive them increases. It was found that precise spatial information is required to produce accurate fragility curves and that using imprecise information can lead to an underestimation of the fragility of infrastructure components. It was also found that using simple land usage information to form separate fragility curves for faults in urban and rural environments does not significantly alter the resultant curves. This may be attributable to the very localized nature of wind and of tree cover (which is the main cause of faults), or to the dispersion in the assets’ fragility being of the same or greater order than the variability in the environmental hazard. Further improving the fragility curves presented in this paper would require very accurate fault location information, higher resolution wind and land use models, and/or more information on the assets (such as age); however, these are currently not available.
Although this is a United Kingdom dataset, the authors argue that the curves could be used for any region with similar infrastructure and for wind speeds up to 37 m/s (although there is greater uncertainty associated with higher wind speeds). As all but a handful of the failures in this large dataset involve simple electricity cables strung between power poles, even if the lines were of a different specification it would be a relatively simple process to calibrate the presented fragility curves for different overhead line specifications (although it is accepted that this would increase the uncertainty associated with the curves).

Acknowledgments

This research was funded by the Engineering and Physical Sciences Research Council (EPSRC), through an EPSRC Doctoral Prize Fellowship awarded to S. Dunn, and through the RESNET project (Reference: EP/I035781/1) for the wind modeling and NERC NE/N012992/1 for fragility modeling. The authors would like to acknowledge the FP7 Enabling Climate Information Services for Europe (ECLISE) Project (Project Reference: 265240), funded under the Environment Programme of the European Commission. H.J. Fowler is funded by the Wolfson Foundation and the Royal Society as a Royal Society Wolfson Research Merit Award (WM140025) holder. Their support is gratefully received and acknowledged. The authors would like to acknowledge the Energy Networks Association for allowing access to the NaFIRS database and also for their insights into the management and configuration of energy distribution systems; particular thanks go to David Whensley. The authors would like to acknowledge Western Power Distribution for providing additional network information.

References

Andersson, G., et al. (2005). “Causes of the 2003 major grid blackouts in North America and Europe, and recommended means to improve system dynamic performance.” IEEE Trans. Power Syst., 20(4), 1922–1928.
ArcGIS 9.3 [Computer software]. ESRI, Redlands, CA.
Basoz, N., and Kiremidjian, A. S. (1997). “Risk assessment of bridges and highway systems from the Northridge earthquake.” 2nd National Seismic Conf. on Bridges and Highways, Transportation Research Board, Washington, DC.
Bjarnadottir, S., Li, Y., and Stewart, M. G. (2013). “Hurricane risk assessment of power distribution poles considering impacts of a changing climate.” J. Infrastruct. Syst., 12–24.
Bonazzi, A., Cusack, S., Mitas, C., and Jewson, S. (2012). “The spatial structure of European wind storms as characterized by bivariate extreme-value copulas.” Nat. Hazard. Earth Syst. Sci., 12(5), 1769–1782.
Brands, S., Herrera, S., Fernández, J., and Gutiérrez, J. M. (2013). “How well do CMIP5 earth system models simulate present climate conditions in Europe and Africa? A performance comparison for the downscaling community.” Clim. Dyn., 41(3–4), 803–817.
Chang, S. E. (2003). “Evaluating disaster mitigations: Methodology for urban infrastructure systems.” Nat. Hazard. Rev., 186–196.
CORINE [Computer software]. European Environment Agency, Copenhagen, Denmark.
Davidson, R., Liu, H., Sarpong, I. K., and Sparks, P. (2003). “Electric power distribution system performance in Carolina hurricanes.” Nat. Hazard. Rev., 36–45.
Dee, D. P., et al. (2011). “The ERA-Interim reanalysis: Configuration and performance of the data assimilation system.” Q. J. R. Meteorolog. Soc., 137(656), 553–597.
Ellingwood, B., Rosowsky, D., Li, Y., and Kim, J. (2004). “Fragility assessment of light-frame wood construction subjected to wind and earthquake hazards.” J. Struct. Eng., 130(12), 1921–1930.
Energy Networks Association. (2011). “Electricity networks climate change adaptation report.” London, 1.
Fowler, H. J., Ekstrom, M., Blenkinsop, S., and Smith, A. P. (2007). “Estimating change in extreme European precipitation using a multimodel ensemble.” J. Geophys. Res. Atmos., 112(18), 2156–2202.
Grossi, P., and Kunreuther, H. (2005). Catastrophe modeling: A new approach to managing risk, Springer, New York.
HAZUS (Hazard U.S.). (2015a). Hazus—MH 2.1, earthquake model, technical manual, FEMA, Washington, DC.
HAZUS (Hazard U.S.). (2015b). Hazus—MH 2.1, flood model, technical manual, FEMA, Washington, DC.
HAZUS (Hazard U.S.). (2015c). Hazus—MH 2.1, hurricane model, technical manual, FEMA, Washington, DC.
Jacob, D., et al. (2013). “EURO-CORDEX: New high-resolution climate change projections for European impact research.” Regional environmental change, Springer, Berlin, 1–16.
Jeong, S. H., and Elnashai, A. S. (2007). “Probabilistic fragility analysis parameterized by fundamental response quantities.” Eng. Struct., 29(6), 1238–1251.
Kaplan, S., Perla, H. F., and Bley, D. C. (1983). “A methodology for seismic risk analysis of nuclear power plants.” Risk Anal., 3(3), 169–180.
Kennedy, R. P., Cornell, C. A., Campbell, R. D., Kaplan, S., and Perla, H. F. (1980). “Probabilistic seismic safety study of an existing nuclear power plant.” Nucl. Eng. Des., 59(2), 315–338.
Kircher, C. A., Whitman, R. V., and Holmes, W. T. (2006). “HAZUS earthquake loss estimation methods.” Nat. Hazard. Rev., 45–59.
Klawa, M., and Ulbrich, U. (2003). “A model for the estimation of storm losses and the identification of severe winter storms in Germany.” Nat. Hazard. Earth Syst. Sci., 3(6), 725–732.
Kotlarski, S., et al. (2014). “Regional climate modeling on European scales: A joint standard evaluation of the EURO-CORDEX RCM ensemble.” Geosci. Model Dev., 7(4), 1297–1333.
Kreimer, A., and Arnold, M. (2000). “World Bank’s role in reducing impacts of disasters.” Nat. Hazard. Rev., 37–42.
Lin, J. H. (2008). “Seismic fragility analysis of frame structures.” Int. J. Struct. Stab. Dyn., 8(3), 451–463.
Meijgaard, E. V., et al. (2012). “Refinement and application of a regional atmospheric model for climate scenario calculations of Western Europe.” Royal Netherlands Meteorological Institute, De Bilt, Netherlands.
Murray, A., and Grubesic, T. (2007). Critical infrastructure: Reliability and vulnerability, Springer, New York.
National Grid. (2014). “Transmission network.” ⟨http://www2.nationalgrid.com/uk/services/land-and-development/planning-authority/shape-files/⟩ (Jan. 13, 2015).
Ouyang, M., and Dueñas-Osorio, L. (2014). “Multi-dimensional hurricane resilience assessment of electric power systems.” Struct. Saf., 48, 15–24.
Porter, K., Kennedy, R., and Bachman, R. (2007). “Creating fragility functions for performance-based earthquake engineering.” Earthquake Spectra, 23(2), 471–489.
Reed, D. A., Kapur, K. C., and Christie, R. D. (2009). “Methodology for assessing the resilience of networked infrastructure.” IEEE Syst. J., 3(2), 174–180.
Reed, D. A., Mark, D. P., and Julie, M. W. (2010). “Energy infrastructure damage analysis for hurricane Rita.” Nat. Hazard. Rev., 102–109.
Risk Management Solutions. (2008). The review: A guide to catastrophe modelling, London.
Rossetto, T., Ioannou, I., Grant, D. N., and Maqsood, T. (2014). “Guidelines for empirical vulnerability assessment.” GEM Foundation, Pavia, Italy.
Ryan, P. C., Stewart, M. G., Spencer, N., and Li, Y. (2014). “Reliability assessment of power pole infrastructure incorporating deterioration and network maintenance.” Reliab. Eng. Syst. Saf., 132, 261–273.
Salman, A. M., and Li, Y. (2016). “Assessing climate change impact on system reliability of power distribution systems subjected to hurricanes.” J. Infrastruct. Syst., 04016024.
Scawthorn, C., et al. (2006). “HAZUS-MH flood loss estimation methodology. I: Overview and flood hazard characterization.” Nat. Hazard. Rev., 60–71.
Schneider, P., and Schauer, B. (2006). “HAZUS—Its development and its future.” Nat. Hazard. Rev., 40–44.
Schultz, M. T., Gouldby, B. P., Simm, J. D., and Wibowo, J. L. (2010). Beyond the factor of safety: Developing fragility curves to characterize system reliability, U.S. Army Corps of Engineers, Washington, DC.
Sforna, M., and Delfanti, M. (2006). “Overview of the events and causes of the 2003 Italian blackout.” Proc., 2006 IEEE PES Power Systems Conference and Exposition, IEEE, New York, 301–308.
Singhal, A., and Kiremidjian, A. S. (1998). “Bayesian updating of fragilities with application to RC frames.” J. Struct. Eng., 922–929.
Vajda, A., Tuomenvirta, H., Juga, I., Nurmi, P., Jokinen, P., and Rauhala, J. (2013). “Severe weather affecting European transport systems: The identification, classification and frequencies of events.” Nat. Hazard., 72(1), 169–188.
Vickery, P., Lin, J., Skerlj, P. F., Twisdale, L. A., Jr., and Huang, K. (2006). “HAZUS-MH hurricane model methodology. I: Hurricane hazard, terrain, and wind load modeling.” Nat. Hazard. Rev., 7(2), 82–93.
Wilkinson, S. M., Alarcon, J. E., Mulyani, R., Whittle, J., and Chian, S. C. (2012). “Observations of damage to buildings from Mw 7.6 Padang earthquake of 30 September 2009.” Nat. Hazard., 63(2), 521–547.
Wilkinson, S. M., Fowler, H., Manning, L., and Dunn, S. (2014). ECLISE deliverable 4.07 of task T4.5: Estimates of extreme wind speed used for the design of buildings, Newcastle Univ., Newcastle upon Tyne, U.K.
Winkler, J., Dueñas-Osorio, L., Stein, R., and Subramanian, D. (2010). “Performance assessment of topologically diverse power systems subjected to hurricane events.” Reliab. Eng. Syst. Saf., 95(4), 323–336.

Information & Authors

Information

Published In

Natural Hazards Review
Volume 19, Issue 1, February 2018

History

Received: Jul 21, 2016
Accepted: May 31, 2017
Published online: Sep 29, 2017
Published in print: Feb 1, 2018
Discussion open until: Feb 28, 2018

Authors

Affiliations

Sarah Dunn, Ph.D.
Lecturer in Structural Engineering, School of Civil Engineering and Geosciences, Newcastle Univ., Newcastle NE1 7RU, U.K. (corresponding author). E-mail: [email protected]
Sean Wilkinson, Ph.D.
Reader in Structural Engineering, School of Civil Engineering and Geosciences, Newcastle Univ., Newcastle NE1 7RU, U.K.
David Alderson
Researcher in GIS, School of Civil Engineering and Geosciences, Newcastle Univ., Newcastle NE1 7RU, U.K.
Hayley Fowler
Professor of Climate Change Impacts, School of Civil Engineering and Geosciences, Newcastle Univ., Newcastle NE1 7RU, U.K.
Carmine Galasso, Ph.D.
Lecturer in Earthquake Engineering, Dept. of Civil Engineering, Environmental and Geomatic Engineering and Institute for Risk and Disaster Reduction, Univ. College London, London WC1E 6BT, U.K.
