Sep 15, 2009

Strategies for Managing the Consequences of Black Swan Events

Publication: Leadership and Management in Engineering
Volume 9, Issue 4

Abstract

The metaphor of the Black Swan refers to unpredictable events, such as those of September 11, 2001, that happen from time to time and have enormous consequences. The phrase originated in European philosophical discourse but became widely known with the publication of Nassim Nicholas Taleb's eponymous bestseller, The Black Swan: The Impact of the Highly Improbable. Civil engineers currently deal with the impact of extreme natural and man-made (accidental or malevolently intentional) hazards on critical facilities through a risk-based approach, where risk is a function of the likelihood of event occurrence and the resulting consequences. However, Black Swan events are not foreseeable through the usual statistics of correlation, regression, standard deviation, or return periods. Expert opinion is also of minimal use, since experts' experience is tainted by biases and constrained by the finite human life span. This inability to estimate the likelihood of occurrence for Black Swan events precludes the application of risk management. Therefore, developing strategies to manage their consequences is of paramount importance. The objective here is to discuss some past events of engineering relevance, the idiosyncrasies of human behavior that cause their warning signs to be missed, and management strategies for coping with the potential or actual consequences of such unforeseen, large-impact, hard-to-predict events.
There are certain events of enormous consequence that seemingly happen far too often and out of nowhere, cautioning one to view the calculus of engineering risk assessments with a dose of healthy skepticism. Before they actually occurred, the chain of events of September 11, 2001, the effects of Hurricane Katrina on New Orleans, and the catastrophic devastation of the Indian Ocean tsunami would either (1) likely not have appeared on anyone's radar screen or (2) have been dismissed as bizarre events of negligible probability and, therefore, unworthy of consideration in engineering assessments. Yet these events happened within the short span of one decade, and numerous post facto explanations have followed. The events may be rare, but considering their monstrous impact, it behooves civil engineers to put a searchlight on past inexplicable incidents, analyze why one is so often wiser only afterwards, and consider how vital engineering facilities can deal with such events.
The term “Black Swan” for unanticipated events has seeped into public consciousness since the publication of The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb (2007). The underlying idea goes back to the philosopher David Hume: the unexpected, an unlikely but not impossible catastrophe that no one ever seems to plan for; the things one does not know, or does not know that one does not know. Taleb expounds on Hume's old problem of induction by melding the information-theoretic principle that only improbable events can be highly informative with Karl Popper's principle of falsifiability to create the framework for his thesis.
Unlike deductive logic, the premises of inductive reasoning do not necessarily have a causal relationship with the conclusions. Thus, even if all observed swans are white, it does not follow that all swans are always white. An infinite series of observations would be required to prove the argument valid, whereas a single observation of a black swan negates it. As the principle of falsifiability explains, one cannot prove the truth of hypotheses empirically, only their falsity. If one were to test the hypothesis that “all swans are white,” enumerating white swans is not very informative, but the reality of a single black swan is enormously important. In the same spirit, information theory holds that if the probability of receiving a given message is close to one, the information conveyed by that message is close to zero.
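This last point can be made precise with Shannon's measure of self-information, I(x) = -log2 P(x). The following minimal sketch (the probabilities are illustrative only) shows how a near-certain message carries almost no information while a highly improbable one carries a great deal:

import math

def self_information_bits(p):
    """Shannon self-information, in bits, of an event with probability p."""
    return -math.log2(p)

# A message expected with near certainty is nearly uninformative...
print(self_information_bits(0.999))   # ~0.001 bits
# ...whereas a highly improbable message is highly informative.
print(self_information_bits(0.001))   # ~9.97 bits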
European philosophers debated the question extensively and concluded that black swans did not exist, until colonial explorers actually found one in Australia, giving rise to the phrase. Because Black Swan events are unpredictable from probabilistic or statistical considerations, strategies to cope with their consequences are of prime importance for engineers, managers, and response providers. The objective here is to discuss some such natural and man-made (accidental or malevolently intentional) events, their impact on engineered facilities, the reasons for the human inability to recognize them, and strategies to cope with their actual or potential consequences.

Why Do We Underestimate Black Swan Risks?

Behavioral scientists postulate that human experience is often tainted by personal biases and formed over a rather limited time span of observation compared with the recurrence period of most extreme events. Consider the experience of people in the Indian Ocean region, who had likely never observed or even heard of a tsunami, whether in normal discourse or in folklore. Except for a few experts, almost no one was aware that tsunamis were real and, in the absence of any past observed occurrence or experience, no one had visualized this unusual event. Residents of New Orleans had been through many storms before Hurricane Katrina, and nothing that bad had ever happened. In hindsight, it is clear that there was no engineering redundancy that would save the residents if a large enough storm dumped water in a short duration and the levees were breached. Could someone coming to work in the World Trade Center towers on the morning of September 11 have imagined the strange experiences of that day? Why, then, are engineers often so dismissive of such scenarios until after they occur?
It may be because engineers do not like uncertainty and ambiguity, focus on specifics instead of generalities, and crave explicit explanations. Such thinking is shaped by their training in linear logic, whereas nearly all Black Swan events involve complex causal relationships. Taleb identifies five peculiarities of human behavior responsible for blindness to Black Swans:
1. Humans tend to categorize, focusing on preselected data that reaffirm their beliefs rather than on any contradictions (confirmation bias);
2. Humans construct stories to explain events and see patterns in data where none exist, owing to an illusion of understanding (narrative fallacy);
3. Human nature is not programmed to imagine Black Swans;
4. Humans tend to ignore silent evidence and focus disproportionately on either failures or successes; and
5. Humans overestimate their knowledge and focus too narrowly on their own field of expertise (tunnel vision), ignoring other sources of uncertainty and mistaking concocted models for reality (ludic fallacy).
These peculiarities of human behavior are illustrated here with examples from science and engineering, as opposed to Taleb’s general historical and financial focus.

Confirmation Bias

Confirmation bias refers to the human tendency to notice and seek out whatever confirms one's beliefs or the prevailing dogma, and to ignore or undervalue contradictions. For a long time, astronomers kept fitting observations of planetary motion to Ptolemaic models of cycles and epicycles, until the Copernican revolution overthrew the old models. Even then, Copernicus's contemporaries rejected the new approach until Galileo's work on motion finally confirmed its validity. Similarly, Einstein had to move beyond accepted wisdom to arrive at his theory of relativity.

Illusion of Understanding or Narrative Fallacy

Narrative fallacy refers to a false sense of comfort drawn from limited observations, leading to wrong conclusions. Behavioral psychologists call this phenomenon “anchoring”: one takes observed events and projects them into the future in a straight line. A host of studies demonstrate this all-too-human tendency to assign patterns to random data and create descriptive narratives, with the result that attention fixes on the mundane and misses the extraordinary. The pitfalls of projecting from limited data were evident when design parameters were stretched beyond the limits of previous successful designs, contributing to the failure of the Tacoma Narrows Bridge. Likewise, NASA managers decided to launch the space shuttle Challenger in 30°F conditions despite O-rings designed for a 40°F to 90°F range, resulting in its catastrophic failure.

Inherent Human Nature

It has been observed that humans are not programmed to cope with the trap of anticipation: waiting for some important event that is expected to occur only infrequently. Nuclear plant operators are tasked with taking action during an elusive incident that never seems to happen. The Nuclear Regulatory Commission has found that days, months, and years of repeated nothingness can lead to incidents in which operators are found asleep on the job, most recently at the Peach Bottom power plant in Pennsylvania. In a July 2008 incident, three ballistic missile crew members fell asleep while holding classified launch code devices at Minot Air Force Base in North Dakota. Numerous similar cases can be cited in the airline and marine industries, where nothing out of the ordinary is observed for long periods and the deadly combination of fatigue and boredom ultimately leads to a sudden snap-through moment. Some of the most terrible eruptions have come from volcanoes that had shown no activity throughout Holocene time. These examples show that “risk never takes a holiday, even though it may appear excessively sleepy for long stretches.”

Silent Evidence

Silent evidence refers to the difference between the sample one constructs for analysis and actual reality, causing systematic errors that stem from how questions are posed or from biased sampling. Before boarding an airplane, a crash risk of one in 1,000 years somehow seems more acceptable than one out of 1,000 planes. Similarly, engineers who draw conclusions about the seismic behavior of a component by enumerating only the failed components in field surveys ignore the silent evidence provided by the survival of thousands of similar components.
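A small numerical sketch of this sampling effect (the figures below are invented purely for illustration): counting only the failures reported in a survey says nothing about the failure rate until the surviving population enters the denominator.

# Hypothetical post-earthquake survey figures, for illustration only.
failed = 12       # damaged components found and reported
survived = 4988   # similar components that performed without incident

# A failure-only tally is just a count; including the silent evidence
# of survivors turns it into an estimated failure fraction.
failure_fraction = failed / (failed + survived)
print(f"Estimated failure fraction: {failure_fraction:.4f}")   # 0.0024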

Ludic Fallacy

Humans have a tendency toward tunnel vision, focusing on the known sources of uncertainty and ignoring the complexity of reality. Because events that have not yet taken place cannot be accounted for, one lacks adequate information for prediction, particularly since a small variation in one variable can have a drastic impact (the butterfly effect of chaos theory). It is not the random uncertainty of probabilistic models (what Donald Rumsfeld called “known unknowns” and Taleb calls “Gray Swans”) but rather the epistemic uncertainty due to lack of knowledge (unknown unknowns; Black Swans) that is of prime concern. No probabilistic model based on in-the-box thinking can deal with out-of-the-box events.
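The butterfly effect mentioned above is easy to demonstrate. The sketch below iterates the logistic map in its chaotic regime (r = 4, a standard textbook example, not anything drawn from this article): two starting points differing by one part in a million produce completely unrelated trajectories within a few dozen steps.

# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
def iterate_logistic(x, r=4.0, steps=40):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# Two nearly identical initial conditions...
print(iterate_logistic(0.300000))
print(iterate_logistic(0.300001))
# ...yield values bearing no resemblance to each other: tiny input
# differences make long-range prediction impossible.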

Some Unusual Black Swan Events

The following catastrophic, out-of-the-ordinary events of recent years were impossible to predict before they occurred and would probably never have been considered in the risk assessment of critical engineering projects sited at those locations:
In 2006, an Indonesian company drilled a two-mile-deep exploration well in Java, triggering a gigantic oozing torrent of noxious hot mud at the rate of 1.7 million cubic feet per day, unstoppable for months. It eventually covered tens of square miles up to 20 feet deep, overwhelming barriers, destroying houses and towns, and forcing people to flee.
In 2007, a glacial lake in the Magallanes region of Chile suddenly disappeared after fissures in the ground were opened, most likely by earthquake tremors. Imagine such an occurrence for the water impounded behind a major dam.
The first incident had a major impact on population centers, whereas serious consequences from the second were thwarted only by the absence of engineering projects in the vicinity. “If a tree falls in a forest and no one is around to hear it, does it make a sound?” Hundreds of meteorites strike the earth every day, occasionally large ones, and even the cataclysmic Tunguska strike of 1908 in Siberia had no engineering consequences. But an ill-timed meteorite strike on a critical nuclear facility could have huge consequences. A similar situation of being in the wrong place at the wrong time was experienced by people in the littoral areas of the Indian Ocean during the monstrous December 2004 tsunami. And the effect of Hurricane Katrina on New Orleans in August 2005 was catastrophic, as the storm surge caused major breaches in the drainage canal levees, flooding 80 percent of the city with water as deep as 15 feet in many places. Though not as widely publicized, a new type of serious hazard was uncovered in 1986 when a giant enveloping bubble of dissolved gases in high concentration, estimated at 1.6 million tons and 150 feet thick, escaped from the bottom of Lake Nyos in Cameroon and asphyxiated hundreds of people in the surrounding areas. Such dissolved gases are under high pressure in deep water, and unstable density stratification can lead to complete churning of a lake, similar to what may happen in liquefied natural gas storage tanks.
Often the object is in the wrong place but the timing is right, resulting in what one might call a providential escape or near-miss event. During the Indian Ocean tsunami, high waves hit an operating nuclear power plant on the east coast of India, and it was safely shut down. But the foundation pit of a 500-MW fast breeder reactor under construction at a nearby location was inundated. Subsequently, the site was moved to higher ground and the design was modified. It was providential that the timing was right: the plant was under construction and not yet operating.

White, Gray, and Black Swans

Thomas Kuhn (1962), in The Structure of Scientific Revolutions, argues that knowledge and ideas evolve through saltatory growth, a series of epistemic paradigm shifts triggered by observed anomalies. Engineering design philosophy has certainly undergone such conceptual changes. The conventional design approach, using factors of safety based on value judgment and past experience, is reasonable for parameters that vary over a limited range and where failure is not generally anticipated. These are the so-called White Swans, completely predictable by well-defined procedures.
The need to model natural and accidental extreme events, with parameters varying over a wider range, led to the development of probabilistic risk-based approaches, in which the operational and life-safety consequences of hazards are linked to their return periods. Nearly all risk assessment scenarios depend on some form of speculative extrapolation: the future is expected to be an extension of the past, and likelihood is estimated from the frequency of past similar events. These extreme-event models rely on meticulously compiled databases for estimating the risk due to random uncertainty and represent the so-called Gray Swans (known unknowns). Such risk modeling certainly represents a paradigm shift, from designing for safety to the explicit consideration of failures in engineering design.
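The return-period framing carries a simple, often underappreciated arithmetic consequence, sketched below with the standard exceedance formula and an illustrative 100-year event over a 50-year design life:

def exceedance_probability(return_period_years, design_life_years):
    """Probability of at least one exceedance of a T-year event in the design life."""
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** design_life_years

# A "100-year" event has roughly a 39% chance of occurring at least once
# during a 50-year design life -- rare per year, but likely per lifetime.
print(exceedance_probability(100, 50))   # ~0.395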
Risk is defined in terms of the likelihood of an event occurring coupled with the resulting consequences, but likelihood under epistemic uncertainty, arising from lack of knowledge, is difficult to estimate. Black Swan events cannot be foreseen from observed data; they are about unknown unknowns, where the absence of likelihood information renders risk management methods futile. In this world, even knowledge is of little use, because experts do not really know what they do not know. Black Swans are not the same as low-probability, high-consequence events, because they are unpredictable by probabilistic means. One cannot estimate the probability of the bombing of the Oklahoma City federal building, since the event was not a random process. These “left field” outliers can vitiate all attempts at prediction, since there is no historical precedent or data for prognostication. Therefore, one needs to shift emphasis from risk management to devising strategies for dealing with the consequences of such unforeseen events.
For a particular subset of Gray Swans that are low-probability, high-consequence events, likelihood information is available, but designing for the extremes is prohibitively expensive. Therefore, risk assessments usually screen out events when there is either an absence of evidence or a negligible chance of occurrence over long return periods. But “absence of evidence is not evidence of absence.” No known hurricane has ever hit the coast of Georgia, even though Florida and South Carolina have been frequently impacted. Does this absence of evidence allow one to assume low hurricane risk for buildings in Georgia? And how does one inspire public trust and confidence in risk assessments when once-in-a-million-years events like the Exxon Valdez oil spill seem to happen so frequently?
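The Georgia question has a standard statistical answer: a clean record bounds the rate only weakly. A sketch using the “rule of three” (the 150-year record length is an assumed figure for illustration, not a value from this article):

# Rule of three: after n event-free observation years, the approximate
# 95% upper confidence bound on the annual occurrence rate is 3/n,
# obtained by solving (1 - p)**n = 0.05 for small p.
def annual_rate_upper_bound(event_free_years):
    return 3.0 / event_free_years

# Even 150 hurricane-free years leave an annual landfall probability
# of up to about 2% statistically plausible -- absence of evidence
# is weak evidence of absence.
print(annual_rate_upper_bound(150))   # 0.02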
A meteorite has less than a one-in-a-million chance of striking any given critical facility, yet some 80 places on earth are struck every year. Even though the likelihood of large seismic events in the midwestern United States is low, such events have occurred. The Titanic is a prime example of low-probability design without consideration of consequences. The vessel was built with a double-bottomed hull divided into 16 watertight compartments and was claimed to be unsinkable, even with four of them flooded; additional lifeboats were therefore deemed unnecessary. Such low-probability, high-consequence events show the perils of discounting risk on the basis of low likelihood of occurrence without adequate consideration of the consequences. Instead, it may be more appropriate to apply the strategies devised for managing the consequences of Black Swans to low-probability, high-consequence Gray Swan events; unlike for Black Swans, however, using such strategies is a matter of choice for Gray Swans. The chemical and process industries have long considered the consequences of adverse events through the development of systematic methods. Civil engineers need to ponder the development of similar consequence-based management strategies to cope with the potential or actual consequences of such large-impact, hard-to-predict events.

Strategies to Manage Adverse Consequences

As Henri Poincaré propounded and chaos theory confirms, nonlinearity puts a limit on forecasting, because small effects can lead to serious consequences, and forecasting extremes is a treacherous pursuit. Consider the shocking collapse of the Ronan Point apartment building in England, an outcome completely out of proportion to its minor trigger, a kitchen gas explosion in one apartment of the 22-story structure. One inconsequential event led sequentially to another, causing a chain reaction with disproportionate consequences, dubbed progressive failure. It is instructive that such minor events, never part of the engineering design basis, can lead to complete building collapse in structures designed for all the extreme scenarios conjectured in building codes.
How should one account for such outlier events in important civil engineering projects? Because such occurrences cannot be predicted, no perfect risk management approach is possible. Since risk management techniques based on likelihood arguments do not address Black Swan outliers, which by definition are unknown, engineers must think through the likely consequences of such occurrences and use experience and judgment to develop suitable design and management strategies. These strategies are equally applicable to low-probability, high-consequence Gray Swan events. Remembering Voltaire's aphorism that “the perfect is the enemy of the good,” some reasonable stratagems for the prevention, detection, control, and mitigation of hazard consequences are discussed here. The desired safety goals can be achieved by choosing an appropriate tactic, or combination of tactics, from among these and other strategies.

Prevention Strategies

Hazard Avoidance via Site Selection

Not every hazard capable of spawning a Black Swan actually produces one, and even when an event occurs, it may have no engineering consequences. Hazard avoidance is therefore often a purposeful strategy during the site-selection process, exercised through a set of guiding principles based on continually updated prior experience. Buildings have long been sited away from flood-prone locations, whatever the cause of flooding may be. Similarly, protection against ground subsidence resulting from mining, sinkholes, and underground fluid withdrawal is best achieved through careful site selection. The best strategy for mitigating landslides is to avoid construction in hazardous areas, to control mass movement through careful land use, and to incorporate these caveats into the planning process and regulations.

Hazard Avoidance using Barriers

After a spate of terrorist incidents targeting overseas facilities, the U.S. State Department has promulgated new guidelines for the protection of embassies, relying primarily on multiple independent safety barriers, including physical barriers. Enhanced security procedures can provide additional protection and reduce the likelihood of loss from unforeseen events. For example, U.S. Coast Guard regulations for the security and safety of offshore liquefied natural gas plants are enforced by designating a 500-meter safety zone around a project location and restricting access through radar surveillance. Similar security regulations have been implemented at chemical facilities by planned safety zones and at maritime facilities by mandatory access control via Transportation Worker Identification Credential cards.

Hazard Avoidance using Event Predictions or Detection

The U.S. National Hurricane Center uses sophisticated mathematical models to predict the time and location of storm landfalls. Such predictions make it possible to prepare for an approaching hazard event and to determine areas for abandonment and evacuation routes, resulting in a drastic drop in human consequences. Chemical and process plants often rely on a variety of visual and audible alarms to detect and warn of impending events. Modern onboard electronic charts can detect hazards and warn the mariner how long it will take to reach one, by integrating real-time information on the ship's position, radar images, tides, currents, water levels, and other meteorological parameters.

Hazard Avoidance from Operational Procedures

Often the approach to hazard avoidance is incorporated in operating manuals, and as long as staff members follow the procedures diligently, safety from hazard events can be achieved. The Port Blair operations manual required vessels to depart the port as soon as intense ground shaking was experienced. Twenty vessels were in the port at the time of the Great Sumatra-Andaman earthquake, and the port authorities followed the established procedure, cutting the mooring lines on all vessels and ordering them to depart immediately, without any knowledge of the impending tsunami. The first tsunami wave hit the location 50 minutes after the earthquake, followed by four others at 30-to-35-minute intervals, but there were no casualties.

Hazard Avoidance through Education and Training

The personnel in air and marine operations, process safety, or other critical activities undergo regular training to update technical knowledge for safely performing their jobs. Pilot training, in both civilian and military aviation, includes tailored training for controllers, dispatchers, and traffic management personnel emphasizing control during weather hazards. For oil spill mitigation, required skills before field deployments include deploying booms and handling boats, sounding the first alarm, paging and communications, designating an on-scene person-in-charge for containment and recovery operations, knowing medical and emergency contact numbers, seeking help from other organizations, and notifying community and regulatory agencies.

Risk-Reduction Strategies

Breaking down into Risk Chunks

A typical risk-reduction strategy is the use of pipeline segmentation and spill-containment devices to limit the extent of potential environmental damage from oil spills. Similarly, the rate at which a fire spreads in a building can be checked by compartmentalizing air conditioning ducts during building design. The provision of crack stoppers in ships and aircraft, which limits damage by confining cracks to discrete chunks, is also a risk-reduction strategy.
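The arithmetic behind segmentation is simple: the worst-case release between two isolation points is bounded by the volume of one segment. A back-of-the-envelope sketch (the pipe diameter and valve spacing are assumed figures for illustration):

import math

# Illustrative figures: a 2-ft-diameter pipeline with isolation valves
# every mile. The worst-case single release is one segment's contents.
diameter_ft = 2.0
valve_spacing_ft = 5280.0

segment_volume_ft3 = math.pi * (diameter_ft / 2.0) ** 2 * valve_spacing_ft
print(f"Worst-case release per segment: {segment_volume_ft3:,.0f} cubic feet")
# Halving the valve spacing halves this bound -- the "risk chunk" shrinks
# in direct proportion to segment length.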

Reducing Risk by System Response Control

Risk can be reduced through negative feedback mechanisms that achieve safe shutdown upon device failure or loss of operator control. The emergency shutdown systems in oil terminals allow quick isolation of pipelines using motor-operated valves (MOVs), thereby limiting any oil spill. Such a system often includes control and communication protocols that coordinate transfer pump shutdown with MOV operation during earthquakes, parting of mooring lines, vessel collisions, or terrorist attacks. Relief valves in process plants, depressurization systems, isolation systems, backup diesel generators for loss of offsite power, the dead man's handle that automatically stops a train, and other similar automatic controls are all examples of using system response control to reduce risks from adverse events.
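The essential point of such coordination is ordering: remove the driving pressure before closing the line. A minimal sketch of that interlock logic (the device names and this simple two-step sequence are hypothetical, not any terminal's actual control system):

class Device:
    def __init__(self, name):
        self.name = name
    def stop(self):
        print(f"{self.name}: stopped")
    def close(self):
        print(f"{self.name}: closed")

def emergency_shutdown(pumps, valves):
    # Stop pumps first so the closing valves never dead-head a running
    # pump and cause a pressure surge in the pipeline.
    for pump in pumps:
        pump.stop()    # step 1: remove the driving pressure
    for valve in valves:
        valve.close()  # step 2: isolate the pipeline segments

emergency_shutdown(
    pumps=[Device("transfer pump P-1")],
    valves=[Device("MOV-101"), Device("MOV-102")],
)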

Use of Technological Devices

The risk of seismic failure in engineered structures is often reduced with technological control devices such as base isolation systems that counteract earthquake loads. Seismic isolation of bridges with a friction pendulum system is highly effective in reducing internal member forces during earthquakes, thereby reducing risk. It is also possible to reduce the ground motion energy transferred into a structure by adding a flexible layer between the superstructure and the foundation. Base isolation technology has been used extensively in Japan and is increasingly being considered for important projects in California.
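The mechanics can be summarized in one relation. For an idealized single-degree-of-freedom structure of mass m and lateral stiffness k, the natural period is

T = 2\pi\sqrt{m/k}

Inserting a flexible isolation layer lowers the effective stiffness k, which lengthens T and shifts the structure away from the short-period range where earthquake spectral accelerations are typically largest. This single-degree-of-freedom idealization is a textbook simplification offered for intuition, not a design method from this article.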

Risk Transfer Strategies

Insurance and Contracts

The common approach to transferring risk from one party to another is insurance against the consequences of a hazard, or contract clauses. Insurance converts an uncertain exposure into a certain cost. A general contractor often transfers the risk of paying liquidated damages by including similar clauses in subcontractor agreements. It may also be possible to use exemption clauses or warranty agreements to avoid certain risks or the adverse consequences flowing from them. Such clauses are frequently built into contracts to safeguard the owner's interests against the consequences of unanticipated events.

Sovereign Warranty

U.S. companies are often hobbled by liability issues when venturing abroad. For example, to build nuclear power plants overseas, the target country needs to have signed the Convention on Supplementary Compensation, an international treaty that created a fund to pay the victims of nuclear disasters. In the absence of such ratification or of sovereign warranties, reactor builders would have to shoulder the prohibitive cost of civil liability for accidents.

Design-Based Strategies

Passive Safety via Inherently Safe Design

Inherently safe design is one where there is a low level of danger even if things go drastically wrong. The concept originated in response to the significant contribution of human factors to the Three Mile Island and Chernobyl accidents, and is incorporated in the new generation of advanced reactor designs. The Westinghouse AP600 design enhances safety by drastically reducing the number of valves, pumps, heating, ventilating, and cooling units, and by using gravity-fed water from tanks on top of the containment and natural circulation to cool the reactor core and transport heat to the atmosphere. General Electric’s Advanced Boiling Water Reactor design eliminates the recirculation pumps and provides other passive safety features that are automatically initiated during a loss-of-coolant accident.

Control of System Behavior

Engineers often control system failure behavior by introducing dominant failure modes to ensure that failure occurs in a predesignated safe mode, much as fuses in electrical circuits prevent equipment damage. Designers strive to control plastic hinge locations in structures to achieve designated failure modes. The key strategy for protecting a building from the high winds of tornadoes, hurricanes, and gust fronts is to maintain the integrity of the building envelope, including the roof and walls, and to design the structure to withstand the expected lateral and uplift forces, shifting failure to local mechanisms.

System Resilience through Robustness

A resilient system survives and carries on its essential functions despite variations beyond the design envelope, by adapting to a new, safe equilibrium state (e.g., via load redistribution that avoids cascading failure). During the Indian Ocean tsunami, the kinetic energy of large ships impacting wharf structures at 5 to 10 knots was not part of the normal design basis, producing dynamic loads far exceeding the design capacity, yet the wharf structures survived because of their robustness. Structural robustness is desirable to achieve tolerance of damage, prevent disproportionate consequences, and avoid progressive failures (Nafday 2008).
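A back-of-the-envelope comparison shows why such impacts lie so far outside the design envelope: kinetic energy grows with the square of velocity. The figures below (a 50,000-tonne vessel and a 0.15 m/s normal berthing contact) are assumed for illustration only:

# Kinetic energy E = 1/2 * m * v^2, reported in megajoules.
ship_mass_kg = 5.0e7      # hypothetical 50,000-tonne vessel
KNOTS_TO_MS = 0.5144

def kinetic_energy_mj(speed_ms):
    return 0.5 * ship_mass_kg * speed_ms ** 2 / 1.0e6

print(kinetic_energy_mj(0.15))               # normal berthing: ~0.6 MJ
print(kinetic_energy_mj(7.0 * KNOTS_TO_MS))  # 7-knot tsunami drift: ~320 MJ
# A several-hundred-fold increase in impact energy over normal berthing.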

System Resilience through Redundancy

System resilience can be achieved by providing redundant designs and alternative load paths so that the loss of a single component does not lead to overall structural collapse. Redundancy can be active or passive, though most structural redundancy is active. An example of passive redundancy is providing a bridge with cables that are activated only if regular components are damaged. Redundancy remains susceptible to common-cause failure events such as earthquakes. Common-cause failure is a concern, for example, when locating the control room in a process plant or when maintaining a pressurized environment in rooms housing electrical equipment.
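The reliability literature quantifies this vulnerability with the beta-factor model, in which a fraction beta of component failures is assumed to strike all redundant channels at once. A minimal sketch (the probabilities and beta value below are illustrative assumptions, not data from this article):

# Beta-factor model: a fraction `beta` of failures is common cause and
# defeats all n redundant channels simultaneously; the rest occur
# independently.
def system_failure_prob(p, n, beta):
    independent = ((1.0 - beta) * p) ** n   # all n channels fail on their own
    common_cause = beta * p                 # one shared cause fails them all
    return independent + common_cause

print(system_failure_prob(p=1e-3, n=3, beta=0.0))   # ~1e-9: the naive promise
print(system_failure_prob(p=1e-3, n=3, beta=0.1))   # ~1e-4: beta dominates
# Even a modest common-cause fraction erases most of the benefit
# of triplication.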

Strategy Based on Regulations

The Oil Pollution Act of 1990 was largely driven by the catastrophic Exxon Valdez tanker spill, and the marine industry implemented a number of initiatives that produced declining trends in the volume of oil spilled into U.S. waters. Similarly, the Union Carbide disaster at Bhopal was the main precursor of the Occupational Safety and Health Administration's process safety regulations. The Ronan Point building collapse resulted in the promulgation of regulations requiring the consideration of abnormal loads on structures, and in theoretical developments in progressive collapse analysis. ASCE Standard 7-05 (ASCE 2006) provides that buildings and other structures shall be designed to maintain stability without disproportionate damage from initial local damage. The emphasis is on achieving this mainly through redundancy, transferring loads from any locally damaged area to adjacent regions to prevent disproportionate collapse.

Strategy Based on Control of Consequences

Designers can cleverly tweak their designs to control adverse consequences. For example, the risk of an oil spill from a pipeline attached to a structurally inadequate wharf can be addressed by separating the pipeline from the wharf and carrying it on its own trestle. This simple strategy of controlling adverse consequences avoids spending major resources on wharf upgrades while still achieving the objective of preventing oil spills.

Strategy Based on Response

Response and restoration is the final option when every other strategy has failed. It requires a specialized organization with the knowledge and resources to handle the response, which can be exceedingly complex. At the national level, the Federal Emergency Management Agency deals with emergency response, covering escape, evacuation, sheltering, and training.

Conclusion

The metaphor Black Swan refers to the unexpected, an unlikely but not impossible catastrophic event that no one seems to plan for, the things one does not know or does not know that one does not know. As the likelihood of occurrence for these unforeseen events cannot be estimated from observed data or prognosticated by experts, the usual risk management techniques are inapplicable for addressing the impact of Black Swan events on vital engineering facilities. Therefore, engineers must think through the likely adverse consequences of such unpredictable events and use their experience and judgment to devise suitable strategies focused on managing the consequences of these outliers.

References

ASCE. (2006). “Minimum design loads for buildings and other structures.” ASCE 7-05, Reston, Va.
Kuhn, T. S. (1962). The Structure of Scientific Revolutions, 3rd Ed., The University of Chicago Press, Chicago.
Nafday, A. M. (2008). “System safety metrics for skeletal structures.” J. Struct. Eng., 134(3).
Taleb, N. N. (2007). The Black Swan: The Impact of the Highly Improbable, Random House, New York.

Biographies

Avinash M. Nafday is with the California State Lands Commission, Marine Facilities Division, Long Beach, Calif. He can be reached by e-mail at [email protected].
