Forum
Apr 1, 2009

Reactions to Infrastructure Failures

Publication: Leadership and Management in Engineering
Volume 9, Issue 2


After a tragedy, such as the collapse of Interstate 35W (I-35W) over the Mississippi River in Minneapolis, Minnesota, our inclination, not to mention that of our legal system, is to look for and find somebody to blame. Somebody did something wrong, and they either knew or—I love this phrase—should have known that the decision was wrong. The mistake is assumed to be intentional or negligent, and the perpetrator should pay for it. But think about that scenario—and the ones you work with every day—a little more before passing judgment and prescribing a fix that may be irrelevant.
No one involved had anything to gain by the bridge collapse, not Sverdrup & Parcel, not erection subcontractors, not the steel fabricators, not the Minnesota Department of Transportation (MNDOT), not the design engineers, not the bridge maintainers (MNDOT or contractors), and not the public. Everyone involved could only benefit from a safe, sound bridge. Thus, to assume that someone knowingly cut corners to save a few dollars—even if that meant several thousand dollars in this case—by ignoring or minimizing safety considerations makes no sense. That scenario requires too many people to break the rules to be believable. There may be evidence that an engineer made a mistake when designing the gusset (and who knows how many others), but that same mistake was also made by everyone else who reviewed and approved the design. For this design to have been accepted, everybody involved must have honestly seen no real problem.
In a book entitled The Challenger Launch Decision (1997, University of Chicago Press), sociologist Diane Vaughan examines a tragedy with a number of elements similar to the I-35W bridge failure. Her contention is that nobody involved with the design, manufacture, and use of the solid rocket boosters broke any rules—rules that were largely in place as the space shuttle program began. What she found was an insidious process she terms “the normalization of deviance,” where an anomaly, or deviance from projected results, arises, often very early in the process. In such situations, very rational decision guidelines allow, or even encourage, both engineers and managers to accept what appears to be a small risk of failure instead of stepping back and redesigning the component. Gradually, some level of anomalous performance is accepted as “normal,” as long as the overall system does not fail. In the case of the Challenger, virtually everyone involved was aware that the solid rocket booster joints never performed as they were designed to, but they believed in an untested claim of redundancy (the second O-ring). Even with significant O-ring damage as early as the second flight, they accepted the risk since nothing crashed, that is, nothing until the Challenger. Even the skeptics ultimately accepted the joint design as an “acceptable risk.”
The bulging gusset plate on the I-35W bridge matches this scenario rather well. Obviously, it was not designed to bulge, but it did. There is more than one way it could have happened, but that is not as important as the fact that it did happen. A full engineering analysis, redesign, and installation of this and similar joints would have been time consuming and expensive, but easily worth it if that would have prevented a failure. But failure didn’t appear to be imminent. Unlike the Quebec Bridge, which failed during construction, this bridge stood, and the bulge was rationalized into an acceptable risk by all concerned because the bridge didn’t fall, at least not for many years. Because the joint carried its loads, even though it did so in an unplanned fashion, it could be managed through inspection. If it ever needed replacement, that could be done if and when it was necessary. This, too, is the normalization of deviance, and at each step in the bridge’s life—from conception to failure—all involved were confident that they were acting properly, according to the rules, and in the public’s interest. They were no doubt more surprised by the failure than anybody else.
Interestingly, one of the questions posed after the Challenger tragedy went something like this: If you saw a bridge with a crack one-third of the way through a beam, would you think it was safe to cross? (At that point, the worst O-ring erosion seen was about that severe.) Everyone replied, “No way!” But I submit that this bulging gusset was the same thing, and everyone had been replying, “Yes, of course,” for years.
Even though the acceptable safety factors are different for aerospace and civil engineering, the principle is the same. Both joints were intended to function repeatedly without permanent distortion or component deterioration, but both essentially “cracked” without failing completely. Yet both remained in service. The longer they worked, the less attention they received and the more “normal” they seemed to be, until they ultimately failed catastrophically. Economic pressures will continue to push toward the least design redundancy and the smallest safety factors that we as engineers will accept. These competitive pressures will not go away, which means that every detail becomes increasingly critical to a structure’s performance. Safety and reliability analysis is increasingly done statistically, using a variety of “what if” factors that are, at best, educated guesses. This places a tremendous burden on engineers to make sure that not only the various “what if” factors but also the final result make sense and can be believed.
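To illustrate the kind of statistical analysis described above, a standard first-order sketch (assuming the member resistance R and the load effect S are independent and normally distributed, an assumption the designer must justify) compares their means and standard deviations through the reliability index

\[
\beta = \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}},
\qquad
P_f = \Pr(R - S < 0) = \Phi(-\beta).
\]

Every “what if” enters through the assumed means and standard deviations, which is precisely where the educated guesses come in and where the result must be checked against engineering judgment.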
Technology history is replete with examples of envelopes being pushed until they fail. In some cases, the engineers knew they were pushing limits and took what they believed were adequate measures to compensate for the unknown elements. In most cases they were successful, even if they didn’t fully understand why, but failures sometimes occurred. At that point, the boundaries were reassessed in detail, and the body of knowledge grew a bit. In other cases, like the I-35W bridge, an anomaly occurred that involved a very common and well-known structural element. Like all such projects, this bridge had its unique design elements, but the common elements were not deemed risky enough to warrant special attention, even in the face of clear evidence to the contrary. Failures are expected to result from new design elements, not mundane ones, and the emphasis is usually placed on these new elements. Besides, safety factors and redundancies guard against catastrophic failure. Several things generally have to go wrong at once to have a catastrophic failure, and that’s not statistically likely. Thus, it is easy to dismiss the little indicators of problems that chip away at the safety factor.
Obviously, people made mistakes at several junctures in both of these examples, even though nobody thought the problems serious enough to blow the whistle and regroup. Even the skeptics bought in long before either failure. Either tragedy would be easier to deal with if, in fact, someone had done something wrong, especially if they had tried to conceal it. As it is, our work cultures often make it possible, or even likely, that we fall into such traps from time to time. We can easily become victims of our own success. We must get better at recognizing and evaluating the risks of these situations, especially as we continue to engineer excess material out of our designs. This kind of thinking is different from the classic engineering analysis methods that most of us learned, but it is no less critical.
Our educational process can help a lot here. Students should encounter engineering problems that require complex, reasoned thought about things that are inexact by their very nature and frequently in conflict. This type of thinking should not be withheld until the senior-level “design” classes, but rather should begin with the introductory classes. I’ve done it in statics classes by throwing “red herrings” into the problem definitions, or expecting the students to either find or estimate some needed coefficient. Structures courses can include elements such as material degradation over time, or the possibility of inferior fabrication. How do they occur? How much is too much? And the quality of this reasoning needs to be evaluated and graded. It’s not enough for the instructor to automatically accept the student’s conclusion and then only evaluate the calculations that follow. This is a large part of what “thinking like an engineer” is all about, and it is a crucial element of successful practice.

Information & Authors

Information

Published In

Leadership and Management in Engineering
Volume 9, Issue 2, April 2009
Pages: 57–58

History

Received: Dec 28, 2008
Accepted: Jan 21, 2009
Published online: Apr 1, 2009
Published in print: Apr 2009

Authors

Affiliations

J. Lawrence Lee, Ph.D., P.E.
