The Normalization of Deviance
December 27, 2015 12:01 PM   Subscribe

In 2014 a Gulfstream plane crashed and burst into flames in Bedford, Massachusetts, killing seven people (NTSB animation). Aviation writer Ron Rapp argues that the cause was not defective equipment or simple complacency, but the normalization of deviance, whereby "people within [an] organization become so much accustomed to a deviant behavior that they don’t consider it as deviant, despite the fact that they far exceed their own rules for the elementary safety." This was also considered to be a factor in the crashes of the space shuttles Challenger and Columbia. The creator of the concept and author of The Challenger Launch Decision, sociologist Diane Vaughan, is interviewed here. (transcript)
posted by desjardins (105 comments total) 98 users marked this as a favorite
 
Make a checklist. Follow it. Every. Single. Time.

Planes.
Operating Theater.
Flood Gate Operator.
Getting off to work.
posted by sammyo at 12:14 PM on December 27, 2015 [16 favorites]


Good god, I fight that every day. A unique situation will come up that requires us to make an exception to a rule, so I agree to let them do it. Next thing I know, engineering is using that as their actual justification to do the same thing later, when we could actually follow the real preferred method if we wanted to.

Luckily, I don't mind being the asshole who "makes everything harder than it needs to be."
posted by ctmf at 12:19 PM on December 27, 2015 [28 favorites]


Wow this is a little close to home.
Social normalization of deviance means that people within the organization become so much accustomed to a deviant behavior that they don’t consider it as deviant, despite the fact that they far exceed their own rules for the elementary safety.
Sounds like every company I've ever worked for. Shit... it sounds like ME.
posted by zenhob at 12:24 PM on December 27, 2015 [4 favorites]


sammyo, the "getting off to work" one reminds me of this (horrifying) article about leaving babies in cars. (mefi post)
posted by desjardins at 12:27 PM on December 27, 2015 [4 favorites]


This isn't just a workplace issue; why do you think rape culture is so persistent?
posted by cstross at 12:27 PM on December 27, 2015 [25 favorites]


IIRC, the BP/Deepwater Horizon/Macondo disaster is another example.
posted by ZenMasterThis at 12:28 PM on December 27, 2015 [3 favorites]


I'm a project manager and I get pushed constantly on deadlines, but my techs have to follow FDA regulations, so I have to push back to give them enough time to follow them to the letter. It's really difficult when the person is high up on the chain (I've gotten emails from the VP of a huge corporation - how do you say no to him?). But this concept makes me think hard before relaxing the rules, because although problems are unlikely, they could have major consequences down the line.
posted by desjardins at 12:31 PM on December 27, 2015 [17 favorites]


This is why user-centered design is so important. User-centered design results in tools and procedures that the actual users need and can't do without.

This is also why we can't engineer and technologize our way out of problems that stem from human nature and culture. You can't just throw more and more Engineer's Disease at the wall until something sticks. You have to look and listen first.
posted by bleep at 12:36 PM on December 27, 2015 [7 favorites]


Good god, I fight that every day. A unique situation will come up that requires us to make an exception to a rule, so I agree to let them do it. Next thing I know, engineering is using that as their actual justification to do the same thing later, when we could actually follow the real preferred method if we wanted to.

Worse, part of the deal the first time was some extra mitigations for the increased risk. Every subsequent proposal seems to lose some of the mitigations. Since nothing bad happened, they must have been "unnecessary." Wrong.
posted by ctmf at 12:37 PM on December 27, 2015 [16 favorites]


It's literally the new normal.

I work in an industry which is regulated by the government, not because of the possibility of contamination or accident, but because the temptation to commit fraud is so ever-present. In 30 years I haven't ever gone more than a year or two without a customer asking me to help them make their scale lie. Most of them are shocked, shocked I tell you to be told (1) they are not geniuses for thinking of this and (2) it is illegal as hell.
posted by Bringer Tom at 12:39 PM on December 27, 2015 [22 favorites]


Wait wait wait. I had early insider information about the Challenger disaster (a guy I knew worked for a space contractor and he had shared with me a report that outlined all the problems that led to the explosion easily a year before the "official reports") and the number of warnings which were reported about the freezing temperatures and the O-rings were legion. That they were ignored, maybe that's a symptom of this, but there were a LOT of people who Had The Deep Knowledge who were saying "this is a bad thing".... A tragedy, but I'd hardly call that "this organization was used to deviant behavior" and more "people in charge didn't listen to people with knowledge".
posted by hippybear at 12:40 PM on December 27, 2015 [6 favorites]


This isn't just a workplace issue; why do you think rape culture is so persistent?

that raises a difficult point. the article argued that "less stick more carrot" is appropriate when addressing the issue for pilots. to which i nodded my head sagely and continued reading. the same response to rape culture seems, well, inappropriate. why?

(sorry if i'm missing some absurdly obvious difference; also, i'm not arguing rape is "ok" in any way...)
posted by andrewcooke at 12:41 PM on December 27, 2015 [1 favorite]


hippybear, if people in charge aren't listening to the people with knowledge, that's kind of like they decided the deviance was OK and ignored it. They aren't really talking about the deviant behavior of people so much as the deviant behavior of the system, which can include those pesky O-rings not packing properly.
posted by Bringer Tom at 12:43 PM on December 27, 2015 [12 favorites]


Kind of in the opposite situation in my current organization. The rules are so strict that it's close to impossible to get anything done, even stuff that really doesn't have any significant risk, and that utterly destroys our productivity. The normalization effect is related though -- we're so used to missing deadlines that it doesn't raise any alarm bells when we do. Quality is great but at some point you have to actually accomplish a task and cash a check if you want to keep going.
posted by miyabo at 12:43 PM on December 27, 2015 [4 favorites]


I suspect the rape culture angle is veering severely off-topic, but ...

The reason "less stick more carrot" works for aviators is that it's about reinforcing good practices that everyone agrees with the intent of; nobody wants to die in a fireball! So the intent to be a good pilot is already there, it's just a matter of reinforcing good practice.

This is a different situation from one where the bad actor doesn't want to be good, and such situations obviously require a different approach. (I'm now trying to think of some other common situations analogous to this that don't hinge on explicitly criminal behaviour. Reducing the tendency of gun owners to try and solve confrontations by blazing away instead of disengaging and de-escalating, perhaps?)
posted by cstross at 12:47 PM on December 27, 2015 [20 favorites]


Overly aggressive/confrontational policing practices and wearable cameras, perhaps?

Well, and people who think they are geniuses because they've realized it's cheaper to make the scale lie than to actually ship more product or explain why you raised your prices.
posted by Bringer Tom at 12:48 PM on December 27, 2015


the same response to rape culture seems, well, inappropriate.

I had the same thought about how the normalization of deviance is exactly what's being fought in terms of rape culture and comparable social (rather than professional) issues. I think the carrot in that case is something like Jay Smooth's tactics for calling out racism: structuring the callout in such a way that you emphasize the behaviour over the person, and how it's a fixable act rather than a character flaw. In both cases it's being careful to seek the desired behaviour in a way that's constructive rather than punitive.
posted by fatbird at 12:48 PM on December 27, 2015 [11 favorites]


miyabo, that's interesting. We fight that, too. How do you think you got there?

Where I work, to some extent that's the price of being safe. But we also have to keep an eye on our own behavior - it gets out of hand, we find, when accountability starts to slide. It's much easier in a critique to avoid blame and call it a process problem. We'll just change the process to make that impossible! But way more cumbersome and paralyzing. Sometimes it really is "I fucked that up, and the person who was supposed to be double-checking didn't catch it." The process is fine, we just didn't do it.
posted by ctmf at 12:51 PM on December 27, 2015 [3 favorites]


Our main office does a lot of work in chemical plants along the Mississippi corridor, so we're used to the kind of hyper-paranoid safety culture miyabo describes. Those places would succumb to the OP phenomenon in about a week if they weren't so vigilant.

Some of our other offices, though, not so much. One office has mostly done work in food plants, which are much more lax because the margins are so narrow, but one day I found myself doing a little R&D call at $Large_international_company with our branch manager. I saw a bunch of signage and asked "Do we need PPE to go see this scale?" and he just blew it off, naw, we're only going to be there a few minutes. Under an automatically controlled dump chute.

So we're there about two minutes and the plant safety guy shows up, and basically tells us we'll never get back in the plant if we ignore the signs again. And on the way out, my guy says "That guy was kind of an asshole." And I had to spend the rest of the trip back to our shop explaining that if he wasn't kind of an asshole everyone would ignore him, so that's kind of his job.
posted by Bringer Tom at 1:00 PM on December 27, 2015 [15 favorites]


The NTSB report (pdf) is extremely interesting, particularly the section on procedural non-compliance:
Research observations and airline industry data indicate that procedural noncompliance is not uncommon in professional aviation. The authors of a 1990 National Aeronautics and Space Administration study reported that two out of six airline crews they observed on one particular wide-body airplane type neglected to perform all flight phase checklists during a flight (Degani and Wiener 1990). Line operations safety audit data from more than 20,000 airline flights conducted between 1996 and 2013 revealed that 49% of such flights involved at least one instance of intentional noncompliance (Werfelman 2013). In addition, relatively recent data from airline FDM programs indicated that flight crews continue 97% of unstable approaches to landing, which is against most airline policies (Burin 2011). Thus, procedural noncompliance occurs during normal operations, even among the flight crews of major airlines.

[...]

When flight crewmembers perform a routine check repeatedly over a long period of time and never encounter an example of its effectiveness as a safety protection, they may experience a decreased perception of the check’s importance (Degani and Wiener 1990). As a result, they may begin to skip the check and reallocate their efforts toward other goals that they regard as more important. Such changes can lead to the development of new group norms about what is expected and an increasing mismatch between written guidance and actual operating practice. This increasing mismatch has been described as “procedural drift” (Dekker 2006). Procedural drift likely played a role in the accident flight crew’s procedural noncompliance.
posted by Westringia F. at 1:00 PM on December 27, 2015 [18 favorites]


Some relevant passages from Richard Feynman’s Personal observations on the reliability of the Shuttle:
We have also found that certification criteria used in Flight Readiness Reviews often develop a gradually decreasing strictness. The argument that the same risk was flown before without failure is often accepted as an argument for the safety of accepting it again. Because of this, obvious weaknesses are accepted again and again, sometimes without a sufficiently serious attempt to remedy them, or to delay a flight because of their continued presence.
and:
If a reasonable launch schedule is to be maintained, engineering often cannot be done fast enough to keep up with the expectations of originally conservative certification criteria designed to guarantee a very safe vehicle. In these situations, subtly, and often with apparently logical arguments, the criteria are altered so that flights may still be certified in time. They therefore fly in a relatively unsafe condition, with a chance of failure of the order of a percent (it is difficult to be more accurate).
(The entire document is well worth reading, and details both the engineering and organizational issues involved in the disaster.)
posted by mbrubeck at 1:13 PM on December 27, 2015 [5 favorites]


There's also the danger of encouraging this with mixed messages about otherwise good things like "continuous improvement," "streamlining for efficiency," "bottom-up decision making." In most cases, people don't think they're sabotaging a necessary thing just because they're lazy. They actually think they're removing some of the bureaucratic cruft from the process, which is a good thing in spirit.

I'm all for new ideas. If we're doing something that's stupid, tell me and we'll look at it. But I also make it very clear that the person who can authorize deviation from the written procedure is "not you" and we make getting caught doing that a serious offense. Sometimes that person is not even me, and it's my job to know clearly, exactly what the scope of my discretion to authorize dubious improvements or workarounds is.

Which can work against the theory of pushing authority and responsibility downward as much as possible. We don't want workers to have to stop and ask permission to scratch their ass, as the saying goes. Finding the right balance is harder than it sounds.
posted by ctmf at 1:21 PM on December 27, 2015 [14 favorites]


It's interesting that the section about adding video monitoring of pilots mentions all the ways pilots are monitored already. If there's a recording of voice and data for every flight, it's obvious that either no one ever reviews the recordings or, if they do, the deviations don't raise any red flags. Either way, it's not good.
posted by tommasz at 1:23 PM on December 27, 2015 [4 favorites]


"That guy was kind of an asshole."

I've always found it frustrating that the person whose job it is to make sure you stick to a process is labelled in such a personal way. Not every company has good process and not every process created makes sense to follow, but to make it into a personality flaw? Argh.

(I used to be a PM and am fascinated by what makes some people reject process outright, especially since I myself have major issues following authority - that I understand and even sympathize with. It's the people who don't care about how their actions affect others that I don't get.)
posted by A hidden well at 1:25 PM on December 27, 2015 [10 favorites]


I'd hardly call that "this organization was used to deviant behavior" and more "people in charge didn't listen to people with knowledge".

Except that "ignoring the people with the knowledge" is an organizational issue that existed loooong before Challenger. As in, this was normalized...deviant...behavior.

See also: Apollo I. But that was hardly the first time, it was just one of the first worst.

You can't not have normalized deviance, is the whole point. It is human nature, and collectively it is organizational nature. It's very difficult to do anything perfectly, so there is always fault tolerance and risk tolerance. And the tolerance of the tolerance always slips, because the human brain is designed to do that. That's how living organisms that require food didn't starve to extinction 12 hours after coming into existence.

And every organization that requires money in order to perform operations always has the additional pressure of standards versus budget. The Feynman report linked above is a great example of absolutely ANY project with more than four moving parts. I see echoes of it every time I do a tiny piddly project or a full software implementation. I can often tell my boss and the customer up front 12 things that I can see are very likely to go wrong and they'll tell me they're willing to take that risk, and I'm the one who ends up throwing a tantrum at the end when I'm blamed for the 10 of those items that failed.
posted by Lyn Never at 1:26 PM on December 27, 2015 [20 favorites]


Interesting, we were just talking about this last night. Somehow got on the subject of how John Denver died. My wife was asking why people would take those chances. I said if you do something all the time and start to take shortcuts and nothing goes wrong you’re naturally inclined to start thinking the rules aren’t necessary. "It’ll be fine".

People have rolled their eyes at me because I’m so meticulous about multiple data backups. Until they lose something.
posted by bongo_x at 1:28 PM on December 27, 2015 [6 favorites]


This isn't just a workplace issue; why do you think rape culture is so persistent?
And over on the other side of the cultural divide, I'm pretty sure my first encounters with the phrase "normalization of deviance" were in conservative screeds against the growing acceptance of Gay Rights. Would it maybe be better to limit its use to discussions of engineering and workplace safety? I don't know...
posted by Trinity-Gehenna at 1:41 PM on December 27, 2015 [4 favorites]


I don't think it's actually inevitable that standards decline. Naval Reactors and the Navy's SUBSAFE program are two examples of how the culture can be upheld (both participated as experts in the Columbia investigation). Ultimately though, both of those programs cost a crap ton of money. If safety isn't worth money to management, no program is going to work. And in safety, it's a negative result - when you're winning, there's no evidence to point at to say where that dollar was spent.
posted by ctmf at 1:49 PM on December 27, 2015 [7 favorites]


Hippybear, my grandfather worked on the systems that separated the external rockets from the shuttle itself, and he also said that there were repeated warnings about the o-rings not holding up at low temperatures, and that those warnings were disregarded.
posted by Trinity-Gehenna at 1:51 PM on December 27, 2015


No, it was not Sagan. It was reported in the material I read only a month after the accident that it was the cold that cracked the rubber of the o-rings, and it was not an article written by Sagan. And it had apparently been a matter of discussion for quite some time before the critical failure.
posted by hippybear at 1:56 PM on December 27, 2015


The behavior of the o-rings was documented on multiple previous missions. The Feynman report only refers to them [previous issues] obliquely but it was a documented issue and there's nothing shocking about their failure. The issue was "certified" (passed by Flight Readiness checks) on multiple missions:

The history of the certification and Flight Readiness Reviews will not be repeated here. (See other part of Commission reports.) The phenomenon of accepting for flight, seals that had shown erosion and blow-by in previous flights, is very clear. The Challenger flight is an excellent example. There are several references to flights that had gone before. The acceptance and success of these flights is taken as evidence of safety. But erosion and blow-by are not what the design expected. They are warnings that something is wrong. The equipment is not operating as expected, and therefore there is a danger that it can operate with even wider deviations in this unexpected and not thoroughly understood way. The fact that this danger did not lead to a catastrophe before is no guarantee that it will not the next time, unless it is completely understood. When playing Russian roulette the fact that the first shot got off safely is little comfort for the next. The origin and consequences of the erosion and blow-by were not understood. They did not occur equally on all flights and all joints; sometimes more, and sometimes less. Why not sometime, when whatever conditions determined it were right, still more leading to catastrophe?

posted by Lyn Never at 1:58 PM on December 27, 2015 [9 favorites]


You can't not have normalized deviance, is the whole point. It is human nature, and collectively it is organizational nature. It's very difficult to do anything perfectly, so there is always fault tolerance and risk tolerance. And the tolerance of the tolerance always slips, because the human brain is designed to do that. That's how living organisms that require food didn't starve to extinction 12 hours after coming into existence.

And some people have the unmitigated gall to call this nonsense "intelligent design."
posted by Faint of Butt at 1:59 PM on December 27, 2015 [1 favorite]


ctmf, I think the very fact that you need such a costly program lends credence to the idea that "normal deviation" is human nature. Otherwise you wouldn't need so many resources to combat it.
posted by desjardins at 2:01 PM on December 27, 2015 [2 favorites]


Is that verifiable?
Couldn't say, honestly. I'm just going by what he said in the weeks immediately following the accident. I was just a kid at the time, but I remember my grandfather being absolutely livid that the warnings had been ignored.
posted by Trinity-Gehenna at 2:02 PM on December 27, 2015 [1 favorite]


I can confirm through an acquaintance who worked for Morton Thiokol on the boosters at the time of the Challenger disaster that it was more or less common knowledge among the engineers that they had warned about issues with the low temperatures on the Challenger launch day. He told me that years and years ago, closer to the time of the event.

This poor guy had the misfortune to work on both Shuttle disasters; he was a payload specialist on the Columbia mission. He was really upset, having gotten to know the crew very well over a period of months.
posted by C.A.S. at 2:02 PM on December 27, 2015 [1 favorite]


See also factory farming, social strata, burning hydrocarbons to power things, imprisoning people for ingesting plants, and many other examples.
posted by anarch at 2:09 PM on December 27, 2015 [2 favorites]


I have been bullied out of at least one job for going to management about this shit with regard to food safety and refusing to participate in unsafe food prep shortcuts.
posted by Dysk at 2:13 PM on December 27, 2015 [10 favorites]


I had early insider information about the Challenger disaster (a guy I knew worked for a space contractor and he had shared with me a report that outlined all the problems that led to the explosion easily a year before the "official reports") and the number of warnings which were reported about the freezing temperatures and the O-rings were legion

That wasn't the normalization of deviance. The O-rings were considered life-critical safety systems. They *were not allowed to show failure* at any time, by NASA rules; if they did, they were required to be redesigned. If a backup O-ring held when the primary didn't, the primary was required to be redesigned to eliminate its failure.

That was the rule. The deviance was that O-rings had already shown burn-through before STS-51L, and yet NASA ignored its own rules and didn't redesign them.

The O-rings-in-the-cold issue? It's actually almost irrelevant. The O-rings on the SRBs before STS-51L were already failing in flight, and NASA rules said that this could not be tolerated.

And yet, since the missions before STS-51L didn't fail, it was tolerated. And then Challenger LOCV'd -- ship gone, crew dead.

That's normalization of deviance. Having a rule to prevent something from happening is useless if the organization *repeatedly* ignores that rule. The O-rings, up to STS-51L, did not comply with the required standards. They were used anyway, despite evidence that they were failing. Had they been redesigned after STS-1 -- the first mission, which showed evidence of burning through -- then STS-51L isn't lost.

After they were redesigned, after STS-51L, then they never failed again in over 100 launches. Of course, they later did it again, repeatedly ignoring foam strikes on the orbiter, despite rules that said foam shedding was not allowed. It happened to STS-107, and this time, it broke the leading edge of a wing, and left the thermal protection fatally compromised. Not only did they ignore their own rules, they then decided to ignore the evidence that there was a problem, and not take further imagery to see if the orbiter was intact.

And...another crew died. Because breaking the rules didn't seem to matter, so they didn't really care about breaking them. Normalization of Deviance.

If there's a rule to prevent a 1 in 100 chance of vehicle loss, and you ignore it 20 times and don't lose a vehicle, it does *NOT* mean that ignoring the rule is safe. It means you've been lucky.
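To put rough numbers on it (a back-of-the-envelope Python sketch, assuming each flight is an independent 1-in-100 risk, which is itself a simplification):

# Chance of getting away with ignoring a "1-in-100 loss" rule n times in a row.
p_loss = 0.01
for n in (1, 20, 100):
    p_survive = (1 - p_loss) ** n
    print(f"{n:3d} flights: {p_survive:.0%} chance of no loss, "
          f"{1 - p_survive:.0%} cumulative chance of losing a vehicle")

Twenty "lucky" flights still carry roughly an 18% cumulative chance of a loss. That's luck, not safety.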
posted by eriko at 2:16 PM on December 27, 2015 [68 favorites]


Humans suck at following checklists. Checklists like this should be seen as a temporary solution while the system is redesigned to prevent it from entering a failure mode like this. There is no reason the plane can't detect the failure to disengage the gust lock and auto-abort the takeoff.
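A minimal sketch of the kind of interlock I mean (hypothetical names and thresholds, not how any real avionics are wired):

# Hypothetical takeoff-configuration check: if the gust lock is still
# engaged when the throttles come up, warn and reject the takeoff.
def takeoff_config_check(gust_lock_engaged: bool, throttle_pct: float) -> str:
    if gust_lock_engaged and throttle_pct > 40:
        return "CONFIG WARNING: gust lock engaged -- reject takeoff"
    return "config OK"

print(takeoff_config_check(True, 80))   # the scenario in the article
print(takeoff_config_check(False, 80))  # normal takeoff roll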
posted by humanfont at 2:19 PM on December 27, 2015 [1 favorite]


Flight control checks were not performed on this flight, nor were they ever performed. Hundreds of flights worth of data from the FDR and pilot interviews confirm it.

I think I might be nervous during my upcoming flights for the first time in my life. That is fucking terrifying.
posted by bq at 2:24 PM on December 27, 2015 [1 favorite]


Major airline pilots, particularly in N. American/European airlines, are not likely to skip flight control checks. This was corporate aviation and I wouldn't worry bq
posted by C.A.S. at 2:27 PM on December 27, 2015 [3 favorites]


either C.A.S. or Westringia F is wrong. and the latter gives a reference.
posted by andrewcooke at 2:40 PM on December 27, 2015


Yeah, the pilots in question were Gulfstream (small private jet) pilots, not commercial aircraft pilots.

One of the ways that you prevent normalization of deviance is by having multiply redundant systems with completely separate stakeholders, which is generally what you get with large-scale commercial flight operations. Pilots are not just beholden to their employers, but also to the FAA in order to continue to be allowed to fly, and the airlines are obligated to the FAA and the airport management and to the employees' unions, and the airport is separately accountable to the FAA and each of the airlines and the unions and all of the above entities are now separately accountable to Homeland Security along with the NTSB. It's a fucking mess, but it is a mess that includes some accountability.

Any individual person who generally enjoys walking around in one piece should be concerned about private jets, private jet leasing and staffing companies, the little airports that primarily serve private flights, and the relationship between all of those and the FAA even if you are not the sort of person who ever expects to fly in one. There is some skeevy shit going on there that is never going to attract serious attention until someone deliberately or accidentally lands one in an elementary school - which has only not happened by luck at this point.
posted by Lyn Never at 2:40 PM on December 27, 2015 [11 favorites]


Kind of in the opposite situation in my current organization. The rules are so strict that it's close to impossible to get anything done, even stuff that really doesn't have any significant risk, and that utterly destroys our productivity.

It's the flip side of the coin, and both things often exist in the same company. Adding more and more complex process doesn't stop the normalisation of deviance-- it often encourages it. If you add more and more layers of process to even insignificant risk, then coworkers don't understand which steps are really important or which risks need to be addressed. Often the extra process is designed by someone who doesn't see the big picture and who doesn't realise that it is not possible for all steps to be followed.

I saw this a lot in IT security-- a consultant would design a security rule/system applied to-- say-- a retail environment. That system might have worked well in a non-retail setting, but was impossible to apply when faced with a real rush of clients. As a result, store coworkers got to be absolute geniuses at ignoring the IT sec rules and a genuinely large security hole opened. It would have been better to design a slightly less secure process which could be followed than to live with gaping security holes. But neither thing happens. (Often because the requirements are set poorly-- someone in a head office tells a security consultant from Accenture that they want "100% security!". Hilarity ensues.)
posted by frumiousb at 2:41 PM on December 27, 2015 [23 favorites]


This happens in caving as well. This accident report details a cascading series of decisions that led to a bolt climbing death. Joe was thought of as the best of the best & his death really stunned the caving community, but it quickly became obvious that he was using gear in an unintended manner.

We admonish new cavers to have a companion double-check their rigging & harnesses, and it's an admonition that experienced cavers are too quick to forget. Joe's death was a wake-up call to me that even after 10 years of vertical caving, it was still protocol to ask someone to give my harness & ascenders/descenders a good look-over. These days, rather than telling a new caver "have someone check your gear," I will ask a new caver to check mine, as a leadership example.
posted by Devils Rancher at 2:42 PM on December 27, 2015 [23 favorites]


One thing instructors and other pilots I fly with constantly remark on is the thoroughness of my checklist use. Best place to find a problem is on the ground, before the workload turns it into a disaster.

At the same time, even I find myself eliding over things sometimes. Not necessarily on the preflight cockpit checks, but sometimes I'll be doing the walkaround of the plane and suddenly not remember how I got from the front of the left wing to the back of the tail. It turns into repetition, and it's easy to miss things when you're going through the motions like that.

One of my very first solo flights, I did all of my preflight checks, took off... and then noticed that the passenger door was open. I hadn't banged on the door hard enough to check it and the latch wasn't fully engaged. Not a life-threatening disaster in and of itself, but my response to it was poor. What I should have done was let it be until I was at a safe altitude to deal with it - the door will stay mostly shut just due to the airflow over it. Instead, I panicked and was leaning across the cabin trying to get the door closed while climbing out of the airport at only a couple hundred feet off the ground. That really could have ended badly.

We have related problems at work, too. One way of qualifying new hardware is "by similarity"; that is, part B is similar enough to previously qualified part A that you don't have to go through and do all the environmental testing we normally require. Problem is, there are no good standards for defining how "similar" something has to be. There are guidelines, but they're not really followed. What really worries me and many of my coworkers, though, is that the contractors will often daisy-chain these quals together. So, part A gets tested, B gets qualed by similarity... and then they say, well, part C is very similar to part B! And part B was qualified, so part C should be, too.

Soon enough we have part W tracing back to the original qualification of part A, and at what point do they become dissimilar enough that we need to require retesting it? No one has an answer to that.
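A toy illustration of why the daisy-chaining worries us (made-up numbers, not any real qualification standard): each link only has to be "close enough" to its immediate predecessor, so the drift from the part that was actually tested compounds.

# Made-up allowance: each new part may deviate up to 5% from the part it is
# compared against, but is never re-compared to the tested part A.
per_link_allowance = 0.05
worst_case = 1.0
for i in range(1, 11):                      # parts B, C, D, ... K
    worst_case *= 1 + per_link_allowance
    print(f"part {chr(ord('A') + i)}: up to {worst_case - 1:.0%} "
          f"away from the qualified part A")

Every individual comparison passes, but nothing ever asks how far part K has wandered from the article that actually went through environmental testing.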
posted by backseatpilot at 2:59 PM on December 27, 2015 [21 favorites]


There are several factors which tend to sprout normalization of deviance: First and foremost is the attitude that rules are stupid and/or inefficient.
This is certainly a problem with safety procedures in construction. In heavy industrial construction we have reams and reams of procedures and associated checklists, and getting workers to take those seriously is a constant ongoing problem. Worse is that safety is seen as an unnecessary cost, so getting funding for actual changes is hard. Even simple shit like stocking multiple sizes of earplugs or keeping gloves in stock.
posted by Mitheral at 3:04 PM on December 27, 2015 [6 favorites]


If you think safety in industrial construction is a problem, wait 'til you see how we do things in residential construction! It's basically the Wild West. There are a lot of things I like about carpentry, but the (lack of) safety culture is truly frightening.
posted by Anticipation Of A New Lover's Arrival, The at 3:10 PM on December 27, 2015 [9 favorites]


Like, as far as I know I am the only guy at my company who even owns earmuffs or a respirator.
posted by Anticipation Of A New Lover's Arrival, The at 3:11 PM on December 27, 2015 [6 favorites]


I saw a bunch of signage and asked "Do we need PPE to go see this scale?" and he just blew it off, naw, we're only going to be there a few minutes. Under an automatically controlled dump chute.

That is how people end up on a powerpoint slide in the yearly "safety lessons learned" presentation, along with the guy who backed a truck off of a ledge and the person who caused an oil spill in a stream.
posted by Dip Flash at 3:57 PM on December 27, 2015 [2 favorites]


wait 'til you see how we do things in residential construction!

One of our customers -- one of those hyper-vigilant safety conscious facilities -- let out a bid a few years ago for construction of a new unit of their plant. When the contractor, after winning the bid, got the safety protocols he said "no way." The plant people said "way." The way they got around this impasse is that the plant sold the entire construction site to the contractor for one dollar, and after the contractor had done their thing they sold it back for one dollar more than the bid amount. So while they were violating all those safety protocols it wasn't actually a $large_international_company site.
posted by Bringer Tom at 4:34 PM on December 27, 2015 [7 favorites]


It's much easier in a critique to avoid blame and call it a process problem. We'll just change the process to make that impossible!

And this is how, at a previous workplace, errors would get through (usually when a superior circumvented or skipped standard checks on a story or layout and added things in themselves). At the very least, this cost us a little money every month in strip-ins, and at worst, it once led to an entire team driving to a printing press in another state to pull an all-nighter slapping stickers on 50,000 copies of a publication by hand. Luckily these weren't life-or-death matters.

In cases like this, the person in charge of the checklist would get blamed for resulting errors or delays by the person who had circumvented it, and the same old discussion would ensue about revisiting the process, as if some unknown factor were in play or the person in charge of the checklist was just a bad person who needed to express shame and penitence before the scheduling gods. Unrealistic schedules would be created and approved over the objections of the staff (not only would the schedule diminish the quality of the product, it would also make it nearly impossible to have a life outside the office). Then the person in charge of the checklist would be blamed for putting the team behind schedule by insisting that checks be performed on everything before it moved on (because the errors had been blamed on them before). The rationale for blaming errors on them was usually either A. they were bad at catching errors and slow to boot or B. allegedly checking everything didn't leave enough time in the process for any major changes to be vetted, and it was their fault that the process wasn't accommodating to last-minute changes. Never mind that the person at the top routinely ignored standard checklists, only to check things after we'd already shipped them...

In retrospect, every smaller deadline should have been padded even more by the person in charge of the checklists, but they were pretty padded, and at a certain point in an increasingly unforgiving schedule, you can't pad anymore. There aren't enough checklists and padding in the world to make up for people who want to play cowboy with last-minute changes to the product you're shipping, who think the QA process is just a bottleneck, and who don't respect their staff or their warnings, much less the process that was put in place.

Errors are inevitable, and they aren't a moral failing, but rather part of any process—which is exactly why checklists matter.
posted by limeonaire at 4:51 PM on December 27, 2015 [10 favorites]


This whole thread reminds me of when I got a new riding lawn mower more than a decade ago. I mentioned it to a friend, who immediately explained how to disconnect the feature that a) required there to be someone in the seat for the mower to move and b) made it so that the mower blades disengaged when reversing. This way, if you caught a stick or something, you could just lean down or hop off and pull it out.

I did not disconnect the safety feature.

I was really struck by the complete failure to use the checklists. You think of pilots as the quintessential checklist-users, and even with all the flaws inherent in checklists, such as answering becoming automatic without necessarily confirming the thing asked about, at least they're doing the darn things. Glad to be reassured up-thread that this is not likely to be a problem in commercial aviation.
posted by not that girl at 4:51 PM on December 27, 2015 [3 favorites]


At the very least, this cost us a little money every month in strip-ins, and at worst, it once led to an entire team driving to a printing press in another state to pull an all-nighter slapping stickers on 50,000 copies of a publication by hand. Luckily these weren't life-or-death matters.

This happened to me in a job as well! Though in our case, we ended up rubber-stamping a vital missing bit of information onto over 20,000 items to mail.

As we were doing it, we commented on how surprising it was that a mistake that had never been made in nearly 20 years of bimonthly publication had slipped through all the edits and proof-reads and printing (we printed in-house, so if the person printing the back cover had spotted it, a new one could have been gotten to them in half an hour). And we reflected on how glad we were not to be in a profession where a similar small mistake slipping past multiple checkpoints could kill someone.
posted by not that girl at 4:55 PM on December 27, 2015 [2 favorites]


I worked in a marketing firm where we printed thousands of brochures with PUBIC instead of PUBLIC. Somehow it had gotten through the original graphic artist, a proof reader, the art director and the printer. The client was the one who caught it, and they were all destroyed and reprinted. We added another proofreader after that and every person had to physically sign a piece of paper saying they had approved it. I don't think anything major happened afterwards.
posted by desjardins at 5:50 PM on December 27, 2015 [2 favorites]


Humans suck at following checklists. Checklists like this should be seen as a temporary solution while the system is redesigned to prevent it from entering a failure mode like this. There is no reason the plane can't detect the failure to disengage the gust lock and auto-abort the takeoff.

This. It's all fly by wire. Why is there not some master switch that's like "On, Ignition" like a car? Looking over the checklist, switching the master switch to on would perform everything up to step 8 automatically. You switch it to ignition and it fires up each engine in sequence, checks the hydraulic pressure, checks or switches everything on the after start and lights up a green LED. If not, shut down the engines and display a red LED.

How is this not like UX 101?
posted by Talez at 5:52 PM on December 27, 2015 [2 favorites]


It is UX 101 but either no one knows what UX is or they're actively hostile to it.
posted by bleep at 5:57 PM on December 27, 2015 [2 favorites]


worked in a marketing firm where we printed thousands of brochures with PUBIC instead of PUBLIC.

A local high school did this with planners they passed out to all students on the first day of school. Nobody spotted it on the cover until the students did. They re-collected them before students were allowed to leave. We had a high-schooler living with us at the time who was mightily amused by it.
posted by not that girl at 6:08 PM on December 27, 2015 [2 favorites]


This was also considered to be a factor in the crashes of the space shuttles Challenger and Columbia

Maybe it's the wrong place to complain, but the vehicles in those disasters (as well as the Hindenburg and TWA-800) did not crash, they exploded. "Crash" requires hitting something: a collision. I realize the media is really sloppy about this usage, but you don't have to be. (Also, yes Columbia didn't exactly explode, it disintegrated; but it wasn't because it hit anything.) Anyway, thanks for the interesting post.
posted by Rash at 6:41 PM on December 27, 2015


Automation and checklists are both parts of most large aircraft SOPs. See p.21 of this PDF for an example: there are 81 actions on the ENGINE START checklist for a Boeing 757. 13 of those are considered so critical that all operators use a manual checklist as validation that they have been performed correctly. There are some operators who allow the other 68 to be checked by computer and failures communicated to the crew by a specialist module. Some operators choose to do the whole thing manually.

Small plane checklists include things like "visually inspect engine bay for foreign objects" by the way, which are hard to automate.
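A toy sketch of how that manual/automated split might be represented (the items and flags here are invented for illustration, not taken from any real 757 checklist):

# Invented checklist items: "critical" ones always get a manual crew callout;
# the rest may be verified by a monitoring module, per operator policy.
CHECKLIST = [
    ("fuel pumps configured", False),
    ("start valve closed after start", False),
    ("oil pressure rising", True),
    ("hydraulic pressure normal", False),
    ("flight controls free and correct", True),
]

manual = [item for item, critical in CHECKLIST if critical]
automated = [item for item, critical in CHECKLIST if not critical]
print("crew must call out:", manual)
print("computer may verify:", automated)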
posted by cromagnon at 6:42 PM on December 27, 2015 [2 favorites]


This. It's all fly by wire.

The aircraft in question is not fly by wire. All the controls are hydraulic.

The designers did their best -- the gust lock actually physically interferes with throttling up the engines. That actually seems like brilliant UX to me. The pilot tried to override this safety measure by shutting down ALL THE AIRCRAFT HYDRAULICS, which is a control you've got to have for emergencies, but is obviously not the right thing to do in this situation.

Even current aircraft don't just have a big TAKE OFF button that an idiot can push. That's due to a lot of regulatory, economic, and safety reasons, not because they don't care about UX. If we made airplanes like we make cars (and in the same quantity), they probably would have that button.
posted by miyabo at 6:43 PM on December 27, 2015 [7 favorites]


checks or switches everything on the after start and lights up a green LED. If not, shut down the engines and display a red LED.

It's not that simple. That would, of course, be nice to have as a backup indicator, but what about when it fails and shuts down the engine mid-takeoff for no reason? Often, over-reliance on automatic features causes more problems when they fail to work properly than just checking the things directly. GA pilots still (I think) check the fuel in the tank with a stick, so they know the fuel gauge is not broken. The more you turn it into an automatic black box, the more people get lazy and start relying on that black box to save them.
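The dipstick habit is really an independent cross-check, something like this sketch (hypothetical numbers and names):

# Hypothetical cross-check: trust neither the gauge nor the dipstick alone;
# flag any disagreement beyond a tolerance for investigation on the ground.
def fuel_cross_check(gauge_litres: float, dipstick_litres: float,
                     tolerance: float = 0.10) -> str:
    if abs(gauge_litres - dipstick_litres) > tolerance * dipstick_litres:
        return "DISAGREEMENT: investigate before flight"
    return "gauge and dipstick agree"

print(fuel_cross_check(180.0, 120.0))  # broken gauge caught on the ground
print(fuel_cross_check(118.0, 120.0))  # within tolerance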
posted by ctmf at 6:45 PM on December 27, 2015 [16 favorites]


"Crash" requires hitting something: a collision. I realize the media is really sloppy about this usage, but you don't have to be.

Yep, my fault, too much reading about the plane crash.
posted by desjardins at 6:50 PM on December 27, 2015


brochures with PUBIC instead of PUBLIC. Somehow it had gotten through the original graphic artist, a proof reader, the art director and the printer.

I know a DISRICT ATTORNEY that would like to have a word with me about 1500 T-shirts. Client-provided finished art file, middle-manned by a broker who carried it over, camera shot by my staff artist, screens burned in house, job PRESS CHECKED in person by broker, then press operator, assistant & shirt stacker were nearly done printing when another client happened by & spotted it at first glance. "You know you've got a mis-spelled word there?"

FUUUUUUUUU...
posted by Devils Rancher at 6:54 PM on December 27, 2015 [7 favorites]


Not only that, if some of those checklist items aren't already monitored (or are monitored in the wrong way) then you are adding sensor(s) and a control structure to monitor those sensors. Which costs a lot of money (especially in avionics). And it sets up something else to fail. Failure of which may stop your plane from doing its thing.

Worse, the sensor may fail in such a way that it thinks everything is ok when it's not. But the pilots aren't used to that sort of failure mode and do the wrong thing because they trust the sensor.

There are costs in both money and safety in automating systems and you have to trade that off against the benefits of automation.
posted by Mitheral at 6:57 PM on December 27, 2015 [3 favorites]


I guess the car analogy is that someone tried to drive off with the parking brake on. Stupid thing to do, but very easy if you're not paying attention. Is it bad car UX that every car has a parking brake that's totally disconnected from the car computer, except for a little warning light? Of course not, there are all sorts of good reasons you wouldn't want the computer to automatically disengage the parking brake.

The difference is that every little mistake in an airplane can kill you, since you're going 8 times faster and can't just pull over if there's a problem.
posted by miyabo at 7:06 PM on December 27, 2015 [3 favorites]


Kind of in the opposite situation in my current organization. The rules are so strict that it's close to impossible to get anything done, even stuff that really doesn't have any significant risk, and that utterly destroys our productivity.

It's the flip side of the coin, and both things often exist in the same company. Adding more and more complex process doesn't stop the normalisation of deviance-- it often encourages it. If you add more and more layers of process to even insignificant risk, then coworkers don't understand which steps are really important or which risks need to be addressed. Often the extra process is designed by someone who doesn't see the big picture and who doesn't realise that it is not possible for all steps to be followed.


Yes to both of these. In my line of work, regulatory box-ticking makes it difficult to get a lot of things done, but the reason it's like this is because there have been enough bad actors in the past who have made it necessary. So we end up with companies slapping new process on top of procedure on top of process until even the most thorough and thoughtful employees are completely confused. I'm pushing against this every day in my job - trying both to make things more efficient and to have better processes in place to protect the company and the customer from risk.

Every time I mention something that I'm currently doing to help mitigate the hole caused by the lack of a badly needed technology fix - something which would, if used, probably make everything easier for everyone - I'm told: great idea! Write up a procedure for it! That's kind of the answer to everything. The obvious solution is a specific, basic technology program we don't have. So we write new procedures instead of getting what we need, when one of the biggest problems is that we have so many new procedures, and things change so often, that most people don't really follow anything until they have to. That's partly because of human nature, and partly because instead of putting a bandaid on an issue, someone needs to take a little time to find a common solution that works well for everyone, can last for the medium to long term, and is easily implementable. It takes some time upfront, but it pays off in the longer term.

The number one issue I always face is funding. Companies often seem to be fine with taking the risks until they get their hand slapped and then putting in place the necessary tools to deal with it properly. It could be technology or it could be hiring enough people to adequately manage everything that needs to be done. Both come back to money and companies not wanting to spend it. And it seems that from a purely financial perspective, that's probably the wisest course to take. Because we see across industries that the fines that are levied on companies by regulators/government are more or less negligible compared to the profits they can make by just taking the risks. From a human perspective, of course, it's terrible, because real lives are affected. I don't know the solution to this.
posted by triggerfinger at 7:14 PM on December 27, 2015 [5 favorites]



Worse the sensor may fail in such a way that it thinks everything is ok and it's not. But the pilots aren't used to that sort of failature mode and do the wrong thing because they trust the sensor.


Isn't this essentially what happened with Air France 447? My basic understanding is that the airspeed indicator readings were wrong, because the pitot tubes had ice crystals in them, and thus the pilots made errors.
posted by desjardins at 7:17 PM on December 27, 2015


More complicated. The pitot tubes had ice in them, and the flight computer figured this out and dropped into a "safe mode" where it wasn't controlling as much stuff. Pilots completely flipped out and gave the computer bad commands, which would have been ignored in normal mode, but weren't checked at all in "safe mode," so the plane crashed. Everything would have been fine if the pilots had just taken a nap instead of having a panic attack -- the computer was still totally capable of flying in a straight line forever.
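Very roughly, the difference between the two modes looks like this (a simplified sketch with made-up numbers, not the real Airbus control laws):

# Simplified sketch: "normal law" clamps the commanded angle of attack to a
# protected envelope; the degraded "alternate law" largely passes pilot
# input straight through.
MAX_PROTECTED_AOA = 15.0  # degrees, illustrative only

def commanded_aoa(pilot_demand_deg: float, law: str) -> float:
    if law == "normal":
        return min(pilot_demand_deg, MAX_PROTECTED_AOA)  # envelope protection
    return pilot_demand_deg                              # protections relaxed

print(commanded_aoa(25.0, "normal"))     # clamped to 15.0
print(commanded_aoa(25.0, "alternate"))  # 25.0 -- deep-stall territory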
posted by miyabo at 7:24 PM on December 27, 2015


Systems should be designed to trap errors and users should know this (about themselves).
posted by notyou at 8:09 PM on December 27, 2015


Why is there not some master switch that's like "On, Ignition" like a car?

The flippant answer is that aircraft are orders of magnitude more complicated than your car.

The slightly longer answer is that aircraft are just mind-bogglingly complicated. That checklist you linked to is one of about half a dozen ways to start the airplane up, depending on what equipment's available and where you are. So a simple "turn ignition switch to on" will never be able to capture all of the possible functionality that you need to build in to the system.

I mean, look at something as basic as getting the engines running. You've linked to a simple checklist that assumes everything in the airplane is working properly. If the batteries are dead, there's another checklist to start the engines off ground power. There are checklists to start the left engine first if the right engine's starter motor is burned out. If the APU is out of service, there are checklists to supply air from a ground cart. There are maintenance checklists to start one or both engines that may not even require removing the rudder lock. Sometimes you want some functionality and not others, and sometimes you're allowed to override or ignore failed systems and take off anyway. And most of these corner cases come down to human judgment, which gets thrown out the window if you program it into a master computer that may or may not start the engines when you turn the key.

And really, this is what it boils down to. A simple "green light means go" indication simply isn't enough for good decision making. And you can learn quite a bit from failures in the process and fix them quickly. If the APU doesn't start while you're doing your checklist, what does that mean? Is the battery dead? Is it getting fuel? What do the dials say? Most importantly, what are the implications for the next steps in the checklist? Can I bypass the APU and start the engines anyway, which is a totally legitimate and legal way to get the plane going UNLESS maybe the fuel's contaminated and is going to cause the engines to flame out on takeoff? You can do these checks by having the controls available.

I have a Gulfstream POH on my computer at work, I'll have to go look at it again when I'm back in the office. It's about a thousand pages long.
posted by backseatpilot at 8:18 PM on December 27, 2015 [15 favorites]


User experience isn't just about the controls having other controls turn on and off. It's not "about" the controls. It's about understanding the humans using the system and what they need to make the system go without killing anyone. It doesn't mean making an airplane as easy to use as a car. It does mean that if pilots think they can skip a bunch of procedures, those procedures aren't working for them. The next step is to figure out why that is - is it those assholes who are the problem (unlikely, all humans are assholes) or is it what we're asking them to do (more likely but not a sure thing, let's find out!)
posted by bleep at 8:33 PM on December 27, 2015 [4 favorites]


Although it isn't perfectly suited to individual fuckups, I highly recommend Jens Rasmussen's Risk Management framework to anyone interested in how such things happen in organizations. He conceptualizes 'normalized deviance' differently, as a gradient of pressure from time/cost and effort/efficacy encouraging drift towards safety boundaries. He combines that with a communication framework and a bunch of analysis tools. It's well-regarded in the profession and used by the US Chemical Safety Board for disaster investigation and policy analysis (e.g. Deepwater Horizon)
posted by anthill at 8:41 PM on December 27, 2015 [7 favorites]


> If there's a rule to prevent a 1 in 100 chance of vehicle loss, and you ignore it 20 times and don't lose a vehicle, it does *NOT* mean that ignoring the rule is safe. It means you've been lucky.

Or, as I like to put it: Just because you were successful doesn't mean you made the right decision; just because you failed doesn't mean you made the wrong one. Americans, deep in their bones, do not believe this.
posted by benito.strauss at 9:40 PM on December 27, 2015 [40 favorites]


The flippant answer is that aircraft are orders of magnitude more complicated than your car.

As an aside, both aircraft UX and medical device UX face incredibly daunting challenges derived from the interaction of a highly-regulated hardware environment and the extremely high cost of new or updated hardware. This interaction means that design-to-delivery cycles can be extremely long, and in the case of aircraft, a new interaction device may be installed into a very old aircraft and be used alongside interaction devices designed well before the advent of screen-based display systems (think an LCD display screen installed in a cockpit next to a bank of mechanical toggle switches). Medical gear has shorter active use, iirc, but my understanding is that these factors mean it is essentially impossible to provide low-cognitive-load work environments in these areas via UX.

That's not to say UX has no place in this, it surely does and can very likely be used to reduce cognitive load in ways that are safe and beneficial. But there's a high hill to climb there.
posted by mwhybark at 9:41 PM on December 27, 2015


Good UX takes into account the entire context. Low cognitive load isn't the goal. Manageable/ expected cognitive load is the goal. There is no way to achieve that without user-centered design activities.
posted by bleep at 10:13 PM on December 27, 2015 [2 favorites]


Normalization of deviance with regards to checklists was at the root of the problem, but when it came down to it, the issue was face-saving. The pilot realized that the gust lock was still in place and had plenty of time to abort the takeoff and regroup. But if he did that, he would have to explain to the million-dollar suit riding in back what went wrong and he would have to explain to the control tower on an open channel what happened so that every other pilot in the area would also know -- or else lie in the presence of his co-pilot.

So instead he tried to erase his mistake while barreling down the runway at 128 knots, which is like trying to retrieve that dropped lighted cigarette while navigating the LA freeways at rush hour. Stubbornness and face-saving get a lot of pilots in trouble.
posted by JackFlash at 10:46 PM on December 27, 2015 [17 favorites]


The problem here seems to have been incredible stupidity rather than deviance. Yes, they failed to use the checklists - but then they saw what was wrong anyway, in time to stop, and carried on, simply doing further incredibly stupid things. You have to assume that if they had run through the checklists it might well have done no good; that they would have recited them without paying attention or ignored the problem.

Honestly I wonder whether both pilots were drunk.
posted by Segundus at 12:29 AM on December 28, 2015


In 2006 XV230, a Nimrod patrol aircraft of the Royal Air Force, caught fire in flight and crashed in Afghanistan, killing the entire crew. The subsequent independent review by experienced aviation lawyer Charles Haddon-Cave QC (now Mr Justice Haddon-Cave) referred extensively to the Columbia accident report, and Haddon-Cave specifically identified the "uncanny and worrying parallels" between the losses of Columbia (and Challenger before her) and of XV230:

(1) The ‘can do’ attitude and ‘perfect place’ culture.
(2) Torrent of changes and organisational turmoil.
(3) Imposition of ‘business’ principles.
(4) Cuts in resources and manpower.
(5) Dangers of outsourcing to contractors.
(6) Dilution of risk management processes.
(7) Dysfunctional databases.
(8) ‘PowerPoint engineering’.
(9) Uncertainties as to Out-of-Service date.
(10) ‘Normalisation of deviance’.
(11) ‘Success-engendered optimism’.
(12) ‘The few, the tired’.

In a nutshell, the crash was almost certainly caused by a fuel leak. The Nimrod had been in service for decades and was based on the Comet, a first-generation airliner designed in the late 1940s. Fuel leaks were common (see Chapters 7 and 8 of the Report), but because they had not caused any crashes - although there had been a very near miss - it was assumed that elderly aircraft just had fuel leaks and it was something to be lived with. Deviance was once again normalised, and 14 service personnel died as a result.

The whole Haddon-Cave Report is well worth reading for anyone interested in safety culture (and how it gets broken), but Chapter 17, where the above quote is from, is particularly relevant to this discussion.
posted by Major Clanger at 3:07 AM on December 28, 2015 [10 favorites]


As an occasional air passenger who did elementary flying training many years ago, the Air France crash alarms me because of how horribly apparent it is that the pilots broke the first rule of flight safety, as hammered into me: whatever else you do, KEEP ON FLYING THE AIRCRAFT. Even as baby pilots doing introductory instrument flying we were firmly reminded to be mindful of the risk of partial instrument failure that resulted in apparently contradictory or nonsensical flight data.
posted by Major Clanger at 3:20 AM on December 28, 2015


(Also, yes Columbia didn't exactly explode, it disintegrated; but it wasn't because it hit anything.)

It sure did hit something -- it hit the atmosphere. At walking speed, that's a nothing burger. At orbital speed? Unless you're equipped to handle it, you die very quickly as friction rapidly turns you to gas and rips you apart.

It's like water. Jump in from the side of the pool, no big deal. Jump off a 10m board, it's a mighty smack. Jump in from 1km high and you almost certainly die.

Indeed, what killed Challenger was the same thing. When the burning plume from the failed O-Ring impinged on a mounting strut, it burned through and failed. This caused the SRB to swing out, which turned the whole stack. Challenger could handle the airstream drag nose on, but sideways? Nope, and it shredded the orbiter in a very short time. SpaceX had the same issue with the Falcon 9 CRS-7 failure -- when that LOX tank collapsed, it bent the rocket, which turned and shredded.

Rockets have to be very strong and very light. Thus, they're strong in the directions they need to be and that's it. If they depart from controlled flight and find a new direction to fly, they fall apart almost instantly.

Rocket science is hard, but the atmosphere at high velocity is harder.
posted by eriko at 7:04 AM on December 28, 2015 [2 favorites]
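
For a rough sense of scale on the walking-speed versus orbital-speed comparison: kinetic energy per kilogram grows with the square of velocity. The speeds below are illustrative round numbers, not vehicle-specific figures:

```python
# Kinetic energy per kilogram of vehicle: 0.5 * v^2, in joules.
# Illustrative speeds: ~1.4 m/s for walking, ~7,800 m/s for low Earth orbit.
def ke_per_kg(v_m_per_s: float) -> float:
    return 0.5 * v_m_per_s ** 2

print(f"walking speed: {ke_per_kg(1.4):.1f} J/kg")    # about 1 joule per kilogram
print(f"orbital speed: {ke_per_kg(7800):.2e} J/kg")   # about 3e7 J/kg, tens of megajoules
```

All of that energy has to be shed into the airstream on the way down, which is why a vehicle that departs controlled flight at those speeds comes apart so quickly.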


Or, as I like to put it: Just because you were successful doesn't mean you made the right decision; just because you failed doesn't mean you made the wrong one. Americans, deep in their bones, do not believe this.

When I bring up my mother's food handling practices with her she replies "Well, I haven't killed you yet".
posted by srboisvert at 7:59 AM on December 28, 2015 [1 favorite]


humanfont: "Humans suck at following checklists. Checklists like this should be seen as a temporary solution while the system is redesigned to prevent it from entering a failure mode like this. There is no reason the plane can't detect the failure to disengage the gust lever and auto abort the takeoff."

Yes, but pilots should be the exception here.

The awesome thing about modern flight is that there are no special or extraordinary circumstances during flight. There is a plan and procedure for everything, and the takeoff/landing checklist is remarkably simple, all things considered. It's not difficult to follow, and accounts for a remarkably wide range of conditions.

Flights are routine and well-understood to an extent that is basically unparalleled by anything else in human technology. There's literally no room for anybody to say "well this is different so we can't follow the established procedures," like I see in other industries all the time (particularly tech). [This happens a lot in medicine too. It should happen less, but reasons.]

The importance and value of checklists are so well established, and pilots are constantly reminded of this fact. Every pilot I've known has been fairly religious about following the checklist (and almost everybody has a story about a stupid mistake they made once).

As far as your second point goes --- I honestly don't know if the checklist can be made any simpler. Introducing another level of automation could just as easily introduce additional points of failure. I can envision safety systems that would make this error more obvious, but if the pilots are already disobeying procedures, I'm not sure how much benefit this would actually bring. Air France 447 was a fairly notorious example of pilots inexplicably ignoring obvious warnings in the cockpit (which also happened in this case).

Like AF447, this accident looks like it was a spectacular failure of Crew Resource Management. The pilots did not communicate what was wrong, and independently took several actions (all contrary to procedure) that made the situation significantly worse.

If there are lessons to be learned here, they're:
* Follow the checklists. Know the procedures. Never deviate from those procedures. Don't be clever.
* Always be talking. As soon as you realize that something is wrong, start talking constantly. Verbalize every observation that you make, and every action that you take.
* Listen. If the other pilot is saying something that is contradictory to your understanding of the situation, figure out why.
posted by schmod at 8:16 AM on December 28, 2015 [2 favorites]
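
To make the "another level of automation" trade-off concrete, here is a minimal sketch of the kind of takeoff-configuration interlock humanfont describes. Every sensor name, threshold, and rule here is hypothetical and for illustration only; it is not a description of any real aircraft's system:

```python
from dataclasses import dataclass

@dataclass
class TakeoffState:
    gust_lock_engaged: bool
    elevator_free_travel_deg: float  # measured during the pre-takeoff control sweep
    airspeed_kt: float

# Hypothetical thresholds, chosen only to make the example concrete.
MIN_ELEVATOR_TRAVEL_DEG = 20.0
ABORT_DECISION_SPEED_KT = 100.0

def takeoff_config_ok(state: TakeoffState) -> bool:
    """True only if the (hypothetical) takeoff-configuration checks pass."""
    return (not state.gust_lock_engaged
            and state.elevator_free_travel_deg >= MIN_ELEVATOR_TRAVEL_DEG)

def should_auto_abort(state: TakeoffState) -> bool:
    """Abort automatically only below the decision speed; above it, stopping has its own risks."""
    return (not takeoff_config_ok(state)
            and state.airspeed_kt < ABORT_DECISION_SPEED_KT)
```

Even in this toy version the hard part isn't the check, it's the abort policy: past some speed, rejecting the takeoff is riskier than continuing, and that is exactly the sort of judgment the automation's designers would have to anticipate in advance.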


whatever else you do, KEEP ON FLYING THE AIRCRAFT.

One takeaway from AF447 is that the copilot who panicked and stalled the plane had 3,000 hours in the cockpit, but very little of that was spent flying the aircraft; most of it was spent watching the autopilot fly the aircraft. Contrast with Sullenberger, who put US1549 down safely in a river because he was able to fly the plane by the seat of his pants even without engines.

So there is a constant tension between the favorability of automation and human skill. Automation is demonstrably better than humans at a lot of things; it doesn't get bored or tired or overwhelmed. Those checklists that come in sixty different flavors? Computers are great at sorting out that kind of thing.

But automation can only deal with what its designers have anticipated, and humans are much better at pushing the boundaries when a pitot tube fails or the engines all go out at the same time -- if they have the training and experience. But the way to give humans that experience is to let them do all that stuff that's easier for automation, so when it turns pear-shaped the human will have some idea what to try.

A long time ago I read a story about nuclear submarines, which don't naturally keep a stable depth so their depth must constantly be adjusted with wing-like planes. There is of course an automatic system to keep the sub at depth without human intervention, but it's never used so that the human operators will have the experience operating them when something goes wrong.

This is one reason I don't expect self-driving cars to be a tremendous boon to safety. Sure there are a lot of marginal drivers who won't be doing marginal things any more, but what do those drivers do when the car can't manage it?
posted by Bringer Tom at 8:25 AM on December 28, 2015 [11 favorites]


If there's a rule to prevent a 1 in 100 chance of vehicle loss, and you ignore it 20 times and don't lose a vehicle, it does *NOT* mean that ignoring the rule is safe. It means you've been lucky.

Or, as I like to put it: Just because you were successful doesn't mean you made the right decision; just because you failed doesn't mean you made the wrong one. Americans, deep in their bones, do not believe this.


so they're basically teenagers. Because, to be accurate, you'd have to ignore a one-in-100 rule about 69 times without losing your vehicle before a loss becomes more likely than not -- this apparently is how a teenage brain rationalizes danger. It's not that they don't accept a certain danger -- they just calculate the odds and then "go for it". That's certainly how I used to drive. What are the chances that somebody else is going to run that four-way stop at the exact same time as me?

Unfortunately, do it enough and those odds will turn violently against you.
posted by philip-random at 10:29 AM on December 28, 2015
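
For what it's worth, that break-even point works out as follows, again assuming independent 1-in-100 trials:

```python
import math

# Smallest number of ignored 1-in-100 rules at which at least one
# loss becomes more likely than not (independent trials).
p = 0.01
n_break_even = math.ceil(math.log(0.5) / math.log(1 - p))
print(n_break_even)  # 69 -- survive that many and you have beaten a coin flip
```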


Using past results as a basis for later decisions is intellectually flawed but practically a biological imperative.
posted by rmd1023 at 10:51 AM on December 28, 2015


Feynman> "The argument that the same risk was flown before without failure is often accepted as an argument for the safety of accepting it again."

Good ol' Survivorship Bias rears its ugly head again. One does not learn what makes a success by looking at what successes have in common, because for all anyone knows, they were all lucky. One learns what makes a success by looking at how they differ from failures. The problem of survivorship bias is that often the successes, the survivors of whatever trial or peril, are the only available dataset, because data perishes with failures.
posted by Sunburnt at 2:09 PM on December 28, 2015 [3 favorites]
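
A toy simulation of that last point, with made-up numbers: if the only programs you can study are the ones that never failed, the data cannot show you the risk every one of them was carrying:

```python
import random

random.seed(0)
true_risk = 0.02   # hypothetical 1-in-50 per-mission failure risk, identical for all programs
programs = 1000
missions = 100

# A program makes it into the "survivors" dataset only if it never fails.
survivors = sum(
    all(random.random() > true_risk for _ in range(missions))
    for _ in range(programs)
)
print(f"{survivors} of {programs} programs survive all {missions} missions")
print("failure rate observed among survivors: 0% -- the shared 2% risk is invisible")
```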


This is a really good related link that's sort of orthogonal to this: The Problem With Technological Ignorance. The following passage really stuck out to me.
Recognizing the overwhelming complexity of our technologies makes it easier to see that bugs and glitches are essentially an inevitability. When anything becomes as complicated as Dijkstra describes, unanticipated consequences will arise. Baked into a proper recognition of the phenomenal complexity around us must also then be a sense of humility—our limits in the face of the technologies we have built—something that we need to acknowledge more and more.
posted by limeonaire at 2:29 PM on December 28, 2015 [1 favorite]


it hit the atmosphere. At walking speed, that's a nothing burger. At orbital speed?

The Columbia was nowhere near orbital speed when it broke up. Supersonic, yes; flying about Mach 23 according to this timeline - that was about 40 minutes after its de-orbit burn.
posted by Rash at 7:30 PM on December 28, 2015


When I bring up my mother's food handling practices with her she replies "Well, I haven't killed you yet".

"You're trying."
posted by LogicalDash at 8:32 PM on December 28, 2015 [3 favorites]


Sorry, but Columbia was very near orbital speed when it hit the atmosphere; Mach 23 is over 17,500 miles per hour. De-orbit burns do not actually slow the craft down very much; they basically reduce the orbital perigee to ground level, but the craft would still be in orbit if the Earth was a little smaller. Landing requires the atmosphere to do the rest of the slowing down.
posted by Bringer Tom at 6:41 AM on December 29, 2015 [2 favorites]


part B is similar enough to previously qualified part A that you don't have to go through and do all the environmental testing we normally require. Problem is, there are no good standards for defining how "similar" something has to be. There are guidelines, but they're not really followed.


I watched either Air Emergencies or Seconds from Disaster and it had a case where a mechanic changing a cockpit window used similar but incorrect screws, and the window blew out and almost took the captain with it! Luckily, the flight attendants held on to him and he survived.
posted by LizBoBiz at 1:15 PM on December 29, 2015 [1 favorite]


This is closely related to the idea that in a community, some people are like a "missing stair" that everyone collectively learns to work around. The term comes from a mostly-sfw post on a sometimes nsfw blog, 'pervocracy.'
posted by rmd1023 at 7:28 AM on December 30, 2015 [1 favorite]


I'll think about this article now when I hear about a minor being tried as an adult. It seems to be the default these days for minors charged with serious crimes.
posted by shponglespore at 10:13 AM on December 30, 2015


I think it relates to the Overton Window in politics, too. Hasn't it been pointed out that Obama is to the right of Nixon on most issues?
posted by desjardins at 10:43 AM on December 30, 2015


Hasn't it been pointed out that Obama is to the right of Nixon on most issues?

Only by people who are either mistaken or willfully ignorant or intellectually dishonest. The liberal measures Nixon signed into law were pushed on him by Congress; he didn't initiate or advocate for them.
posted by asterix at 11:04 AM on December 30, 2015 [2 favorites]


This concept also explains a lot of the worst excesses of the British tabloid press (and I say this as a former journalist, albeit not at a national tabloid). Phone hacking, making up celebrity gossip, intruding on grief, you name it. If you're in an office full of people all doing this stuff as if it's normal, all of the following happens:

people within the organization become so much accustomed to a deviant behavior that they don’t consider it as deviant

Yup.

People grow more accustomed to the deviant behavior the more it occurs.

Yup.

To people outside of the organization, the activities seem deviant; however, people within the organization do not recognize the deviance because it is seen as a normal occurrence.

Yup.

In hindsight, people within the organization realize that their seemingly normal behavior was deviant.

May depend on the underlying morals of the individual and whether or not they end up in court.

Now I come to think of it, all of the above also applies to the MPs' expenses scandal.
posted by penguin pie at 4:18 PM on January 2, 2016


Is that verifiable?

The concerns about the Challenger O-rings were on the record long before the shuttle exploded. Roger Boisjoly, an engineer at Morton Thiokol, wrote a strongly worded memo about the problem six months before the launch, and he also tried to stop the launch from happening once he realized that the weather was too cold for the O-rings to function properly. He was ignored. Allan McDonald, an engineer responsible for signing off on the launch the night before, refused to do so because of concerns about the O-rings (here's a podcast interview with him). McDonald was overruled and removed from his job, and the launch went on. I'm sure there were many others who raised concerns and were overruled or ignored.

I think that's part of what made this a tragedy (rather than just a sad, terrible accident). An accident just happens; a tragedy could have been avoided -- was so nearly avoided -- would never have happened, but for the hubris of those making the decisions.
posted by ourobouros at 2:42 PM on January 3, 2016 [1 favorite]


just picked up the silo effect by gillian tett from the library and thought of this thread; tett offers an anthropological perspective, by way of pierre bourdieu, that i think is worth bearing in mind:
  • First, Bourdieu believed that human society creates certain patterns of thought and classification systems, which people absorb and use to arrange space, people, and ideas. Bourdieu liked to call the physical and social environment that people live in the "habitus," and he believed that the patterns in this habitus both reflect the mental maps or classification systems inside our heads and reinforce them.
  • Second, Bourdieu also believed that these patterns help to reproduce the status of the elite. Since this elite has an interest in preserving the status quo, it also has every incentive to reinforce cultural maps, rules, and taxonomies. Or, to put it another way, an elite stays in power over time not just by controlling resources, or what Bourdieu described as "economic capital" (money), but also by amassing "cultural capital" (symbols associated with power). When they amass this cultural capital, this helps to make the status of the elite seem natural and inevitable. The wealthy French pupils at Bourdieu's boarding school, for example, exuded a "natural" sense of authority and power by wrapping themselves in dozens of tiny, subtle cultural signals, which nonelite people such as Bourdieu lacked.
  • Third, Bourdieu did not believe that the elite—or anyone else—created these cultural and mental maps deliberately. Instead, they arose as much from semiconscious instinct as conscious design, operating at the "borders of conscious and unconscious thought." The habitus does not just reflect our social patterns, but it ingrains them too, making these seem natural and inevitable. The elite and nonelite are both creatures of their cultural environment.
  • Fourth, Bourdieu believed that what really matters in a society's mental map is not simply what is publicly and overtly stated, but what is not discussed. Social silences matter. The system ends up being propped up because it seems natural to leave certain topics ignored, since these issues have become labeled as dull, taboo, obvious, or impolite. In any society, Bourdieu argued, there are ideas that are freely debated, and there can be differences of views about this (or a clash between the orthodoxy and heterodoxy). But outside that space of acceptable debate (or the "doxa") there are many issues that are never discussed at all, not because of any clearly articulated plot, but because ignoring those issues seems normal. Or as Bourdieu said: "The most powerful forms of ideological effect are those which need no words, but merely a complicitous silence." The non-dancers in a village hall matter.
  • But a fifth key point that is implicit in Bourdieu's work is that people do not always have to be trapped in the mental maps that they inherit. We are not robots, blindly programmed to behave in certain ways. We can also have some choice about the patterns we use. How much choice humans have to reshape their cultural norms was—and is—an issue of hot dispute. When Bourdieu was first embarking on his academic career, Sartre, the French philosopher, declared that humans did have free will, and could develop their thoughts as they chose. Lévi-Strauss took another view: he thought that humans were doomed to be creatures of their environment, since they could not think out of their inherited cultural patterns.

    Bourdieu, however, rejected both of these ideas; or, more accurately, he steered a middle ground between these two extremes. He did not think that people are robots, programmed to obey cultural rules automatically. Indeed, he did not like the word "rules" at all, preferring to talk about cultural "habits." But he also believed these habits and the habitus shaped how people behave and think. Social maps are powerful. But they are not all-powerful. We are creatures of our physical and social environment. However, we need not be blind creatures. Occasionally, individuals can imagine a different way of organizing our world, particularly if they—like Bourdieu—have become an insider-outsider by jumping across boundaries.
posted by kliuless at 6:45 AM on January 4, 2016 [8 favorites]


ourobouros: one of the "when engineering goes bad" books that I've read (I want to say "Inviting Disaster", but I'm not sure) spends a fair bit of time talking about "strongly worded memo" as an institutional response and ways in which it fails.
posted by rmd1023 at 12:09 PM on January 4, 2016


Speaking of Normalization of Shady Bullshit...
posted by ctmf at 8:03 PM on January 4, 2016


'Normal Accidents'?
posted by kliuless at 8:08 PM on January 4, 2016


Oh! That might be it. I went on a binge of "engineering and humans are full of stupid failures" books a couple of years ago so they kind of blur together. :)
posted by rmd1023 at 8:40 PM on January 4, 2016


For those posting in this thread, you might be interested in the US CSB's latest: a press conference on repeated dangerous events at ExxonMobil. This follows up on their analysis of the Chevron fire where the CSB applied Rasmussen's Risk Management framework that I mentioned above.
posted by anthill at 9:33 AM on January 5, 2016 [1 favorite]


The CSB also has a Youtube channel for your safety-briefing needs.
posted by anthill at 9:45 AM on January 5, 2016 [1 favorite]

