The McNamara Fallacy
April 28, 2018 6:17 AM

Robert S. McNamara (previously, previouslier, previousliest, not previously) was known for his love of measuring progress using numbers - most notoriously, body counts in the Vietnam War. In 1972, Daniel Yankelovich outlined the four steps in the McNamara Fallacy. It has since been recognized as a problem in religion, medicine (PDF), economics, education, advertising, big data, business, and - back to our starting place - war.

The McNamara Fallacy has also been invoked to criticize foreign aid and support Duterte's killings.

The fallacy's steps:
  1. Measure whatever can be easily measured.
  2. Disregard that which can't be easily measured or give it an arbitrary quantitative value.
  3. Presume that what can't be measured easily really isn't important.
  4. Say that what can't be easily measured really doesn't exist.
posted by clawsoon (61 comments total) 77 users marked this as a favorite
 
As a dyed in the wool empiricist, I actually rather agree with the statement that that which cannot be measured does not exist.

However, there definitely is a problem, in assuming that what you can measure has anything to do with what you want to measure. In the titular example it is measuring body counts instead of measuring progress towards achieving political goals by killing people. Because you cannot measure progress towards achieving political goals by killing people.

And there is a very sensible place to apply the principle: if it cannot be measured, it does not exist.
posted by Zalzidrax at 6:38 AM on April 28, 2018 [4 favorites]


Nice to have a name for this; I think these days there's a society-wide bias toward quantitative measures at the expense of qualitative ones, and I think it causes a lot of harm because it's often the fuzzy, qualitative stuff that really matters. For instance, improving life expectancy (quantitative) through aggressive medical treatment isn't really a win if it comes at the cost of quality of life (qualitative). Increasing economic productivity (quantitative) isn't really a win if it comes at the cost of immiserating workers and ruining the environment (qualitative).

Outside of the natural sciences, quantitative metrics are usually just proxies for whatever underlying qualitative condition is really at issue. When we lose sight of that and start fixating on the proxy and gaming the metrics rather than keeping the real issue centered in our thoughts, bad things happen.
posted by Anticipation Of A New Lover's Arrival, The at 6:46 AM on April 28, 2018 [23 favorites]


Oddly I’m reminded of both Big O notation and Fibonacci estimation in Agile.
posted by Artw at 6:51 AM on April 28, 2018 [5 favorites]


As a dyed in the wool empiricist, I actually rather agree with the statement that that which cannot be measured does not exist.

that makes you an idealist, I think
posted by thelonius at 6:51 AM on April 28, 2018 [23 favorites]


Computers magnify this effect. Networked computers multiply the effect, and networked computers mediating social connections -- which are rarely quantified, and not well when they are -- make me feel sad.
posted by amtho at 7:02 AM on April 28, 2018 [12 favorites]


‘that which cannot be measured does not exist’ is akin to maintaining that that which is not already known doesn’t exist. There are plenty of things we just don’t know how to measure in a quantitative way, yet, just as there are phenomena we don’t have any way of explaining in a rational way, yet. Doesn’t mean they are not ultimately susceptible to explanation or measurement, but that the rules as they exist don’t capture them. Ignoring them is silly though. Claiming measurability as the sine qua non of existence is assuming that we already have all of the tools to do all of the measuring, which is something I am doubtful of.
posted by aesop at 7:18 AM on April 28, 2018 [39 favorites]


Four out of five dentists agree that the McNamara fallacy is a problem.
posted by XMLicious at 7:21 AM on April 28, 2018 [8 favorites]


There is a step zero vital to the application of this fallacy:
0. Redefine your unmeasurable quality (political victory) as a different measurable quantity (body count).
This is implicit in 1-4, but should be called out as the basis for the flawed method.
posted by hexatron at 7:21 AM on April 28, 2018 [30 favorites]


I actually rather agree with the statement that that which cannot be measured does not exist.

That's just a tautology then. "That which cannot be measured cannot be measured."

And the passive voice obscures a lot. You're really saying "That which I cannot currently figure out how to measure does not exist."

It seems like that kind of thinking plays right into this fallacy, encouraging people to pretend that what they can measure is what they want to know. (And that can so easily become "what can cheaply/easily be measured.")
posted by straight at 7:30 AM on April 28, 2018 [18 favorites]


As a dyed in the wool empiricist, I actually rather agree with the statement that that which cannot be measured does not exist.

As it happens, I am also a dyed-in-the-wool empiricist, but I'm inclined to agree with Yankelovich that this attitude tends toward suicide. To see why, it's essential to think seriously about the profound connection between measurements and models.

Almost nothing of interest in the sciences can be measured directly. We can't reach into space with a yardstick and measure the distance to another star, nor can we reach into the genome and spool out lengths of DNA to read off with our eyeballs. Measurements of all kinds are situated within frameworks that give interpretive meaning to the values that we can measure (so-called "operational measures"). Hence the interest astronomers have in "standard candles," objects whose absolute brightness is the same every time, which allows distance to be judged from their apparent brightness. The only way we can be confident that our standard candles are, well, standard is that we believe we understand the mechanism that gives rise to them. Sound measurement depends on the soundness of the model within which it is embedded.

The problems with the Vietnam body count are too many to enumerate, but its central problem was the model within which it was embedded: The belief that some number of casualties existed that would force the North Vietnamese to the negotiating table. McNamara was hardly alone in this belief: It was held even more firmly by General Westmoreland, who repeatedly affirmed its validity to the powers back in Washington.

Our models tell us what we are able to measure reliably, but an honest empiricist must be willing to admit that, beyond the limits of those models exist quantities that, although meaningful, are not yet readily quantifiable. To assert on airy epistemological grounds that everything that exists must be measurable is all well and good, but the reversal of that argument (that we may safely disregard quantities that we don't currently know how or even whether to measure) is dangerous, because most of our models are erroneous, and even our best models have limits.
posted by belarius at 7:33 AM on April 28, 2018 [46 favorites]
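
To make the standard-candle reasoning concrete: if you believe you know a source's absolute brightness (its luminosity), the inverse-square law turns measured apparent brightness into distance. A minimal sketch of that inversion; the luminosity and flux numbers below are invented for illustration, not astronomical data.

```python
import math

def distance_from_standard_candle(luminosity_watts, observed_flux_w_per_m2):
    """Invert the inverse-square law F = L / (4 * pi * d^2) to get distance d.

    The measurement is only meaningful inside the model: it assumes the
    source really is a standard candle with known luminosity L.
    """
    return math.sqrt(luminosity_watts / (4 * math.pi * observed_flux_w_per_m2))

# Invented numbers: a source with roughly the Sun's luminosity (~3.8e26 W)
# observed at a flux of 1e-10 W/m^2.
d = distance_from_standard_candle(3.8e26, 1e-10)
print(f"inferred distance: {d:.2e} m")  # ~5.5e17 m, on the order of 60 light-years
```

If the candle turns out not to be standard, the code still returns a number; nothing in the measurement itself flags the error, which is exactly the point about sound models.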


I feel like I read about the concept, but probably not the name, from Neil Postman.
I worked my whole adult life in jobs that were concerned with customer satisfaction, and I know that after reading Postman, I was dubious about the way customer sat was measured. It's like they need a number, and that number is more important than the satisfaction itself.
I remember one of my branch managers telling me that without measurements, one could not effectively manage a business. I think this sort of thing is why MBAs generally make bad managers.
posted by MtDewd at 7:47 AM on April 28, 2018 [9 favorites]


Whenever I am in a discussion with someone who tries to argue solely by the numbers I try to get them to read Bobby Kennedy's speech about the GDP. Or mention a quote attributed to Einstein:
Not everything that can be counted counts, and not everything that counts can be counted.

Robert F. Kennedy, University of Kansas, March 18, 1968

Even if we act to erase material poverty, there is another greater task, it is to confront the poverty of satisfaction - purpose and dignity - that afflicts us all.

Too much and for too long, we seemed to have surrendered personal excellence and community values in the mere accumulation of material things. Our Gross National Product, now, is over $800 billion dollars a year, but that Gross National Product - if we judge the United States of America by that - that Gross National Product counts air pollution and cigarette advertising, and ambulances to clear our highways of carnage.

It counts special locks for our doors and the jails for the people who break them. It counts the destruction of the redwood and the loss of our natural wonder in chaotic sprawl.

It counts napalm and counts nuclear warheads and armored cars for the police to fight the riots in our cities. It counts Whitman's rifle and Speck's knife, and the television programs which glorify violence in order to sell toys to our children.

Yet the gross national product does not allow for the health of our children, the quality of their education or the joy of their play. It does not include the beauty of our poetry or the strength of our marriages, the intelligence of our public debate or the integrity of our public officials.

It measures neither our wit nor our courage, neither our wisdom nor our learning, neither our compassion nor our devotion to our country, it measures everything in short, except that which makes life worthwhile.

And it can tell us everything about America except why we are proud that we are Americans.


If this is true here at home, so it is true elsewhere in the world.

.
posted by pjsky at 7:51 AM on April 28, 2018 [56 favorites]


Outside of the natural sciences, quantitative metrics

woah woah, nope. in the natural sciences, "all models are wrong, some are useful". This is an undergraduate lesson.
posted by eustatic at 8:00 AM on April 28, 2018 [10 favorites]


Oddly I’m reminded of both Big O notation and Fibonacci estimation in Agile.

Eh, Big O measures something real. It's just that a lot of people forget that it's merely an upper bound.

Agile estimation is genuinely meaningless, though. It's literally just there so management can have pretty charts to look at.
posted by tobascodagama at 8:06 AM on April 28, 2018 [6 favorites]
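
On the "merely an upper bound" point: a minimal sketch, counting comparisons instead of timing anything. Insertion sort is O(n²) in the worst case, yet on already-sorted input it does only about n comparisons; the bound is true in both cases but describes the observed cost in only one of them.

```python
def insertion_sort_comparisons(items):
    """Insertion sort instrumented to count comparisons."""
    a = list(items)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1              # one comparison per inner-loop step
            if a[j - 1] <= a[j]:
                break                     # already in order: stop early
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return comparisons

n = 1000
print(insertion_sort_comparisons(range(n)))         # sorted input: n - 1 = 999
print(insertion_sort_comparisons(range(n, 0, -1)))  # reversed: n(n-1)/2 = 499500
```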


You can't even quantify both the velocity and position of a single subatomic particle at the same time. How are you going to quantify what's going on with the huge masses of them that make up worlds and societies, except in the crudest ways?

One thing that I think the fallacy applies to, though I couldn't find an article making the direct link, is policing. You can count how many times an officer felt threatened because a suspect had a weapon. You can count how many of those situations ended in the death of the suspect. But many of the factors which determine whether the use of fatal force in a given situation was justified aren't quantifiable.

The education link has a good summary of these kinds of factors for teachers:
The best teachers I know have a set of common characteristics:
  1. They are not only very knowledgeable about their subject but they are almost unreasonably passionate about it – something which is infectious for kids.
  2. They create healthy relationships with those students in a million subtle ways, which are not only unmeasurable but often invisible to those involved.
  3. They view teaching as an emancipatory enterprise which informs/guides everything they do. They see it as the most important job in the world and feel it’s a privilege to stand in a room with kids talking about their passion.
Are these things measurable in numbers and is it even appropriate to do so?
Beyond the question of measurability itself, there's the problem, when humans are measuring themselves, of Goodhart's law. McNamara faced it in its crudest form - managers under him throwing old parts into the river, units in Vietnam lying about body counts - though I wonder whether he realized how his measurements were distorting what he was measuring.
posted by clawsoon at 8:09 AM on April 28, 2018 [10 favorites]



As a dyed in the wool empiricist, I actually rather agree with the statement that that which cannot be measured does not exist.

But how much do you agree with that statement?
posted by TedW at 8:11 AM on April 28, 2018 [22 favorites]


Agile estimation is a rough, throw-it-at-the-wall number that can have general usefulness in short-term planning, but it becomes harmful when fed into Jira, where a management layer will start taking it seriously and making meaningless charts out of it that they then worship.

(Big O never really goes beyond developers and so is fairly harmless, though the way it's gone from casually discussed rule of thumb to A Super Important Thing that must be nailed down in detail during interviews might highlight some problems with how interviews are done.)
posted by Artw at 8:12 AM on April 28, 2018 [1 favorite]


Agile estimation is genuinely meaningless, though. It's literally just there so management can have pretty charts to look at.

Something I find myself saying all too often during sprint retro: Jira exists to serve us, not the other way around.
posted by eustacescrubb at 8:36 AM on April 28, 2018 [5 favorites]


Garbage in, garbage out.
posted by snuffleupagus at 8:49 AM on April 28, 2018 [4 favorites]


I just love this quote, though I'd forgotten where I first read it (here, it's in the medicine link):
Goodhart’s law (named after the British economist) states that once a variable is adopted as a policy target, it rapidly loses its ability to capture the phenomenon or characteristic that is supposedly being measured. Adoption of a new indicator ‘leads to changes in behaviour with gaming to maximise the score; perverse incentives, and unintended consequences.’ Mario Biagioli, professor of law and of science and technology at the University of California, Davis, cited this law in an analysis of how individual researchers and institutions ‘game’ bibliometric metrics, such as impact factors, citation indices and rankings.
I think I need to print it out and put it on my front door.
posted by mumimor at 8:53 AM on April 28, 2018 [32 favorites]
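
A toy simulation of the mechanism in that quote, under invented assumptions: a proxy score starts out tracking true quality, but once the score becomes a target and people pour quality-unrelated effort into it, selecting on the score largely stops selecting for quality.

```python
import random

random.seed(0)
N = 10_000

# True quality is what we care about; the score is what we can measure.
quality = [random.gauss(0, 1) for _ in range(N)]

def mean_quality_of_top_decile(score):
    top = sorted(range(N), key=lambda i: score[i], reverse=True)[: N // 10]
    return sum(quality[i] for i in top) / len(top)

# Before the score is a target: score = quality + measurement noise.
honest = [q + random.gauss(0, 0.5) for q in quality]

# After it becomes a target: everyone also games the score, and gaming
# effort is unrelated to quality (an illustrative assumption).
gamed = [s + 10 * random.random() for s in honest]

print(mean_quality_of_top_decile(honest))  # roughly 1.6: score still tracks quality
print(mean_quality_of_top_decile(gamed))   # roughly 0.6: selection is badly diluted
```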


As a dyed in the wool empiricist, I actually rather agree with the statement that that which cannot be measured does not exist.
As an empiricist whose dye perhaps didn't set completely, I'd make a less strong statement: that which cannot be measured cannot be proven wrong. Working in a regime where one cannot be proven wrong is at least as dangerous as measuring the wrong things. The idea that McNamara would have chosen policies that led to a better resolution of the US-Vietnam war if only he'd paid attention to qualitative feedback and his own instinct seems pretty hard to justify. As is the idea that a good outcome was ever possible. As mentioned above, if your model is bad, measurements don't help. But refusing to make measurements isn't any better.

To pick on the education example, there are plenty of reasonable complaints about education research practices. I'd agree that using evidence from sources other than controlled experiments is better than nothing. But, it's also true that the controlled experiments that have been done make it clear (at least in science and math education) that the intuition and qualitative feedback of both teachers and students is terrible at predicting the outcomes we claim to care about. Teachers, peer observers, and students are awful at deciding without specific controlled experiments which techniques improve students' problem solving abilities. Worse, qualitative estimates inevitably include biases related to race, gender, age, and appearance in ways that hurt everyone.

If it weren't for controlled tests that measure students' ability to solve the problems we claim we're teaching them to solve, we'd have no hope of improvement. Without metrics, we revert to the "it was good enough for my advisor" approach, which only rarely and coincidentally leads to the best outcomes. The world is full of students who loved their teachers and demonstrably didn't learn anything about the subject they were studying. That might actually be a fine outcome, but almost certainly isn't the one either the teachers or students would claim they wanted to see. If the thing you care about can be measured, refusing to try to measure it is a wasted opportunity.

The trick, of course, is to find observables that actually measure the things we care about. I'd happily agree that we're terrible at that, as a species.
posted by eotvos at 8:55 AM on April 28, 2018 [19 favorites]


See also: Jerry Muller, The Tyranny of Metrics

Muller makes a point that should give even the most dyed in the wool empiricist pause for thought: that what often gets chosen as actionable metrics is whatever is easiest to measure, regardless of what might actually matter to measure - whether that's strictly quantifiable or not.

The body count metric is a pretty good example of this. Much easier to count bodies than to do proper opinion polling, especially in wartime.
posted by flabdablet at 9:20 AM on April 28, 2018 [9 favorites]


But how much do you agree with that statement?

I mean, measured is probably a slightly wrong or at least misleading word - observed is more accurate. Qualitative measurements are measurements, too, and many things are too complex to be reduced to a single number.

I guess I am sort of ... agreeing from the opposite direction here? What you can observe is what is 'real' - the complex, messy reality that does not readily submit to simple ideas or numbers. Every measurement is messy and has some uncertainty. Every number you construct from those numbers has an uncertainty that you either guess at or ignore. Now if you remember this, and know what you're doing, you can use numbers to make models that will help make sense of it.

But high level things like "winning a war" and "general intelligence" and all these other things? Ultimately that's just struggling to make sense of an overwhelmingly complex and messy cosmos that you observe through human experience. They are not real. At best they are sometimes useful models that help you cope with things.

So if someone presents you with a number, claims it's a "measurement" of some high level thing? It's not. You need to figure out what they were actually observing to get that number. And what it actually means. If it even means anything.
posted by Zalzidrax at 9:23 AM on April 28, 2018 [3 favorites]
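
One concrete version of "every number you construct from measurements carries an uncertainty you either guess at or ignore" is first-order error propagation. A minimal sketch for a derived quantity, assuming independent errors; the mass and volume figures are invented.

```python
import math

def ratio_with_uncertainty(a, sigma_a, b, sigma_b):
    """Propagate independent 1-sigma errors through f = a / b.

    First-order rule: relative errors add in quadrature,
    (sigma_f / f)^2 = (sigma_a / a)^2 + (sigma_b / b)^2.
    """
    f = a / b
    relative_error = math.hypot(sigma_a / a, sigma_b / b)
    return f, f * relative_error

# Invented example: density from mass 12.4 +/- 0.2 g, volume 3.1 +/- 0.1 cm^3.
rho, sigma_rho = ratio_with_uncertainty(12.4, 0.2, 3.1, 0.1)
print(f"density = {rho:.2f} +/- {sigma_rho:.2f} g/cm^3")  # 4.00 +/- 0.14
```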


I worked my whole adult life in jobs that were concerned with customer satisfaction, and I know that after reading Postman, I was dubious about the way customer sat was measured. It's like they need a number, and that number is more important than the satisfaction itself.

Obligatory Wondermark
posted by flabdablet at 9:27 AM on April 28, 2018 [4 favorites]


The idea that McNamara would have chosen policies that lead to a better resolution of the US-Vietnam war if only he'd paid attention to qualitative feedback and his own instinct seems pretty hard to justify. As is the idea that a good outcome was ever possible. As mentioned above, if your model is bad, measurements don't help. But refusing to make measurements isn't any better.

McNamara proved this himself when he was confronted with data showing he was wrong and did not adapt to it. There wasn't even a need for a qual/quant distinction: his quant data people were telling him he was wrong, but he ignored them for years.
posted by srboisvert at 9:43 AM on April 28, 2018 [10 favorites]


Big O isn't even a measurement. I'd think if there's a problem there it's that programmers are so used to this abstract framing of performance from school and interviews that they forget that one can use empirical methods.
posted by atoxyl at 9:45 AM on April 28, 2018 [4 favorites]


I really liked the McNamara Fallacy applied to medicine article because it is highly relevant to what is going on in US healthcare right now. In general, we have committed to a system that leverages the power of the free market to drive down costs. When it was clear that this wasn’t working — because people don’t shop for health care the way they shop for cars and computers, they don’t compare gas mileage or processor speed — the government stepped in and created basic measurements to compare health care systems: blood pressure control, post operative infection rates, and about a hundred others. Then Medicare/CMS phased in differential payments depending on how well payees performed. Private insurance companies rapidly followed suit.

Physicians mostly went along with this because, while we are never going to get to perfect, we can clearly do better on these measures than we are and we need to take ownership of the improving quality of health care.

The problem is that it turns out health care is a far more complex system and far more difficult to change than these crude measures imply. Mental health, housing status, racial bias, language proficiency, religious beliefs, etc all influence the item being measured far more than which medication the doctor chooses to prescribe. To truly create measurements that take all of this into account would require computing power and a bureaucracy beyond anything conceivable at this time.

The other problem is that this system was largely conceived and is entirely administered by non-clinicians who are creating rewards and penalties based on these numbers without any implicit understanding of the things that influence how the system is performing which are not under the control of the system.

The results of this experiment are not certain yet, but it is far easier to game the system than to earn improved scores. Some providers just refuse to care for more challenging patients. It is nearly impossible for me to refer an obese person for an elective surgery because their complication rates (infection, readmission) are too high. In my safety net organization, there is a huge push to attract more “normal” people (ie employed, stably housed, healthy) to make our numbers better despite this being explicitly outside our mission.

Maybe that which cannot be measured isn’t important, but certainly some of the things that society has chosen not to measure are important and ultimately we still have to deal with them.

Editorial — we are never going to have truly accurate measurements that allow consumers to make accurate analytical decisions about healthcare. The extra layers of administration meant to achieve this only add cost without creating value, and this is why a market-based health insurance system will never work; in fact, it has never worked anywhere.
posted by Slarty Bartfast at 9:54 AM on April 28, 2018 [21 favorites]


It's not that quantitative measures are not valuable; they are, often and maybe even mostly. But you really need to be able to apply critical thinking in order to ask the right questions.
Anecdotally, I've saved notes from all of my 28 years of teaching, in the hope of improving my performance. I note the methods I use and the results, and then I follow up with my former students to see what they have learnt and how they put it to use. Unsurprisingly, I'm a better teacher now than I was in the beginning in the sense that more students have acquired skills and knowledge that are useful for them, though surprisingly, the same relative number of students are successful in life.
However, there is one thing I consistently fail at teaching well, and that is critical thinking. I know it can be taught, since I learnt it. But the vast majority of my students just apply random negativity when they should be examining their sources and improving their research questions. Or they just happily buy every fake story they can find on the internet. Sorry.
posted by mumimor at 10:15 AM on April 28, 2018 [7 favorites]


My lack of a tape measure has led me to question the relevance of this "distance" you keep bringing up.
posted by ardgedee at 10:47 AM on April 28, 2018 [3 favorites]


... if it cannot be measured, it does not exist.

So, there are no happy individuals.
posted by Kirth Gerson at 11:06 AM on April 28, 2018 [2 favorites]


the government stepped in and created basic measurements to compare health care systems: blood pressure control, post operative infection rates, and about a hundred others.

Spent the morning going down the rabbit hole of quality measurement in health care. There are now 2,500 individual quality measures that are reported to CMS, and the amount of money spent and number of people employed to manage this system is staggering: $20 billion per year, or about $40,000 per physician per year, is spent solely on reporting their quality measures. This happened over an eight-year period since the ACA with *no* evidence that it has moved the quality or cost needle in the right direction.

It also makes me realize this is now an entrenched bureaucracy and when people realize the effort spent measuring health care quality isn’t paying off, the answer won’t be to do away with the system, it will be “we need to spend more money to get ‘better’ measures.”
posted by Slarty Bartfast at 11:28 AM on April 28, 2018 [12 favorites]


Complexity is probably a big reason many things are regarded as unmeasurable. Shouldn't computers be helping with this?
posted by amtho at 11:32 AM on April 28, 2018


Slarty Bartfast: The other problem is that this system was largely conceived and is entirely administered by non-clinicians who are creating rewards and penalties based on these numbers without any implicit understanding of the things that influence how the system is performing which are not under the control of the system.

This strikes me as a crucial insight. If you're using measurements to get better results from a system which you don't understand, you will mostly create perverse incentives and undesirable side effects.
posted by clawsoon at 11:39 AM on April 28, 2018 [10 favorites]


no relation
posted by MCMikeNamara at 11:47 AM on April 28, 2018 [11 favorites]


(well, I'm sure there is one somehow, but it's small enough it can't be measured so...)
posted by MCMikeNamara at 11:48 AM on April 28, 2018 [8 favorites]


atoxyl: Big O isn't even a measurement.

Nonsense. It's a measurement, it's just not measuring a concrete value; it's measuring a *ratio* -- particularly of increases in processing time as a function of the number of elements. That's why it is at least understandable (though certainly still regrettable) that it has gained so much prominence in interviews; understanding the efficiency dynamic of an algorithm is still an important part of picking the best algorithm for the task at hand, but such interview questions are often poor proxies for truly assessing those skills, and are only used when interviewers don't know of better "easy to ask and with clear right-or-wrong response" questions for the same skills.
posted by mystyk at 12:29 PM on April 28, 2018


This strikes me as a crucial insight. If you're using measurements to get better results from a system which you don't understand, you will mostly create perverse incentives and undesirable side effects.

Like for instance when the CEO of an automobile company tries to manage a land war in Asia (the first of the Classic Blunders).
posted by Slarty Bartfast at 12:54 PM on April 28, 2018 [2 favorites]


Oh god. Applying this to assessment is really disheartening. I’ve seen a huge number of projects created with no clear goal, then people cast around for something they can use to show success and pick things that are simple to measure rather than things that show what the project was supposed to do. Then the “work toward the metric” problem kicks in, and we find ourselves straining to improve numbers that aren’t even showing what the project is supposed to do.
posted by GenjiandProust at 12:58 PM on April 28, 2018 [1 favorite]


Nonsense. It's a measurement, it's just not measuring a concrete value; it's measuring a *ratio*

I guess to me a "measurement" is derived from practical observation - usually via an instrument designed for the purpose. So the distinction I'm making is that while one could get a Big O out of that, one usually gets it by on-paper analysis of an algorithm. Due to the complexity of computer systems in practice, sometimes it would be preferable to perform a practical measurement with a timer than to argle-bargle about on-paper complexity.
posted by atoxyl at 1:21 PM on April 28, 2018 [2 favorites]
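
A minimal sketch of the timer-over-argle-bargle approach: measure the same operation at a few sizes and read the empirical growth exponent off the log-log slope. Standard library only; the workload (a worst-case list membership scan) is just a stand-in, and on a real machine the estimate will be noisy.

```python
import math
import timeit

def make_workload(n):
    """Build the input once, outside the timed region; return the op to time."""
    data = list(range(n))
    return lambda: -1 in data   # -1 is never present, so the whole list is scanned

sizes = [10_000, 40_000, 160_000]
times = [timeit.timeit(make_workload(n), number=50) for n in sizes]

# If t ~ c * n^k, then k is the slope of log(t) against log(n).
k = math.log(times[-1] / times[0]) / math.log(sizes[-1] / sizes[0])
print(f"estimated exponent: {k:.2f}  (expect roughly 1 for a linear scan)")
```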


amtho: Complexity is probably a big reason many things are regarded as unmeasurable. Shouldn't computers be helping with this?

One thing that computers have helped us realize is how quickly even simple equations can produce chaos.
posted by clawsoon at 1:43 PM on April 28, 2018 [2 favorites]
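
The classic illustration is the logistic map, x → r·x·(1 − x). At r = 4 it is chaotic: two starting values differing in the ninth decimal place diverge completely within a few dozen iterations, since the gap roughly doubles each step. A minimal sketch:

```python
def logistic_orbit(x, r=4.0, steps=50):
    """Iterate the logistic map x -> r * x * (1 - x), returning the whole orbit."""
    orbit = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

a = logistic_orbit(0.300000000)  # two initial conditions that differ
b = logistic_orbit(0.300000001)  # by only 1e-9
for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (diff {abs(a[n] - b[n]):.1e})")
```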


So, there are no happy individuals.

Nope, but that’s because Happiness exists only as a process not a state. ;)
posted by Celsius1414 at 1:51 PM on April 28, 2018


Slarty Bartfast: Like for instance when the CEO of an automobile company tries to manage a land war in Asia (the first of the Classic Blunders).

His first government job - before he started at Ford - was helping the U.S. government firebomb Japan in WWII.
posted by clawsoon at 2:06 PM on April 28, 2018 [2 favorites]


Bill Gates commenting recently on the Bill & Melinda Gates Foundation’s 20-year effort to improve K-12 education in the U.S.:
“We haven’t seen a big difference even after 20 years, but we’ll keep going,” Gates said. ...

[E]ducation is complicated, in part because it is “essentially a social construct,” Gates said. Providing equal access is challenging because you have to create the right culture where kids come to class, their parents care about their grades, teachers are engaged and interactive, and so forth.
posted by clawsoon at 2:16 PM on April 28, 2018 [2 favorites]


This reminded me of my old game theory prof; he had a theory that McNamara made the fundamental mistake of thinking he was playing a game against nature when he was actually playing against an opponent.
posted by sapere aude at 2:34 PM on April 28, 2018 [6 favorites]
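
That distinction can be made concrete with a toy payoff matrix (all numbers invented). Against "nature", which doesn't react, you rank strategies by expected payoff; against an opponent who adapts, the safer ranking is by each strategy's guaranteed minimum (maximin), and the two rankings can disagree.

```python
# Rows: our strategies; columns: states of nature / the opponent's replies.
# Invented payoffs for illustration only.
payoff = {
    "attrition": [4, 0],  # great if the world behaves as modeled, awful if not
    "negotiate": [2, 1],  # modest either way
}

# Game against nature: states occur with fixed probabilities (assume 50/50).
expected = {s: 0.5 * p[0] + 0.5 * p[1] for s, p in payoff.items()}
print("vs nature:  ", max(expected, key=expected.get), expected)

# Game against an opponent who picks the column worst for us.
guaranteed = {s: min(p) for s, p in payoff.items()}
print("vs opponent:", max(guaranteed, key=guaranteed.get), guaranteed)
```

Here "attrition" wins on expected value (2.0 vs 1.5) but "negotiate" wins on the guaranteed minimum (1 vs 0): modeling an adaptive enemy as indifferent nature picks the wrong strategy.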


Corollary from library land - if we can't collect statistics on it, it's not worth doing.
posted by lagomorphius at 4:26 PM on April 28, 2018 [1 favorite]


lagomorphius: Corollary from library land - if we can't collect statistics on it, it's not worth doing.

"Before you leave Storytime today, kids, please fill out this 37-item questionnaire."
posted by clawsoon at 5:28 PM on April 28, 2018 [2 favorites]


Our department was pushed very strongly to come up with two metrics to gauge our progress. Besides the obvious question of whether these metrics were supposed to probe our effectiveness or to gauge our progress—i.e., whether they were exploratory or normative—the deeper issue brought up by our discussions about possible metrics was that we looked for things that were easy to measure. In devising these metrics, we were offered no support from IT or the CMIO (chief medical information officer) in terms of finding ways to measure hard-to-measure things. We get no support in measuring outcomes or efficacy that might require data from multiple hospitals in the community. So we offered up some metrics that are easy to measure and that are easy to achieve good numbers on. It was a meaningless and time-wasting exercise.
posted by adoarns at 6:00 PM on April 28, 2018 [4 favorites]


It was a meaningless and time-wasting exercise.

My first exposure to this kind of "scientific" management was as one of five software developers employed by a small but expanding firm, not long after ISO 9001 quality certification first became a thing.

We had to attend a training session to indoctrinate us in the wonders of measurable software quality. Having not long before read Zen And The Art Of Motorcycle Maintenance, I asked the trainer what he meant by quality and how he proposed to measure it. His response was that quality, for ISO 9001 purposes, was "lack of variation", making it instantly clear that he had no clue at all about software.

We eventually got our ISO 9001 certification, mainly due to the efforts of the guy we brought on board to "document our processes". The only document of his that any of us ever actually looked at was the first one he produced, a "sample" process manual consisting of a binder full of Ikea-assembly-style diagrams explaining how to turn off all the lighting before leaving the building.

The entire certification process was 100% bullshit from soup to nuts, and this has also been the experience of just about everybody else I've talked to who has also gone through it. ISO 9001 certification looks very impressive on paper and remains the darling of the C-suite, but it seems to me that even an easily gameable reputation management system like eBay's would work better.
posted by flabdablet at 9:09 PM on April 28, 2018 [4 favorites]


In the past I've reminded coworkers that you can retain your ISO 9001 certification as a company shipping produce infected with E. coli, as long as you've properly documented the infection.
posted by ardgedee at 2:55 AM on April 29, 2018 [7 favorites]


I haven't read much in the field, but I make charts in the gig economy and have a hobby of studying the NHL draft, and I wanted to share what I've found from studying the draft.

The NHL draft is a pretty good sandbox to play in, because you have predictions each year, for at least 20 years now (from Central Scouting), and predictions from each team (by who they choose), and then results from each prediction not very long afterward. If I'm understanding your lingo, and please correct me if I'm wrong, most everyone takes a qualitative approach. They believe they can see what a good hockey player looks like at 17 by watching him play. In fact, if you study their results, they can't, particularly outside of the very top.

The first thing that qualitative thinkers get wrong is strength of opponent. A 17-year-old playing in high school is generally overvalued: where he is drafted produces fewer results than that draft position should. High school players look better than they are because they are playing weaker competition. Similarly, players on good teams are overvalued, because no one looks good losing. Black players? woohoo qualitative thinkers. Players born Dec 31 as opposed to Jan 1? It's complex for this forum, but the number of wasted picks outside the top 50 on January-born players is really high. Players with names that roll off the tongue are favored, as is their hotness, but try making the hotness argument to a qualitative.

When I first started this, maybe 5 years ago, I thought I'd be taking the qualitative list and fixing it, but it turns out their list is so useless, I only use it to arrange the picks in my draft contest (if you know the qualitatives think this prospect should be drafted 200th, you don't need to draft him much earlier than that).

My approach involves scoring as related to the team and the success of the team, then fixing for exact age and size. My approach works, I think, we'll know better in 3 or 4 years. But I'm not changing any minds because the qualitatives refuse to test their process.

As to the links, I think Yankelovich misrepresents McNamara because I believe McNamara was a qualitative. He used data poorly to argue for a decision that was already made, likely already made by Johnson. Someone upthread talked about how McNamara's data people tried to show him his error, which actually works with quantitative people, but fails with qualitatives. For me, the model needs to lead to the best answer, which isn't really how data is used in many cases.
posted by rakish_yet_centered at 7:17 AM on April 29, 2018 [3 favorites]


Relevant books (aside from McNamara's own, especially Argument Without End):

The War Managers (Kinnard)
The Perfect War: Technowar in Vietnam (Gibson) -- author interview (1987)
posted by snuffleupagus at 9:33 AM on April 29, 2018


I guess to me a "measurement" is derived from practical observation - usually via an instrument designed for the purpose. So the distinction I'm making is that while one could get a Big O out of that, one usually gets it by on-paper analysis of an algorithm.

It's been a long time since school, but I thought the point of it is that it's a theoretical metric of the time cost of an algorithm, and has nothing to do with the details of the physical computing platform that is used to actually run a solution. So measuring would seem to miss the point.
posted by thelonius at 11:00 AM on April 29, 2018 [1 favorite]


I think this is the perfect thread for the post-facts era. Obviously society is in the process of recognizing that assembling data or quantifying results is useless, and soon we will go back to arguing issues based on whatever internal whims we have, or which ancient philosopher appeals to us.

You can say that spending millions to feed and shelter the homeless leads to good results, but the mayor can say that spending that money on poetry and public art has results that are just as good. Or at least they make HIM feel good. And since we can't trust your quantitative statistics and figures, that don't measure the qualitative happiness of the city, why bother with them?
posted by happyroach at 12:54 PM on April 29, 2018


rakish_yet_centered: I haven't read much in the field, but I make charts in the gig economy and have a hobby of studying the NHL draft, and I wanted to share what I've found from studying the draft.

I've long been intrigued by the case of Wayne Gretzky and Mark Messier. Gretzky dominated in all statistical categories, of course, but after they stopped playing together Messier went on to win two more Stanley Cups and Gretzky zero. There's been the suggestion that Messier brought an intangible "leadership" quality to the dressing room that couldn't be measured in his personal stats that made his whole team better.
posted by clawsoon at 4:08 PM on April 29, 2018 [2 favorites]


Is this a trick question by referencing a not so good metric similar to body counts in Vietnam? The Pocket Rocket won 11 cups, does that make him 11/4 better than Gretzky?

Yeah, I think hockey is played between the ears. I think one of the failures of people that do analytics in hockey is not recognizing that. One could argue, and I have a suspicion that this is the question you are asking, does the quantitative draft analyst miss Mark Messier and only sees Wayne Gretzky? My guess is that it's less true than you would think. Messier was chosen in the 3rd round, so it's not like the qualitatives saw his inner soul at age 17.

My view is that there's a division of labor between what the GM does (talent acquisition) and what the coach does (motivation and team strategy). What the GM is trying to do is find a group that can win; a lot of people believe that involves players the size of Messier so they are not pushed around like the Nashville defense was in Game 1. For me it's the coach's job to connect to the Messiers of the world to keep them emotionally connected.

Hockey is interesting for me because it's a little complex. Different analytical styles work for the person doing that analysis. The best player in hockey didn't make the playoffs this year, and each group will tell you a different reason why. I think it's a reasonable argument that they have a player that scores (somewhat) like Gretzky but think they have someone that leads like Messier. I also think it's a reasonable argument that that McJesus-style player has less influence on wins and losses than you would think from watching for 5 minutes.

Most people talking about hockey take a qualitative approach; those taking a quantitative approach are often using bad metrics, because what we are talking about is difficult. Since each of those groups sincerely believes they are correct, it might be true that it is me that is full of shit, but I don't think so.
posted by rakish_yet_centered at 6:39 PM on April 29, 2018 [2 favorites]


Metafilter: It might be true that it is me that is full of shit, but I don't think so.
posted by skoosh at 7:13 AM on April 30, 2018 [1 favorite]


Obviously society is in the process of recognizing that assembling data or quantifying results is useless, and soon we will go back to arguing issues based on whatever internal whims we have, or which ancient philosopher appeals to us.

The point at hand is that much of the data currently assembled, and many of the results currently quantified, are indeed useless and that the process modifications and gaming provoked by the requirement to perform those measurements can in many cases be actively harmful.

Nobody with more than two brain cells to rub together is suggesting that assembling data and quantifying results are inherently useless pursuits in and of themselves. But anybody who has spent more time in the real world than in the rarefied atmosphere of bean counting understands that choosing what to measure is an undertaking that needs to be approached with care and skill and that all too often it's approached with neither.

One example is the craze for outsourcing everything that began in the 80s and continues apace today. The standard argument for outsourcing is that if an outfit has to pay fewer employees it can make huge savings in its wages budget - which is all fine and dandy as far as it goes, but anybody who has spent any time working in an organization that's gone through this process will have stories about people being sacked and then brought back as independent consultants, usually with a consulting fee representing a hefty premium on their previous salary. If the chosen metric had been total expenditure on services required, as opposed to total expenditure on wages, this kind of idiocy would not occur.

The loss of corporate memory inherent in mass replacement of dedicated employees with floating consultants is much more difficult to quantify and therefore measure than expenditure on wages, but in many cases has been shown to cause inefficiency-mediated costs that vastly exceed even the usually-fake "savings" from outsourcing. ISO 9001 attempts to fix this particular issue by freezing corporate memory in the form of documentation, but that can only go so far. Exactly how far is difficult and often expensive to measure.

Running organizations well is always going to involve wisdom and judgement as well as knowledge and good information; reflexively deriding these as "internal whims" simply displays a lack of both.
posted by flabdablet at 7:53 AM on April 30, 2018 [3 favorites]


rakish_yet_centered: Is this a trick question by referencing a not so good metric similar to body counts in Vietnam? The Pocket Rocket won 11 cups, does that make him 11/4 better than Gretzky?

You're correct to criticize it as a very poor metric with a tiny sample size. The only thing that makes it interesting to me in this case is that there's a tiny bit of control involved, given that they were both on the same team and then both went to different teams.

Part of it was that I grew up an hour or so from Edmonton, and I remember the sky-is-falling, how-could-Pocklington-possibly narrative when Gretzky was traded away. And then the Oilers won another cup under Messier... and then Messier went to New York and won yet another cup.

...which is still not a big enough sample size to get worked up about.
posted by clawsoon at 11:19 AM on April 30, 2018


ISO 9001 certification looks very impressive on paper and remains the darling of the C-suite, but it seems to me that even an easily gameable reputation management system like eBay's would work better.

I got involved in the same dog-and-pony show as well, and soon realized that it's not about quality at all, but consistency. The Mafia could get certified, provided all their procedures for collecting protection money were written up in ring binders.
posted by 43rdAnd9th at 8:19 AM on May 1, 2018 [2 favorites]


That's probably true. The certification industry uses the word quality to mean something different from what the general public uses it to mean. They don't mean high-quality or low-quality; those are irrelevant to them. They mean documented and precisely repeatable. An ISO 9001-certified company can produce really crappy products.
posted by Kirth Gerson at 1:48 PM on May 1, 2018


Quality means the kind of consistent crappiness that results in the crappy parts all fitting together so that they can be assembled into the crappy product, as opposed to the inconsistent crappiness a group of 19th-century craftsmen working on their own instead of on an assembly line might produce: the occasional high-quality product, but the parts aren't interchangeable. Quality is shitness for purpose.
posted by XMLicious at 2:22 PM on May 1, 2018 [2 favorites]



