Evidence-based software development
December 18, 2013 10:40 PM

Greg Wilson talks about What We Actually Know About Software Development, and Why We Believe It's True (slides for one iteration of this talk)
posted by a snickering nuthatch (51 comments total) 128 users marked this as a favorite
 
And, of course, he mentions his own book (then in-progress), Making Software: What Really Works, and Why We Believe It.
posted by a snickering nuthatch at 10:42 PM on December 18, 2013 [8 favorites]


Nice tag. :-)

(and nice post)
posted by Tell Me No Lies at 11:02 PM on December 18, 2013


oOo. This is right up the alley of info I need for a current project. Neat.
posted by daq at 11:14 PM on December 18, 2013


Oohh, Jpfed has made an FPP just for me!
posted by Harald74 at 11:20 PM on December 18, 2013


10 minutes in, when does the software part start?
posted by saber_taylor at 11:29 PM on December 18, 2013


(Right about then.)
posted by saber_taylor at 11:39 PM on December 18, 2013 [4 favorites]


If you don't care so much about software engineering, his appeal for rational, evidence-based thinking in the conclusion of his presentation is well worth watching from about 48:15 to 54:30.
posted by peeedro at 12:25 AM on December 19, 2013


I know IEEE stands for Institute of Electrical and Electronics Engineers, but the domain http://ieeexplore.ieee.org seemed like some sort of Internet Explorer joke for a second.
posted by EndsOfInvention at 3:33 AM on December 19, 2013 [6 favorites]


It's unfortunate that he claims that smoking "unequivocally" causes cancer based on observational data, and then talks about double-blind experiments, which produce interventional data. Observational data alone is inconclusive without additional causal assumptions.

The correct experiment would be to take a random sample of people and force half of them to smoke (ideally somehow unbeknownst to them), and then see who develops cancer. Obviously, such experiments are immoral.

The existence of a causal link between smoking and cancer was disputed for decades by the tobacco companies, and is discussed in Pearl's famous book "Causality", where he resolves the dilemma. His work is now the gold standard for epidemiological research where you cannot do a double-blind experiment but still want to discover causal relationships.

Many examples in the talk are infected with the same sloppy error. For example, if you're trying to conclude that language choice or development cycle affects development time, then the correct experiment is to take multiple groups and assign each one a particular language or development cycle. The observational data he considers would require additional assumptions to support causal claims.
posted by esprit de l'escalier at 3:36 AM on December 19, 2013 [18 favorites]
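
A minimal sketch of that observational-versus-interventional distinction, using an invented confounder (the scenario, variable names, and effect sizes below are made up for illustration; they are not from the talk or from Pearl):

# Toy simulation: a hidden trait ("risk tolerance") drives both smoking and
# an unrelated bad outcome, so the observational association is strong even
# though smoking has no direct effect here. All numbers are invented.
import random

random.seed(0)
N = 100_000

def observe():
    # Observational world: people choose whether to smoke.
    risk_tolerance = random.random()                              # hidden confounder
    smokes = random.random() < risk_tolerance                     # confounder -> "treatment"
    bad_outcome = random.random() < 0.1 + 0.3 * risk_tolerance    # confounder -> outcome
    return smokes, bad_outcome

def intervene():
    # Interventional world: a coin flip assigns smoking, severing the
    # confounder -> treatment link.
    risk_tolerance = random.random()
    smokes = random.random() < 0.5
    bad_outcome = random.random() < 0.1 + 0.3 * risk_tolerance
    return smokes, bad_outcome

def risk_difference(samples):
    by_group = {True: [], False: []}
    for smokes, bad in samples:
        by_group[smokes].append(bad)
    return (sum(by_group[True]) / len(by_group[True])
            - sum(by_group[False]) / len(by_group[False]))

print("observational risk difference:", round(risk_difference([observe() for _ in range(N)]), 3))
print("randomized    risk difference:", round(risk_difference([intervene() for _ in range(N)]), 3))

In the observational run the hidden trait makes smokers look much worse off even though smoking does nothing in this toy world; randomizing the treatment removes that bias, which is exactly the step you can't take ethically with real smokers.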


Seriously, it's pretty scary how little people know about how to create software.
posted by save alive nothing that breatheth at 4:17 AM on December 19, 2013 [3 favorites]


The IEEE is named for the sound you make whenever you have to deal with them.
posted by mhoye at 5:08 AM on December 19, 2013 [11 favorites]


I do however like the idea of doing real experiments to establish good software practice. (Thanks for the post, jpfed.) I wonder what a good experiment would be to understand the value of code reviews and language choice.
posted by esprit de l'escalier at 5:23 AM on December 19, 2013


Very good talk!  
"Productivity and reliability depends on length of program text independent of the language."  
"Hour for hour the most effective way to get bugs out of code is to sit and read the code."
posted by jeffburdges at 5:39 AM on December 19, 2013 [1 favorite]


I wonder what a good experiment would be to understand the value of code reviews and language choice.

You should probably read the text and dig into the footnotes, then. There's been a lot of work done on the empirical analysis of the software development process, but the barriers to entry there are the IEEE's charging $20/paper for non-members and the fact that their preferred publication system is a shambling baroque mess of zombie 80's-era technology.
posted by mhoye at 5:46 AM on December 19, 2013 [2 favorites]


His general point - that many apparently inviolable principles of software development are built on sand - is interesting to computer people (and was surprising to me, as somebody who thinks he knows this stuff). The wider set of examples and principles - why this is so, how it should inform research, and why that research is important - deserves a wider audience. I look forward to his book.
posted by rongorongo at 5:54 AM on December 19, 2013


I'm glad the practice of backing assertions with evidence is gaining some ground.

Thanks for posting. Great talk!
posted by JoeXIII007 at 6:26 AM on December 19, 2013


The book came out in 2010, and I thoroughly recommend it.
posted by mdoar at 6:56 AM on December 19, 2013


Great post!

an aside: anyone know why trying to jump ahead in the video resets it back to 0:00?
posted by slater at 7:31 AM on December 19, 2013


"Hour for hour the most effective way to get bugs out of code is to sit and read the code."

But hours spent reading code are long hours.
posted by Segundus at 7:38 AM on December 19, 2013 [1 favorite]


anyone know why trying to jump ahead in the video resets it back to 0:00?

Vimeo has done that to me often enough that now I don't bother arguing with it. I go make a cup of coffee and just wait instead.
posted by flabdablet at 7:40 AM on December 19, 2013


I found it mildly amusing that his whole thesis is in favor of restricting claims about the software development process to those verified by empirical data, and he bemoans that most of the literature is full of things simply asserted based on gut instinct - and then he asserts that he could take 10% of the R&D cost of a single drug and construct studies that would lead to a 5% improvement in software efficiency, based on pretty much nothing but his naked assertion that it is so. Despite that brief foray into the same trap he cautioned against, it was an interesting presentation, and I think he is generally quite right about the need for better studies.

A funny thing about computer science is that even in those rare instances where there are well-studied truths that have been conclusively proven to be true, a great many shops fail to use these best practices. I just worked with an organization last week that did not even use source code version control. A huge percentage of shops do not conduct peer review, and even more pay lip service to the idea without really doing it in a meaningful way. Despite the widespread availability of automated test tools for integration and regression testing, I doubt that 30% of all dev shops use them.

I can think of very few industries that have been more susceptible to buying into ridiculously inflated claims from snake oil salesmen of one stripe or another. I don't think I can even count the number of times something has been pitched to me as a revolution in software engineering that will solve all my problems and save millions, if not billions. I've seen incredibly sophisticated people swallow outrageous claims about the ability of, say, an ERP to eliminate all their headaches and transform the complex into the simple. Depending on how long you have been in the business, you can substitute a dozen different nouns for ERP. What was the first one, I wonder? For me it was CASE tools. I suppose if you go back far enough, people probably made those kinds of claims for COBOL.
posted by Lame_username at 7:50 AM on December 19, 2013 [6 favorites]


Whoa, that's a flash from the past. I was one of the organizers for this conference.

I've always wanted to look up the citations, but never got around to it. Thanks, Jpfed!

>A funny thing about computer science is that even in those rare instances where there are well-studied truths that have been conclusively proven to be true

Listen, I think we're mostly on the same side here, but talking about "best practices" always gets me going. I would cringe in horror and back away from any shop that doesn't use source control, but: do you have a list of "well-studied and conclusively proven" best practices ;)?

I think we lack the language and, on average, the training to even discern these things. It's slippery!

But if it's any comfort,

>I can think of very few industries that have been more susceptible

I have a "pretty good impression" that cutting corners and cargo-culting are extremely common across all aspects of human behaviour :).
posted by pmv at 8:04 AM on December 19, 2013 [2 favorites]


I deal with industry folks a lot (in a different context)---their favourite buzzword right now is "learnings". One doesn't draw conclusions from studies, one takes "learnings". It's such a nice fuzzy word, so freeing of the constraining rigour of study design or statistics.

The best way to communicate "learnings", of course, is a workshop. One gathers a bunch of folks around a table in a fancy hotel and has them show some PowerPoints and tell of their "learnings" from various "experiences". Then a pricey consultant can write it up, put a fancy cover on it, and put it in a nice ring-bound report.

'Twas ever thus: new glitter on old nonsense.
posted by bonehead at 8:16 AM on December 19, 2013


their favourite buzzword right now is "learnings"

/lights match, slowly walks away as lawn explodes in huge fireball
posted by RobotVoodooPower at 8:19 AM on December 19, 2013 [5 favorites]


I'm glad the practice of backing assertions with evidence is gaining some ground.

Mine are usually backed with null pointers and comments saying things like:

// this should be impossible.
posted by tylerkaraszewski at 8:20 AM on December 19, 2013 [4 favorites]


Learnings give me feels.
posted by COBRA! at 8:22 AM on December 19, 2013 [2 favorites]


(Confession: I didn't watch the video - it's over an hour long. Why don't people provide transcripts for such things? I can read probably 4x faster than he can talk...)

But how are we supposed to test these methodologies?

Sure, we know something about how to write small "toy" programs, and some of that has been tested. But if you've been writing programs for a while, one of the things you very quickly learn is that writing large systems is very, very different from writing tiny ones. Another is that the quality of the team you have makes a huge difference in the final results.

So we simply don't have the money to waste setting up duplicate teams to write the same software in different ways - and even if we did, it would be extremely difficult to get two evenly matched teams.

I don't see this changing any time in my professional lifetime, either...
posted by lupus_yonderboy at 8:33 AM on December 19, 2013 [2 favorites]
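
To put a rough shape on why that's so hard, here's a hypothetical back-of-the-envelope simulation (every number in it - the 10% speedup, the 30% team-to-team variation, the 100-day project - is invented for illustration, not taken from the talk or from any study):

# Toy power sketch: how often does a study with n teams per arm "detect" a
# methodology that speeds teams up by 10%, when team skill varies by +/-30%?
import random, statistics

random.seed(1)

def project_duration(uses_new_method):
    # Team-to-team variation dwarfs the methodology effect in this toy model.
    team_skill = random.gauss(1.0, 0.30)
    speedup = 0.90 if uses_new_method else 1.00     # assumed 10% improvement
    return max(0.1, team_skill) * speedup * 100     # nominal 100-day project

def detection_rate(teams_per_arm, trials=2000):
    detected = 0
    for _ in range(trials):
        old = [project_duration(False) for _ in range(teams_per_arm)]
        new = [project_duration(True) for _ in range(teams_per_arm)]
        diff = statistics.mean(old) - statistics.mean(new)
        stderr = (statistics.pvariance(old) / teams_per_arm +
                  statistics.pvariance(new) / teams_per_arm) ** 0.5
        detected += diff > 2 * stderr               # crude significance criterion
    return detected / trials

for n in (2, 5, 20, 80):
    print(f"{n:3d} teams per arm -> 'significant' result in {detection_rate(n):.0%} of simulated studies")

With only a few teams per arm the simulated studies mostly find nothing, even though the effect is real by construction, which is one reason case studies and observational data end up carrying so much of the load in this field.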


Oh, and "learnings" - WTF is wrong with "lessons", which has the distinct advantage of being an actual English word?

No one I've ever worked with has said "learnings" to me, which is lucky for them...
posted by lupus_yonderboy at 8:34 AM on December 19, 2013


I've seen incredibly sophisticated people swallow outrageous claims about the ability of, say, an ERP to eliminate all their headaches and transform the complex into the simple. Depending on how long you have been in the business, you can substitute a dozen different nouns for ERP. What was the first one, I wonder?

I'm sure there were companies who got sold on punchcard systems for managing piecework payroll back in the 20s and 30s and got burned by it. Those machines — and early computers — were staggeringly expensive, at least as bad as a modern ERP system is now.

There's an interesting pendulum that goes back and forth on the sales side, between customized systems and COTS / OOTB systems. I think there have been at least 3 or 4 swings back and forth between "here, buy this thing and use it," with the expectation that you're going to change your business processes to conform to the thing you're buying, and "here's a thing that you can use to build something that will work for you", where the expectation is that you're also going to hire a bunch of full-time people to customize it for your processes. Punchcard sorters and other primitive computers (comptometers, adding machines) were basically the former, since they're not really 'programmable'. And then at the other end of the spectrum, early business computers were basically empty boxes that you had to write custom software for, otherwise they were totally useless, and the major investment was really in the custom software, written specifically for the business and taking into account actual day-to-day processes.

Right now we're sort of at the "OOTB" end, although it's software that's being sold rather than hardware. But if you go out and buy SAP or Oracle ERP, you can customize it by adding components and stuff, but there's a basic expectation that you're buying the product and are going to use it in a specific way, and if your business doesn't work in a way that permits that, you should probably change your processes (to conform to "best practices" which those products obviously represent).

In fact, it's pretty common for companies to go out and buy ERP or other "business platforms" as part of an effort to gut-renovate all of their internal processes. The software is really just a tool for ham-fistedly accomplishing what's really a managerial change. My advice, if you work some place that is doing this: stop whatever you're doing and polish your resume; your employer is not in good shape. It's cargo-cult management at best. IMO, it's typically because management doesn't have any real ideas of their own, or doesn't want to take any risks doing anything interesting, so they take the easy way out. Implementing SAP is today's version of buying IBM.

With that kind of cargo-cultism at the top levels of management, it's not hard to see how the same sort of thinking can trickle down into development activities. I mean, if you've just been told to tear out 25 years of custom code, built specifically for your business and its unique set of challenges and market position and competitive advantages, and replace it with some shrinkwrapped box from some company that specializes not in your industry but in the production of shrinkwrapped-box software, with no real evidence that it's going to work out well, what kind of message does that send? Clearly, ideology, not effectiveness or evidence, is the name of the game. So why not break out your own pet ideology and stuff it down the throats of your team?

I'd like to see the pendulum swing back the other way again, towards custom solutions, but with an evidence basis. But I'd also like to see "evidence-based management". Because the time and energy wasted by software developers pursuing pet ideologies and trendy methodologies is a rounding error compared to the waste created by MBAs doing the same thing.
posted by Kadin2048 at 8:37 AM on December 19, 2013 [4 favorites]


Hmm. Peopleware and The Mythical Man-Month are based on research. There's also the entire oeuvre of Capers Jones: The Economics of Software Quality, Software Engineering Best Practices (case studies), Applied Software Measurement, and so on. Are those insufficiently academic to be interesting?
posted by sonic meat machine at 8:39 AM on December 19, 2013 [3 favorites]


> Peopleware and The Mythical Man-Month are based on research.

I'm very familiar with TMMM, at least. It isn't based on "research" in the way you usually think of scientific research progressing - it's based on surveys of a lot of successful and failed projects; "case studies", in other words.

As is pointed out above, it's very hard indeed to get solid conclusions from case studies, because you are always comparing apples and oranges. It's certainly clear that a strong team will probably succeed with almost any methodology they choose to use, and a weak team will probably do badly no matter what good methodological decisions they take before they start.
posted by lupus_yonderboy at 8:49 AM on December 19, 2013


sonic meat machine: "The Mythical Man-Month [is] based on research."

Fred Brooks's Mythical Man-Month was pretty light on research conducted outside the cube farms of IBM. Obviously it's damn near impossible to gather proper data, but even then, it was pretty light on data and pretty heavy on theories.

While books using academic citations are nice, what I'd really like to see is a meta-analysis of several studies purporting to analyze the same thing.
posted by pwnguin at 8:50 AM on December 19, 2013


By Grabthar's Hammer....what a learnings.
posted by Jon Mitchell at 8:50 AM on December 19, 2013 [2 favorites]


  • Learnings
  • Solution (as a verb in place of "to solve")
  • And my all-time favorite, "The ask" (Oh, you mean the requirement that is poorly conceived and ill thought out?)
posted by rocketpup at 8:54 AM on December 19, 2013


Solutioning the C-suite's resist to up the spend on learnings is a big ask.
posted by flabdablet at 8:59 AM on December 19, 2013 [13 favorites]


I just placed my first Amazon book order for 2013.

As for smoking, you'd have to have a double-blind study where some people are given cigarettes, some are given devices which heat air to the same temperature, some are given nothing, etc. It could just be that inhaling hot air is a bad idea, or the particles, or the nicotine, or the tar, or the burned rolling paper... or something else in there.
posted by MikeWarot at 9:00 AM on December 19, 2013


We should get a facilitator to workshop with the stakeholders.
posted by bonehead at 9:01 AM on December 19, 2013 [2 favorites]


Would that enhance our outcomes going forward?
posted by flabdablet at 9:02 AM on December 19, 2013 [1 favorite]


> an aside: anyone know why trying to jump ahead in the video resets it back to 0:00?

It's because of the software.
posted by benito.strauss at 9:02 AM on December 19, 2013 [2 favorites]


Proper incentivization is a key learning for high-uptake engagement.
posted by bonehead at 9:05 AM on December 19, 2013


We are committed to 360ing that.
posted by flabdablet at 9:07 AM on December 19, 2013 [1 favorite]


learnings

No. I will not believe this happened, because I cannot believe it. It must not be. The only answer is Borat. The only answer. The only.

You work with Borat. This is the answer.
posted by aramaic at 9:11 AM on December 19, 2013 [1 favorite]


Well, I'm glad that's settled. What say we pause and take a health break?
posted by bonehead at 9:12 AM on December 19, 2013 [2 favorites]


Let's go ahead and do that, now that we have incentivized the brainstormers to ideate actionable learnings.
posted by flabdablet at 9:30 AM on December 19, 2013


Ooh, yeah. Can't believe I forgot "the spend."
posted by rocketpup at 10:08 AM on December 19, 2013


I don't see this changing any time in my professional lifetime, either...

The appalling thing here is that at $20/paper, and given the IEEE's shitty, antiquated search tools, you're unlikely to ever be able to find out how wrong you are.

This research is being done, enthusiastically and with great rigor and precision, right now, today. But if you're outside the pay wall, you'll never know.
posted by mhoye at 10:56 AM on December 19, 2013


Can we please keep the comments more contextualized to the original post? Thank you
posted by Colonel Panic at 10:56 AM on December 19, 2013 [1 favorite]


Solutioning the C-suite's resist to up the spend on learnings is a big ask.

I hate you so much right now.
posted by mhoye at 10:57 AM on December 19, 2013 [4 favorites]


I've always wanted to look up the citations, but never got around to it.

I'm not sure I got everything right; there were some discrepancies between what was on the slides and what I could find. Greg, if you're out there, part of science is proper bibliographies!

I am really grateful for this talk. We're taking on some new projects at work and I'm hoping to use them as an excuse to try out some changes to our workflow. The code review stuff in particular is really important. I was initially shocked at the idea that a rewrite becomes more economical if even 25% has to change, but it makes sense; 25% is actually quite a lot.
posted by a snickering nuthatch at 12:37 PM on December 19, 2013 [1 favorite]
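
For what it's worth, here's one back-of-the-envelope way to make that 25% figure feel less surprising. This is a toy model, not the one behind the slide: assume modifying a line of existing code costs k times as much as writing a fresh line (because you have to understand it, change it, and re-test it in context), and assume a rewrite costs roughly one fresh line per existing line. Then modifying a fraction f of an N-line program costs f*N*k, the rewrite costs about N, and the break-even point is f = 1/k.

# Toy model (an assumption, not the talk's): a modified line costs k times a
# freshly written line; a rewrite costs ~1x per line. Break-even fraction: 1/k.
for k in (2, 3, 4, 5):
    print(f"if a modified line costs {k}x a new one, a rewrite wins above ~{1/k:.0%} changed")

Under those assumptions, a per-line modification cost of about 4x new code is all it takes for 25% to be the tipping point; the actual model behind the slide may differ.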


I'm sorry for the sidebar, Facilitator Panic. We're parking-lotted.
posted by bonehead at 12:58 PM on December 19, 2013 [1 favorite]


"Evidence based" is only a useful term in opposition to making things up whole cloth. The evidence is still very thin.

I've read through the results of the papers included in "What we actually know about software dev", and the answer is still... not much. I haven't gone through every paper in extreme detail (although I had read several prior to reading the book), and I'm not sophisticated enough to critique the statistical methods. However, the data they use ... hasn't been proven to be generalizable. To their credit, the authors know this, and a lot of the data/methods are critiqued by the authors themselves in the conclusions of their papers. So, this does not mean they are lying, just that it's still very early, and I would not rely on any kind of general prescription based on this research.

In other words, I would follow Greg's implicit advice -- don't believe him. Read the studies in question and, in particular, see if the types of projects and organizations measured relate to the project you're undertaking and the people who are doing it -- and be prepared to change mid course if the attributes of your project don't exactly line up. Numbers alone, without careful framing, will lie to you.
posted by smidgen at 1:07 PM on December 31, 2013



