"There are no real consequences for having bad security.”
November 7, 2013 10:31 AM

Should software makers be held financially liable for the insecurity of their products? "The joke goes that only two industries refer to their customers as “users.” But here's the real punch line: Drug users and software users are about equally likely to recover damages for whatever harms those wares cause them."

"The most common analogy is the car. And there are legitimate parallels between the vehicle safety crisis of the 1960s and today’s software security conundrum. Then, state and federal courts were reluctant to apply tort law even where automobile-accident victims claimed their injuries resulted from the failure of manufacturers to exercise reasonable care in the design of their motor vehicles. Over the next thirty years, however, the courts did an about-face: they imposed on automobile manufacturers a duty to use reasonable care in designing products to avoid subjecting passengers to an unreasonable risk of injury in the event of a collision; applied a rule of strict liability to vehicles found to be defective and unreasonably dangerous; and held automobile manufacturers accountable for preventing and reducing the severity of accidents.

Yet to insist that software defects and automobile defects should be governed by substantively similar legal regimes is to ignore the fact that “software” is a category comprising everything from video games to aircraft navigation systems, and that the type and severity of harms arising from software vulnerabilities in those products range dramatically. By contrast, automobile defects more invariably risk bodily injury and property damage. To dismiss these distinctions is to contribute to an increasingly contrived dichotomy, between those who see the uniqueness of software as an argument for exempting software programs from traditional liability rules altogether, and those who stress that software is nothing special to claim that the road to software vendor liability lies in traditional contract or tort remedies."

Part 1: Bad Code: Should Software Makers Pay?
Part 2: Why Is Our Cybersecurity So Insecure?
Part 3: What You Don't Know About Internet Security Will Definitely Hurt You
Part 4: We Need Strict Laws If We Want More Secure Software
Part 5: The Security Burden Shouldn't Rest Solely on the Software User
posted by not_the_water (90 comments total) 19 users marked this as a favorite
 
I have a t-shirt that says "Strict Liability" on it, just because I love that phrase so much. That is all.
posted by anotherpanacea at 10:36 AM on November 7, 2013 [1 favorite]


What is this, I don't even....

I can't even express how much this would increase the cost of even trivial applications. I don't believe that any software has ever shipped without bugs, and that includes programs that deal directly with life and death. To say that we could create bug-free software would be to imply that we understand the causes and can realistically have zero defects. I don't know what sort of magical world that would be, but I would like to live in it.
posted by blue_beetle at 10:43 AM on November 7, 2013 [19 favorites]


Windows is a Gatesway drug, amirite?
posted by weapons-grade pandemonium at 10:45 AM on November 7, 2013 [1 favorite]


Should journalists be held liable for spreading stupid ideas?
posted by stbalbach at 10:47 AM on November 7, 2013 [32 favorites]


Sorry, but caveat emptor applies here. Software is a fluid technology, and what may be crashproof/hackerproof today may not be tomorrow.

Also the other thing they have in common is that you should always get to know your dealer.
posted by Renoroc at 10:48 AM on November 7, 2013 [1 favorite]


I'm really hoping that formally verified software slowly takes over the world. Or at least, the core dependencies that other software uses.

It would be pretty awesome to have a formally verified browser running on a formally verified operating system; even if the market won't make these, academia eventually will.

I realize that verification doesn't do anything about security bugs in design, but at least it can help with bugs in implementation.
posted by a snickering nuthatch at 10:50 AM on November 7, 2013 [1 favorite]


Due care and foreseeability won't make this Cardozo class liability. I am for this.
posted by Samuel Farrow at 10:52 AM on November 7, 2013


Isn't this largely a matter of convention plus market failure plus most software's inability to physically injure?

There is one glaring problem with the software market: Software should not receive any copyright protection unless the source code is public. Why the fuck can you derive a copyright on a piece of machine generated code while hiding the human readable part? All the constitutional arguments for patents being published apply to copyright as well, but machine generated copyrighted works didn't exist back then.

If we de facto compelled all code to be open source or custom by revoking the copyright for all closed source code, then we could facilitate a secondary market for insurance against software defects, which might circumvent the market failure part.

Ain't so easy to change conventions to make people buy insurance though, as the health care debate proves.
posted by jeffburdges at 10:53 AM on November 7, 2013 [3 favorites]


I can't even express how much this would increase the cost of even trivial applications. I don't believe that any software has ever shipped without bugs, and that includes programs that deal directly with life and death.

At present, I believe that if a manufacturer of robot gun turrets or automated, armed predator drones ships defective code, they should probably be liable for the murders. But then, I'd also like to discourage these things from existing.
posted by Going To Maine at 10:54 AM on November 7, 2013


You guys might consider reading the articles.
posted by enn at 10:55 AM on November 7, 2013


(Relatedly, from 2007: Robot Cannon Kills 9, Wounds 14)
posted by Going To Maine at 10:55 AM on November 7, 2013 [3 favorites]


The analogy to cars would be apt if there were substantial financial incentives to find new and previously unimagined ways in which objects could be collided.
posted by 0xFCAF at 10:58 AM on November 7, 2013 [4 favorites]


Software security and vehicle safety are not analogous. Software security and vehicle security are. And you can't sue Ford because you leave your car out in the street and somebody busts your window to steal your stuff.
posted by burnmp3s at 11:00 AM on November 7, 2013 [3 favorites]


Should Woody Allen be held financially liable for the insecurity [portrayed in] his product[ion]s?
posted by The 10th Regiment of Foot at 11:05 AM on November 7, 2013 [3 favorites]


In broad strokes, I'd agree with burnmp3s that software security is analogous to vehicle security, not vehicle safety. There is however a grey area for system software: Microsoft should probably become liable if they say Windows is suitable for running a medical robot, but probably not if they never make that claim. What happens if you claim Linux, etc. is the most stable operating system? A priori, probably nothing. But what if you had a financial interest in the medical robot maker believing you? It's tricky. You need a secondary insurance market for operating system vulnerabilities, but that benefits enormously from open source.
posted by jeffburdges at 11:07 AM on November 7, 2013


> I can't even express how much this would increase the cost of even trivial applications. I don't believe that any software has ever shipped without bugs, and that includes programs that deal directly with life and death. To say that we could create bug-free software would be to imply that we understand the causes and can realistically have zero defects. I don't know what sort of magical world that would be, but I would like to live in it.

I've been a professional programmer for thirty years now, and I can't tell you how often I've heard this rant.

It embarrasses me every time - suppose architects felt this way:

"I can't even express how much this would increase the cost of even trivial buildings. I don't believe that any buildings have ever been put up without flaws, and that includes buildings that deal directly with life and death. To say that we could create buildings without flaws would be to imply that we understand the causes and can realistically have zero defects. I don't know what sort of magical world that would be, but I would like to live in it."

This argument is the very reason that we can't have software that won't break - because developers are unwilling to take responsibility for their own work.

Of course, it's impossible to guarantee that a program won't fail 100% of the time, even a critical one - but it's impossible to guarantee 100% that a building won't fall down either - that's what liability insurance is for.

There is at least one piece of major software that was essentially bug-free - the software controlling the space shuttle - but that was because they decided that making it work completely correctly was a priority of theirs. That article is very interesting - because none of the people involved is technically a "rock star programmer", and none of them put in long hours, and yet their productivity in terms of code developed is not much lower than the industry average.

The reason that software is so buggy should be immediately obvious to everyone in the industry - because managers are eager to push new features out of the door with reliability a distinctly secondary concern. If they thought their company might get sued if something went wrong, they wouldn't be so fast to tell you to "throw it over the wall and ship it".
posted by lupus_yonderboy at 11:16 AM on November 7, 2013 [47 favorites]


Drug users and software users are about equally likely to recover damages for whatever harms those wares cause them.

What are these damages that we're speaking of?

As an end user, you are quite unlikely to be materially harmed by flaws in your own software. Being co-opted into a botnet is not actually significantly harmful to you. Most actual damage, which is generally in the form of identity theft, seems to be the result of phishing scams. I don't think software companies should be liable for your choice to give out your password, no matter how convincingly an imposter asks you for it.

Even if you are targeted by a botnet, and this results in material damages, who would be liable? The authors of the software with the security flaw that allowed the botnet software to be installed on the attacking computers? This is almost impossible to discover and may vary wildly between different parts of the botnet. You were DDOSed by 50,000 infected computers. Do you sue Microsoft for a Windows bug? Adobe for a Flash bug? Someone else? All of them?

How would you trace particular damages to particular software? What sort of examples or evidence are there that there is a significant amount of damages being caused that could be traced this way?

Certainly, this is easy for defective software in a medical device or an aircraft's autopilot, but it seems a lot less clear for a PDF reader with a buffer overflow vulnerability.
posted by tylerkaraszewski at 11:20 AM on November 7, 2013 [2 favorites]


From Neil Gaiman and Terry Pratchett's "Good Omens" (published in 1990), a demon from hell's admiring take on computer industry warranties:
Along with the standard computer warranty agreement which said that if the machine 1) didn't work, 2) didn't do what the expensive advertisements said, 3) electrocuted the immediate neighborhood, 4) and in fact failed entirely to be inside the expensive box when you opened it, this was expressly, absolutely, implicitly and in no event the fault or responsibility of the manufacturer, that the purchaser should consider himself lucky to be allowed to give his money to the manufacturer, and that any attempt to treat what had just been paid for as the purchaser's own property would result in the attentions of serious men with menacing briefcases and very thin watches. Crowley had been extremely impressed with the warranties offered by the computer industry, and had in fact sent a bundle Below to the department that drew up the Immortal Soul agreements, with a yellow memo form attached just saying: "Learn, guys..."
posted by George_Spiggott at 11:21 AM on November 7, 2013 [11 favorites]


And you can't sue Ford because you leave your car out in the street and somebody busts your window to steal your stuff.

You can sue Ford if:
  • they decide it's cheaper to use the same key for every car
  • they decide to use those round-key locks that you can pick with a Bic pen years after it's become known that they are not secure and after industry best practices have moved on to more reliable lock types
  • they keep a database of key patterns for individual cars and inadvertently publish that database on ford.com
...etc. All of which are more analogous to actual failures on the part of software makers than your scenario that places all of the blame on the user.

It is disappointing to see programmers mocking this idea. We should know better than anyone how awful most software is, and we should also realize better than anyone that only legal pressure (whether via litigation or legislation) will cause people to accept the longer development times and higher costs that would be necessary to have better software, since users generally do a very poor job of judging software quality and security until it's too late.
posted by enn at 11:22 AM on November 7, 2013 [2 favorites]


Strict liability means open-source software would basically end, for what it's worth. Why scratch an itch when you might be incurring a catastrophic liability risk?
posted by mhoye at 11:22 AM on November 7, 2013 [6 favorites]


There's no reason that I, as a government, or large corporation, or hell, a citizen group, couldn't go to SuperSoftwareCompany and say, "Hey, I need a system to run our jelly bean factory, you will be held liable for damages resulting from bugs in the system, please give us a quote".

SuperSoftwareCompany would go and talk with Lloyds of London, get their crack team of lawyers together and work out the details about what would be considered under the liability and therefore the insurance rates to cover that risk.

They'd take that back to us, submit the quote and that's it-- the system works. The only question is if you'd want to pay the quote they'd give you.
posted by Static Vagabond at 11:25 AM on November 7, 2013 [1 favorite]


What sort of examples or evidence are there that there is a significant amount of damages being caused that could be traced this way?

There are numerous such scenarios discussed in TFA. Most have to do with customer data being stolen; either there is a class-action suit against the company that was breached, or that company sues the vendor of the software with the vulnerability.
posted by enn at 11:25 AM on November 7, 2013


Those are actual decisions on their part that knowingly make the product less secure; basically they're akin to fraud, because they're claiming they did the reasonable thing when actually they cut unreasonable corners. Accidental defects are a liability when they make products unsafe. I'd probably agree that websites not hashing user passwords rises to the level required to trigger liability.
posted by jeffburdges at 11:28 AM on November 7, 2013


In a way it's as if it's still 1976 and personal computers are a hobbyist thing; by plugging it in and installing software you're basically the builder, and if you want to try managing your household finances with it, well aren't you clever, have fun with that!

Think of kit plane manufacturers: you can buy an entire aircraft, in multiple crates, by writing a single check, but they make it abundantly clear that what they have sold you should in no way be regarded as an aircraft -- it is a set of parts and if you decide to put them together in such a way as to make an aircraft and then go fly in it, well, no skin off our nose and aren't you enterprising!
posted by George_Spiggott at 11:28 AM on November 7, 2013 [1 favorite]


There are numerous such scenarios discussed in TFA.

Which article? Because they're not in the headline article. I have not read all of parts 1-5 at this time.
posted by tylerkaraszewski at 11:31 AM on November 7, 2013


The analogy to cars would be apt if there were substantial financial incentives to find new and previously unimagined ways in which objects could be collided.

From 2010: Hacker Disables More Than 100 Cars Remotely. Not exactly an apt example, but a reminder that your car is getting many digital things installed. More appropriately, from 2012: High tech car theft: 3 minutes to steal keyless BMWs.
posted by Going To Maine at 11:39 AM on November 7, 2013


jeffburdges: "I'd agree that websites not hashing user passwords rises to the level required to trigger liability probably."

What if a newbie coder builds their own multi-author blog system with plaintext passwords and puts it up on github, a few people fork it and make changes to the theme of it, of those forks one is particularly nice and a few dozen people install it-- one of those sites gets popular and has hundreds of members. That site is hacked and one of the members is damaged by it.

Who's liable?
posted by Static Vagabond at 11:42 AM on November 7, 2013


You can sue Ford if:
they decide it's cheaper to use the same key for every car
they decide to use those round-key locks that you can pick with a Bic pen years after it's become known that they are not secure and after industry best practices have moved on to more reliable lock types
they keep a database of key patterns for individual cars and inadvertently publish that database on ford.com


I'm not a lawyer, but at least for the first two I'm not sure that you could successfully win a lawsuit about lock quality on a product. The reason why Ford uses key systems that can't easily be defeated is because if they didn't everyone would buy a different brand of car with better security features, not because they are legally obligated to use a state-of-the-art system or are otherwise going to be held liable for vehicle thefts. Ford could legally sell cars with no door locks, or no doors for that matter.
posted by burnmp3s at 11:43 AM on November 7, 2013 [1 favorite]


What if a newbie coder builds their own multi-author blog system with plaintext passwords and puts it up on github, a few people fork it and make changes to the theme of it, of those forks one is particularly nice and a few dozen people install it-- one of those sites gets popular and has hundreds of users. That site is hacked and one of the users is damaged by it.

Who's liable?


A possible answer: Who has guaranteed what? If the newbie coder guaranteed that the passwords were being hashed, he/she would be. If the company that put up the site was guaranteeing that the passwords were hashed, it would be. If no one has guaranteed anything, the users are stone-cold out of luck.
posted by Going To Maine at 11:46 AM on November 7, 2013 [1 favorite]


If the software in your car fails in a dangerous way, it is handled in the same way other automotive safety issues are. It doesn't get a free pass. I would imagine the same thing is true of software in airplanes. Meanwhile, I can't sue a lock company because their lock doesn't prevent a break-in or even a murder, presumably because the lock doesn't cause the break-in or murder. I don't think it's unreasonable to put software security generally in the same bucket as a lock.
posted by davejay at 11:53 AM on November 7, 2013


Do other peoples' companies not have to negotiate the details of liability as part of the contract when they build software for clients?
posted by verb at 11:56 AM on November 7, 2013


This argument is the very reason that we can't have software that won't break - because developers are unwilling to take responsibility for their own work.

I think there's more to it than that. My software is going to be run on a number of different operating systems (even if you just limit yourself to Windows, there are dozens of flavors and service packs out there) with a bunch of different hardware configurations and who knows what other software running on the system? Plus, people do all sorts of crazy shit with software.

People drive cars on roads. Houses are built on the ground. If people built houses that hung from trees that were accessed by sliding down ropes and electricity came through a bare cable that ran straight through the living room and the bathroom had no walls (because it spoils the view, ya know?) then architects would have a much harder job building safe houses. If people drove cars at 150mph, routinely, and drove on ice, through meadows, etc, retrofitted steering wheels on the roof and drove from there, and tied a couple of hundred boxes to the rear bumper, then cars would be really dangerous.

But that's how a lot of consumer software is used. It's bizarre. The reliable software (and it does exist) is typically embedded, running on a single hardware platform, with nothing else on the machine. I'm not allowed to tweak it or mess with it - I have to let it do its thing. That stuff is pretty reliable.
posted by It's Never Lurgi at 11:56 AM on November 7, 2013 [4 favorites]


But the newbie coder hashed with MD5 without salt, so it's actually worthless. Or there's a bug and only a small number of salt values got produced and now a rainbow attack is feasible. Or he used a single round of a fast hash function when he should have used multiple rounds, so it's crackable in a reasonable amount of time. Or there's a timing attack on the hash algorithm that just got discovered and now that's being exploited.

Is one of those negligence? All of them? None of them? Are we going to expect the courts or the legislature to determine what an appropriate password storage strategy is?
posted by 0xFCAF at 11:58 AM on November 7, 2013
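
For concreteness, here is a minimal sketch of the kind of salted, deliberately slow password hashing being argued over above, using only Python's standard library (hashlib.scrypt). The cost parameters are illustrative assumptions rather than any legal or compliance standard, and it addresses only some of the failure modes 0xFCAF lists:

import hashlib
import hmac
import os

def hash_password(password):
    # A unique per-user salt defeats precomputed rainbow tables.
    salt = os.urandom(16)
    # scrypt is deliberately slow and memory-hard; n/r/p below are example cost settings.
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def check_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, digest)
assert not check_password("hunter2", salt, digest)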


But that's how a lot of consumer software is used. It's bizarre. The reliable software (and it does exist) is typically embedded, running on a single hardware platform, with nothing else on the machine. I'm not allowed to tweak it or mess with it - I have to let it do its thing. That stuff is pretty reliable.

Interestingly enough, it's precisely that kind of software that folks like Cory Doctorow hate the most. He's said many a time that locked-down software-driven products should not exist, and that they're an affront to the human spirit and all that.
posted by verb at 12:00 PM on November 7, 2013


This argument is the very reason that we can't have software that won't break - because developers are unwilling to take responsibility for their own work.

Err, I don't think it's that simple and moreover, I think you are conflating the word "responsibility" as used conversationally with the word "responsibility" as used in this article to mean legal liability.

Real-world example: let's say that I am a programmer working on user login. I know for a fact that whatever proven algorithm I pick to hash my passwords is likely to become obsolete at some point in the future, and conversely, that the latest-and-greatest algorithms will take years to be proven. Do I want to end up in a court with a lawyer barking at me, "So you KNEW that the SHA256 algorithm would become outdated in the next several years and you still used it, RECKLESSLY and NEGLIGENTLY"?

There is at least one piece of major software that was essentially bug-free - the software controlling the space shuttle - but that was because they decided that making it work completely correctly was a priority of theirs.

Their attitude may have been a part of it but consider that per your source, we are talking (1) 260 of America's "best and brightest" (2) on a single years-long project (3) where they had complete end-to-end control of their software.

The real world of "cloud" software, which is what most of those "weak link" end consumers interact with on a daily basis, is not like that at all. It's not always the best and the brightest; the projects are always understaffed; development cycles are short; and most importantly, there is a jumble of building blocks, open source and otherwise, cobbled together without the slightest hope of thoroughly understanding their inner workings and security implications thereof.
posted by rada at 12:05 PM on November 7, 2013 [7 favorites]


People who want stricter liability for software should remember that we're dealing with software companies.

As in limited liability corporations whose physical wealth is not exactly very large.

If we instituted strict liability, the first plaintiff to win a lawsuit will get not much more than a truckload of Aeron chairs and a whole lot of O'Reilly animal books.
posted by ocschwar at 12:06 PM on November 7, 2013 [6 favorites]


He's said many a time that locked-down software-driven products should not exist, and that they're an affront to the human spirit and all that.

It's a poor excuse for Fry & Laurie, but I'm going to take it! "For too long broadcasting has been in the grip of a small elite. We must expand and offer more choice!"

posted by Going To Maine at 12:09 PM on November 7, 2013 [1 favorite]


Who's liable?

I would think that generally it starts with someone accepting cash in exchange for a product.

If you find something free on the street and it turns out it doesn't work, that doesn't sound very much like a liability situation to me. Does it sound like a liability situation to you?
posted by anonymisc at 12:10 PM on November 7, 2013


If we instituted strict liability, the first plaintiff to win a lawsuit will get not much more than a truckload of aeron chairs and a whole lot of Oreilly animal books.

Are you being serious?!
This is a solved problem. An American surgeon doesn't keep a suitcase of money stuffed under his mattress in case he accidentally amputates on the wrong person, yet a person wrongfully losing a limb can still be compensated with more than the stuffed toys from the clinic waiting room.
posted by anonymisc at 12:16 PM on November 7, 2013 [3 favorites]


verb: "But that's how a lot of consumer software is used. It's bizarre. The reliable software (and it does exist) is typically embedded, running on a single hardware platform, with nothing else on the machine. I'm not allowed to tweak it or mess with it - I have to let it do its thing. That stuff is pretty reliable.

Interestingly enough, it's precisely that kind of software that folks like Cory Doctorow hate the most. He's said many a time that locked-down software-driven products should not exist, and that they're an affront to the human spirit and all that.
"

I don't know, there's a tangle of assumptions here that I'm not sure hold up. The visibility of the source and the modifiability of the platform are technically orthogonal, even if it's atypical in practice. Moreover, while you could make the argument that designing a platform for modifiability is a good way to introduce bugs and vulnerabilities, it's easy to imagine a controlled platform that doesn't take steps to actively prevent tinkering, while also not making any guarantee of proper operation in the event of such modifications being made.

Also: is that something that Cory Doctorow really believes about all computing environments? I always thought that notion as popularly espoused really only applied to personal computing.
posted by invitapriore at 12:21 PM on November 7, 2013


@anonymisc: incidentally, liability insurance costs for US surgeons = salaries for US programmers.
posted by rada at 12:22 PM on November 7, 2013


I'm sitting here sighing and rubbing my eyes. Because this kind of liability does exist, but it's covered by contract law. In other words, if Corporation A hires Corporation B to write Program X, they can make legal liability for failures in the software part of the contract. That happens. It's not super common, because it makes things more expensive and makes software companies much more fussy about things like "specifications" and "change orders" and all that bothersome shit clients don't want to mess with. But it does exist.

You don't see it much in traditional commercial software, however, because of the nature of IP law and the general popularity of clickthrough licensing agreements that scream, "We're not responsible if you choke on this copy of Acrobat Pro and die."
posted by verb at 12:23 PM on November 7, 2013 [8 favorites]


Strict liability for bugs is a great thing for anyone who wants to wait the better part of a decade for that new feature to be added.
posted by one more dead town's last parade at 12:27 PM on November 7, 2013 [6 favorites]


Also: is that something that Cory Doctorow really believes about all computing environments?

He hasn't offered any exceptions along those lines, so I don't know what he'd say if asked directly, but in his past writing and speaking he's made it fairly clear that he thinks "general purpose computers" should not be locked down, and that attempts to create "appliances" rather than hackable hacker toys is part of a giant war on human freedom.

On one hand, I'm sympathetic. On the other, Lurgi's comment nails it. General purpose computing devices are being used to make gadgets and gizmos and cars and toasters and routers and so on. The way we make those kinds of things reliable, and safe, and bulletproof is simple: we build them, test them, lock them down, and epoxy the case shut. We make it impossible to modify, and then say, "There. It works like that."

There's a real tension between the desire for fast-moving "Just build the software while we think it up!" demands by clients, the public demand for Features Features Features, hackers' desire for 110% accessibility, and the desire for 100% reliability.
posted by verb at 12:32 PM on November 7, 2013 [1 favorite]


Contracts, reasonable expectations, products being fit for the purpose for which they are marketed, products turning out to be unfit, recourse when those things fail, solutions... this is not some strange new land uncharted by human eyes. It's old and well-trodden and successful tried-and-true stuff.

Defensiveness that warrantying a product just wouldn't work if that product happens to be software... I don't think that has traction with the best and brightest.

"But I like the status quo!" is entirely legitimate and has merit. But throwing up hands at problems as if catastrophic when everyone else is already leaping them and doing fine, that's just embarrassing.
posted by anonymisc at 12:33 PM on November 7, 2013 [2 favorites]


@anonymisc: incidentally, liability insurance costs for US surgeons = salaries for US programmers.

Adding the equivalent of another salary to the team might turn out to be a wise investment that brings in the big sales if it starts to make your competitors look like fly-by-night seat-of-pants operations in comparison. (And really, most of them are :) )
posted by anonymisc at 12:40 PM on November 7, 2013


On one hand, I'm sympathetic. On the other, Lurgi's comment nails it. General purpose computing devices are being used to make gadgets and gizmos and cars and toasters and routers and so on. The way we make those kinds of things reliable, and safe, and bulletproof is simple: we build them, test them, lock them down, and epoxy the case shut. We make it impossible to modify, and then say, "There. It works like that."

There's a real tension between the desire for fast-moving "Just build the software while we think it up!" demands by clients, the public demand for Features Features Features, hackers' desire for 110% accessibility, and the desire for 100% reliability.


From this month of this year, to continue my car-centric commenting: Your car is about to go open source.
posted by Going To Maine at 12:40 PM on November 7, 2013 [1 favorite]


These articles omit the reality that there is a marketplace for software defects. Law enforcement, military, intelligence agencies, and criminals are all bidding up the price. As long as a 0-day is worth more than a programmer's annual salary, there are going to be defects.

Automobile safety design is a bad analogy; there isn't a small team of government-funded experts trying to figure out how to make your car kill you.
posted by RobotVoodooPower at 12:40 PM on November 7, 2013 [1 favorite]


Strict liability means open-source software would basically end, for what it's worth. Why scratch an itch when you might be incurring a catastrophic liability risk?

As an author of a fair amount of Free Software, I'm not so sure — it seems to depend on exactly where the liability is implemented. Take Static Vagabond's question and Going To Maine's answer. That's one possible implementation that doesn't affect open-source coders much, if at all. But there are other (possibly better) implementations. To me, there is an opportunity to make this better for most people, and that would be to make the person deploying a site for others to use responsible for exercising due diligence on these issues. That doesn't have to mean reading the code or hiring experts to do so, though in some cases that may be advised. If you're deploying a site that does no commerce like a blog site, your liability is most likely smaller than with software that takes credit cards or controls cancer-fighting pulses of gamma radiation.

So I guess I'm saying this: should every coder be personally liable for publishing code somewhere that has security problems that should be obvious to a seasoned practitioner? Probably not, after all even very green programmers put their code out in public these days, and the world is much better for it. But should a major video game manufacturer be held liable when they keep millions of credit card numbers in plain text on their networked video game store and it gets rooted? Yes — because they were conducting business in such a way that there was a reasonable expectation that they'd done due diligence which would preclude them from doing something so foolish.
posted by atbash at 12:41 PM on November 7, 2013 [2 favorites]


Are you being serious?!
This is a solved problem. An American surgeon doesn't keep a suitcase of money stuffed under his mattress in case he accidentally amputates on the wrong person, yet a person wrongfully losing a limb can still be compensated with more than the stuffed toys from the clinic waiting room.


And you can expect insurance companies to offer commercial software liability insurance as soon as they have a reliable corpus of data on the frequency and severity of liability incidents in the industry.

In other words, right around the time our great grandchildren get their first developer jobs. They can pick up the discussion then. Over your grave or mine?
posted by ocschwar at 12:41 PM on November 7, 2013 [1 favorite]


"@anonymisc: incidentally, liability insurance costs for US surgeons = salaries for US programmers."

$30,000?
posted by klangklangston at 12:41 PM on November 7, 2013


As long as a 0-day is worth more than a programmer's annual salary, there are going to be defects.

An unauthorized exploit in a Vegas gambling machine's code is likewise worth more than the programmer's annual salary. But the machines turn out pretty reliable. It's a solved problem... if there is motivation to solve it.
posted by anonymisc at 12:48 PM on November 7, 2013 [1 favorite]


Adding the equivalent of another salary to the team might turn out to be a wise investment that brings in the big sales if it starts to make your competitors look like fly-by-night seat-of-pants operations in comparison. (And really, most of them are :)

I think that would vary wildly depending on the kind of business the client does. It's a lot like saying that building a shatterproof, waterproof cell phone that lasts a month without recharging, but costs $1500 and weighs three pounds, is a way to distinguish yourself from your competition. I work for a company that has to weigh the costs of certain types of liability insurance against the value of the contracts that it would allow us to bid for. At least right now, it is not worth it.

To put things in perspective, I'm staring right now at an RFP from a major international news organization. They want someone to design, implement, test, and launch a bespoke news platform for them in 90 days, starting roughly now. The odds are slim that they will be able to define what constitutes 'done' before the product is supposed to be done. And it is not uncommon.

We tend to pass on gigs like that, because, well, crazypants. But someone will take them up on it, and the thing they build will probably not satisfy the client's desires at the end of 90 days. The client will grumble, and pay them to work on it for 60-90 additional days, apologize to their users for the bumpy launch, and roll out 'improvements' over time.

Again, commercial software is a bit different. And it's absolutely true that many programmers blame the fluidity of software for the bad quality of their work.
posted by verb at 12:48 PM on November 7, 2013 [4 favorites]


The articles are not discussing just any kind of bug, they are discussing security failures. In fact, the articles explicitly avoid talking about liability for bugs in general, which is a whole other can of worms.

Given that, and the state of "security research" these days, I have to say that this is a horrible idea. I do not want to be dragged into court to defend my code as "reasonable practice" because some "white hat" decided to get some extra cash by figuring out some obscure timing attack on my code. And make no mistake, this will be used as another income source for amoral lawyers and a pay day for the defcon crowd as "consultants" if it were to be made into law.

The tools and the platforms are not good enough, and it hasn't been proven yet that they are ever going to be good enough. Hell, we now have a virus which sits in the memory of network chips and BIOSes. Are those manufacturers liable? The article seems to want to make that case, but it feels very unreasonable to me.

BTW, the space shuttle is a bad example, because, as far as I know, there wasn't any danger of a third party trying to infiltrate it. That is, they had zero security concerns, just reliability concerns (and they *still* got stuff wrong, just several orders of magnitude less than your average commercial piece of software).
posted by smidgen at 12:49 PM on November 7, 2013 [1 favorite]


In my role as software engineer for $TECHNOLOGY_COMPANY I have many times been placed in the position of taking long series of meetings with $BIG_SERVICE_PROVIDER, in which I defend platform and software infrastructure choices that our product uses, all of which have known, published risks and defects, which I have to knock down one by one. The frustrating thing about this is that I made none of these choices personally, nor would I have made them. The engineers who did have moved on or in not a few cases been dismissed for reasons relating to competence. Most of these risks are hypothetical, most of them are irrelevant to our customers' use cases, and I'm pleased to be able to say so. For the rest, I have to make the case that we understand and sufficiently mitigate these risks in some way. I do not have the option of agreeing with them that the choices were poor, nor do I have the option of refactoring the product to remove or replace these platform choices, for a whole bunch of reasons, most of them business reasons, none of which do I consider good reasons from any sound engineering perspective. If I don't like it my choices are to ask for someone else to field these meetings -- the choices there are limited and I don't think any of them would do as well -- or I can take a different role with the same company, or a different company. But the one choice I do not have is to not be involved in a product for which this stuff is untrue, because that product doesn't exist.
posted by George_Spiggott at 12:50 PM on November 7, 2013 [4 favorites]


Here's a typical software project in terms of what an architect does:

The client comes in and says, "I need a building built, we have 12 months to get it up, so I need you to start designing it now. How many people will be in it? I don't know. Somewhere between 5 and 5,000. It's your job to figure out how that works, not mine. I have promised this building to my customers and I need to get it built."

The architect gets to work and comes up with a bizarre design that works for 5 to 5,000 people. The client looks at the design, "It turns out that what my customers really need is something that can move them from point A to point B. Can you add some wheels to this? What do you mean that's not what you do? This is a building, you should be able to make it move! No, we don't have time to start over. Just figure out how to make it move! I don't want to be bothered with the details. By the way, we have partnered with a brake manufacturer and you'll need to use their brakes on the building. I know, the brake system is 20 years old and has no documentation, but we've already signed the partnership, so that's what we're stuck with. Make it work."

The architect hires some engineers and somehow shoehorns a system into the building that allows it to move, albeit slowly. It turns out the brake company uses a proprietary measurement system for its parts, and with no documentation, the engineers spend way too much time reverse engineering how to fit the brakes to the wheels.

The client comes in to take a look at the progress. "This looks great! I knew you could do it! One minor thing. When I said that we needed to move from point A to point B, what I meant is that the building needed to be able to fly. This is a great start, but... Hey, what are you doing with that letter opener? Why is your face red? Hey man, I'm just passing on what the customers want. Calm down, okay? Oh sh-"

Who is liable for the building not flying?
posted by ryoshu at 12:59 PM on November 7, 2013 [18 favorites]


All this, and no one commenting paid attention in theory class? Turing decidable/complete, halting problem, etc ? Verified software indeed.
posted by k5.user at 1:13 PM on November 7, 2013 [6 favorites]


unsafe at any clock speed
posted by uosuaq at 1:13 PM on November 7, 2013 [2 favorites]


halting problem, etc

The computer will always halt when viewed on an appropriate time scale. ;)

Not quite sure how that relates to appropriate liability for not taking appropriate measures to secure products or services.

posted by atbash at 1:20 PM on November 7, 2013


k5.user: "All this, and no one commenting paid attention in theory class? Turing decidable/complete, halting problem, etc ? Verified software indeed."

It's amusing in the context of this discussion that this is basically unparseable. I can't tell if you're arguing in favor of formal verification or wrongly asserting its practical impossibility.
posted by invitapriore at 1:23 PM on November 7, 2013 [1 favorite]


I write software for a living and I don't object to this, but I don't think for a moment the market is willing to pay for secure code. There are certain subject domains where it is necessary and paid for and a vast number of domains where it is necessary and rarely paid for.
posted by dgran at 1:26 PM on November 7, 2013 [2 favorites]


ryoshu - I laughed, I cried. That's it exactly. I was recently on a project where the client sent frequent vague emails about "must-have" additions to the project and yet there was not a single employee at the client whose job it was to spend even 1 day a week worth of effort coordinating on their end. They were also trying to implement like 4 large systems at once, with different vendors for each one, and asking us to write glue code that would work with all of those. It's completely unlike a team building software for a well-defined agency project like the space shuttle.
posted by freecellwizard at 1:29 PM on November 7, 2013


Computability theory and being able to say with full accuracy what some arbitrary code will do on all inputs. (halt, not halt, crash, not crash, divulge your mother's maiden name and your Amazon wish list, etc.)
posted by k5.user at 1:32 PM on November 7, 2013


As long as a 0-day is worth more than a programmer's annual salary, there are going to be defects.

While I have no data to back this up, because that data would be impossible to get unless you were an exploit broker... I get the feeling that a HUGE percentage of these bugs are found by independent crackers, so much so that the programmers' share might not even show up on a pie chart.

I don't even see what basis you'd have to claim that other than that it sounds good, although I don't really have anything to back up my theory either.

It's just that an independent cracker seems a lot more plausible to me than some dev risking their job for a payout, unless they had just been fired.
posted by emptythought at 1:34 PM on November 7, 2013


k5.user: "Computability theory and being able to say with full accuracy what some arbitrary code will do on all inputs. (halt, not halt, crash, not crash, divulge your mothers maiden name and your amazone wish list, etc)"

You are very good at listing terms, yes, but formal verification doesn't require the ability to make such guarantees about arbitrary code, only the code in front of you. There are cases where you'll run up against something unprovable, but that doesn't invalidate all the cases where you don't. Here's [PDF] a paper on a formally-verified OS kernel.
posted by invitapriore at 1:39 PM on November 7, 2013


Computability theory and being able to say with full accuracy what some arbitrary code will do on all inputs.

Gödel proved there are some mathematical statements that can't be proved, and that didn't stop mathematicians from trying to prove specific statements. Turing's proof shouldn't stop programmers from trying to prove specific programs as correct either. It helps that we're not trying to write a program that can decide if an arbitrary program is safe with arbitrary input. It's more about specifying limitations when writing programs to make it easier to prove that the finished program is safe. Much easier.
posted by Green With You at 1:42 PM on November 7, 2013 [3 favorites]
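
To make "proving specific programs" concrete, here is a minimal sketch using the Python bindings of the Z3 SMT solver (one real tool in this space; the program and its specification are deliberately tiny, and this is an illustration, not a claim that whole browsers fall out of a solver this easily):

from z3 import And, BitVec, If, Not, Solver, UGE, ULE, unsat

x = BitVec("x", 32)  # an arbitrary unsigned 32-bit input

# The "program": clamp x into the range [10, 20].
clamped = If(ULE(x, 10), 10, If(UGE(x, 20), 20, x))

# The specification: the result always lies in [10, 20].
spec = And(UGE(clamped, 10), ULE(clamped, 20))

s = Solver()
s.add(Not(spec))            # ask the solver for any input that violates the spec
assert s.check() == unsat   # unsat: no counterexample exists among all 2**32 inputs
print("clamp verified for every 32-bit input")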


freecellwizard: "ryoshu - I laughed, I cried. That's it exactly."

It's not the client's fault-- they want the world and don't have the technical ability to deliver it, that's why they came to you.

When the project started there should have been a tight specification agreed between you and the client that you worked towards, that's where your knowledge and experience shines, because you can help them define the best system for them that you're able to deliver on time.

If the client then asks for Facebook integration or fancy blinkenlights, then you say the magical words "Sorry, that's not in the scope-- would you like us to quote you for that change and let you know how far that pushes the project back?"

And that's how it somewhat sadly works in the bigger development contracts, a government asks for a bid based on some shitty in-house built software requirements, then the tricky software companies make low-ball bids, knowing full well the government's scope changes will be worth more than the initial bid.

The real problem is this frustrating constant feeling among smaller software developers that they are the underdog and have to appease the client's every whim, rather than doing some upfront tight requirements to solve the majority of the problems.
posted by Static Vagabond at 1:46 PM on November 7, 2013 [2 favorites]


I'd like to add that in a business context it is very common for the development focus to be on user-requested features and not on things like security, scalability, etc. Especially on small projects you may have a team where developers are aware of these areas but are not experts in them, or even if they are, little or no development time is allocated to them. That is, there may not be a time during the project where one person comes in every day for a few weeks and load tests the site (*cough* ACA site *cough*) or tries to hack it. That's my experience at least. Clients also often are not willing to see project schedules that have big chunks of time where useful end-user features are not being worked on.

Remember you're dealing with a world where if you show a client a mockup that allows any interaction at all they think the project is 90% done.

The up front cost of all projects would go up if everyone used best practices for security etc., but maybe that's a good idea.
posted by freecellwizard at 1:51 PM on November 7, 2013 [3 favorites]


The up front cost of all projects would go up if everyone used best practices for security etc., but maybe that's a good idea.

I think it is, to some extent, but in my experience only a cataclysmic failure convinces a company that it's worth the investment. Until they are actually the company that leaked a million client credit card numbers, companies tend not to see the value of time and resources spent verifying (rather than hoping for) basic security.
posted by verb at 1:58 PM on November 7, 2013


I write software for a living and I don't object to this, but I don't think for a moment the market is willing to pay for secure code. There are certain subject domains where it is necessary and paid for and a vast number of domains where it is necessary and rarely paid for.

Precisely, this is why market solutions are inadequate here (just as they were in the case of auto manufacturers; if we'd waited for the market to demand better crash protection, we'd still be waiting), and the only viable solution is a regulatory one.
posted by enn at 2:02 PM on November 7, 2013 [1 favorite]


The real problem is this frustrating constant feeling among smaller software developers that they are the underdog and have to appease the client's every whim, rather than doing some upfront tight requirements to solve the majority of the problems.

Sorry if I wasn't clear. I wasn't saying we jumped and coded every whim. In fact we spent a ton of time clarifying the whims, converting them to management user stories, prioritizing them, and so on. I don't really ever expect anyone to know exactly what they want up front, which is why giant spec documents are out of favor compared to doing iterative development. I just meant that too many clients think hiring a company to build software means they don't have to think about it themselves. In the case I mentioned, 4 big outsourced projects should have meant a minimum of 4 in house full time project managers to interact with the software vendors. In the absence of that, things get messy. If you hire a company in say India to build you something, great, but you need to fly people to India frequently to interact with the team in person. Fire and forget projects never, ever work out.

I may have strayed from the original security topic ... apologies. But certainly the idea that the business world is full of neat little projects that are super secure and bulletproof in other ways is false. The bespoke software industry is a mess. Some basic enforceable standards might not be a bad idea.
posted by freecellwizard at 2:04 PM on November 7, 2013 [1 favorite]


Precisely, this is why market solutions are inadequate here (just as they were in the case of auto manufacturers; if we'd waited for the market to demand better crash protection, we'd still be waiting), and the only viable solution is a regulatory one.

A few things (like making PCI compliance an actual requirement for anyone collecting payment information) wouldn't be a bad idea. However, the popularity of the catch-all label "security" doesn't give me much reason to hope that clueful people would actually write said legislation. Can we legislate that people fooled by social engineering be fired, for example?

In addition, the "automotive safety" vs "software security" comparisons are still pretty weird.
posted by verb at 2:20 PM on November 7, 2013


@klangklangston: $30,000?

I think you are being sarcastic? Can't tell. Here is the thing though. Liability insurance for surgeons varies by risk, from $10k for a low-risk general surgeon in rural Minnesota to $200k+ for a high-risk specialist surgeon in California. But if you are a software developer, you are nearly universally high-risk: you always hope to sell as many copies of your software as possible and there is always a domino effect.

If Joe Bad Coder leaks just 100,000 login credentials via joe.com, and these credentials happen to substantially co-exist[1] on banking and brokerage websites, and assuming an average account balance of $10k, Joe is now responsible for one billion dollars of real-money losses. Do you really think there will be low-risk, low-cost liability insurance available given how easy it is to get from 100,000 sign-ups to $1B in damages?

[1] What I mean by "substantially co-exist": yes, people are required to use longer passwords on banking websites - but instead of using different passwords, they just tack on a few numbers at the end so the passwords are longer but just as easily crackable in machine-time.
posted by rada at 2:23 PM on November 7, 2013 [2 favorites]
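
The arithmetic behind that domino-effect worry, spelled out (the figures are the assumptions rada states above, not real data):

leaked_credentials = 100_000    # logins leaked via the hypothetical joe.com
avg_account_balance = 10_000    # dollars per account, as assumed above

# Worst case: substantially all of the leaked credentials also unlock a
# banking or brokerage account.
exposure = leaked_credentials * avg_account_balance
print(f"worst-case exposure: ${exposure:,}")    # $1,000,000,000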


There's real value in making software quickly that mostly works, and that's why we're still doing it. Software that mostly works that hits the market in three months seems to be a lot more valuable than software that completely works and takes two years to hit the market. Similarly, there's real value in being able to make something even though you haven't totally figured out what the exact solution will look like. There's real value in allowing business people to change their minds about what they want while the process is ongoing. One of the truly awesome and wonderful things about software is how malleable it is.
posted by chrchr at 2:25 PM on November 7, 2013 [4 favorites]


I had to call my bank a few weeks back to get my web password reset. I had forgotten all the answers to my password recovery questions.

While on the phone I complained about the questions, saying "I don't have a spouse, I don't have any siblings, I don't have any pets, almost all the questions are about those things. Luckily I'm not an orphan and know my mother's maiden name."

They told me just to use the same answer for every question, which of course everyone does already.

I have to take a very vague thousand foot overview security training every 6 months. It tells me I have to follow PCI rules, implement ISO whatever secure coding standards, ensure data availability by proper backup procedure, follow proper security incident handling protocols. These are corporate policy, they have made it clear to me. Now it is on my head if I am not following corporate policy and there is an incident because they told me to make everything secure didn't they?

It is kind of ruefully funny that a huge corporation is setting me up to take the blame. I can just picture myself getting hit with some kind of criminal negligence case if we get hacked and confidential financial data goes public.

I gotta get out of the financial sector, iOS games are clearly the wave of the future.
posted by Ad hominem at 2:46 PM on November 7, 2013



I think you are being sarcastic? Can't tell. Here is the thing though. Liability insurance for surgeons varies by risk, from $10k for a low-risk general surgeon in rural Minnesota to $200k+ for a high-risk specialist surgeon in California. But if you are a software developer, you are nearly universally high-risk: you always hope to sell as many copies of your software as possible and there is always a domino effect.


Which is why if I were to offer the software developer a policy for liability insurance, I would charge by the copy rather than a flat rate.

But first I would need some actuarial data in order to get the premiums right.
posted by ocschwar at 2:52 PM on November 7, 2013


As others have mentioned this is not a new problem. Buildings were much cheaper and people were happy to throw up anything that kept out the rain before building codes, but there's a reason building to code is required.

It's really not that far-fetched or overly difficult to implement the practice, but it requires a cultural shift first, and that will be the issue.

At some point in the future, I imagine critical code will be required to go through a vetting process. The rudiments of the infrastructure to support this already exist. I'd imagine one scenario would be that a snapshot of the system code is provided to an inspector who, through manual inspection and automated tests, ensures it meets the minimum standard. In return the software is certified. If an issue occurs in the future, the inspection office has the code base they tested, which can be compared to the code that had the issue. If the software company met the certification, the liability shifts to the inspection agency.

In other words, it can work like any other engineered system that requires certifications.
posted by forforf at 2:56 PM on November 7, 2013 [1 favorite]


Why do people keep bringing up building codes? Most software is far more complicated than a typical building.

Besides, software is probably less like the building and more like building plans...but the plans have to describe everything starting with the electrodynamics of the atoms in the building materials!
posted by delicious-luncheon at 3:26 PM on November 7, 2013 [1 favorite]


Good code review proceeds at a maximum of about 150 lines per hour, and finds about 60% of defects. For a modest project of 200,000 lines, that's 1,300 person-hours of work. You'd need this done by suitable experts, probably making at least $100k / year, so figure $65,000 in labor bare minimum. That's if your software experts can work full-speed for 40 hours a week, and they can't.

If the software arrived at an extremely high quality already (perhaps automated technology given to us by aliens finds most bugs) and has only five remaining defects, the reviewers will only find all of them (0.6^5) = 7% of the time.

Given that 93% of your inspections will fail to find all defects, you'd need your average liability claim to be at most $4,500 just to break even against the bare minimum labor cost.

What company is going to run on that business model?
posted by 0xFCAF at 3:37 PM on November 7, 2013 [3 favorites]
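
For anyone who wants to poke at those assumptions, the arithmetic works out roughly like this (the review rate, detection rate, project size, and salary are the figures from the comment above; everything else follows from them):

```python
# Numbers from the comment above: 150 reviewed lines per person-hour, ~60% of
# defects caught, a 200,000-line project, reviewers at roughly $100k/year.
lines = 200_000
review_rate = 150              # lines per person-hour
detection_rate = 0.6           # fraction of defects a review catches
hourly_cost = 100_000 / 2000   # ~$50/hour at ~2000 working hours/year

hours = lines / review_rate            # ~1,333 person-hours
labor_cost = hours * hourly_cost       # ~$66,700
p_all_found = detection_rate ** 5      # 0.6^5 ~= 7.8%, the ~7% quoted above

print(f"{hours:.0f} person-hours, ${labor_cost:,.0f} in labor, "
      f"{p_all_found:.1%} chance of catching all five defects")
```

Whether $4,500 or some other figure is the right break-even point depends on how you model the payoff of a clean inspection, which the comment leaves implicit.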


If the software/architecture analogy held up in the real world, a homeowner's bathroom light switch could be used to attack the Pentagon by convincing it to go left instead of on/off. The hinge on the left kitchen cabinet could cause the garage door to open because of a defect in a piece of glass in the front door.

There is no rational basis for trusting software right now; it's all barely acceptable, in that you trust all the code to play well and never be wrong, and... amazingly enough, it works most of the time.

A rational system wouldn't ever trust any code beyond the core of the operating system. The idea of requiring an application to be as trustworthy as a real physical object with known engineered components isn't reasonable. You can case-harden steel, anneal it, and do all sorts of other things, but that piece of steel won't then affect anything outside of the object itself.

Operating systems that rely on trusting code to enforce the rules will never work when you really need security. Until things change, we're living in a world of wizards who can find magic spells (aka zero-day vulnerabilities) and use them against each other.

It doesn't have to be this way, we can get rid of the wizards and get around to engineering things, but that requires a new OS which doesn't trust any app, ever. This is a non-trivial challenge. Most people think the costs are unnecessary, and resist change. Eventually we'll be driven there, kicking and screaming. The closest approach so far lies in docker.io and zero.vm.
posted by MikeWarot at 3:37 PM on November 7, 2013 [3 favorites]


There are segments of the software industry that do have software process and quality regulations. Avionics, medical devices, & automotive ECUs all do. And while that has some positive results, it also makes the development process quite a bit slower and more expensive. The reason these industries are so heavily regulated is that if companies cut corners, people die. That's an entirely different level of risk compared to the danger that some credit card numbers will be leaked.

I think in this sense building codes are a fairly reasonable analogy. Building codes also are mostly about people's health and safety. You can't have fire hazards, but there's no law that your mailbox must have a lock, even though USPS will deliver your credit card and state-issued ID to it.
posted by aubilenon at 4:12 PM on November 7, 2013 [2 favorites]


The building code analogy falls down because, unlike the real world, there are no discrete, enforced limits on the connectivity of a given piece of code. The possible side effects of any given piece of code running in a user account in Windows, Linux, OS-X are unlimited in scope. Any code could be used to subvert the operating system, no matter how well written.

With building codes, you can use a circuit breaker rated to interrupt 10,000 amperes of fault current, and it's good enough. With the current trust models of software, you would get upset if it didn't work properly while being struck by lightning called down by a hacker from Russia.

Until operating systems enforce a rational set of rules about resources a program can access at run time, we're doomed to suffer.
posted by MikeWarot at 4:43 PM on November 7, 2013


Just a few comments here.

You aren't paying for open source software, so there isn't an implied contract; changes in software liability wouldn't affect it (but IANAL).

Formal verification, Gödel's Incompleteness Theorem, and such don't really enter into this. For one thing, the majority of computer programs are in fact primitive recursive, so the Incompleteness Theorem doesn't apply. The reason we get bugs is not that the programs are sufficiently pathological to be indeterministic, but that people make mistakes.

> Software that mostly works and hits the market in three months seems to be a lot more valuable than software that completely works and takes two years to hit the market.

What you are actually observing is that many companies who take shortcuts are better at getting financing and thus surviving longer.

I can take steroids to win the bodybuilding contest, but the effect on my body will be worse in the long run.

> The possible side effects of any given piece of code running in a user account in Windows, Linux, OS-X are unlimited in scope.

I don't really understand Windows, but for *nix and OS X it's just not the case. Users aren't allowed to touch many system things, and they can easily be given resource limits on disk, memory, or even CPU usage. Mavericks goes particularly far in that direction...

And through virtualization, you can make it impossible for even a superuser on a VM to directly affect the hardware or disk.
posted by lupus_yonderboy at 6:47 PM on November 7, 2013 [3 favorites]
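
Per-process limits of that sort are a real, if coarse, OS feature. A minimal sketch using Python's standard resource module on a *nix system; the specific limits and the program path are placeholders, not anything from the thread:

```python
import resource
import subprocess

def limit_child():
    """Runs in the child just before exec: cap CPU time and address space."""
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))               # 5 CPU-seconds
    resource.setrlimit(resource.RLIMIT_AS, (256 * 1024**2,) * 2)  # 256 MiB of memory

# Hypothetical untrusted tool; the kernel kills it if it blows past the limits.
subprocess.run(["/usr/bin/some_untrusted_tool"], preexec_fn=limit_child)
```

This is exactly the kind of blunt instrument the thread is arguing about: it caps resources, but says nothing about which files or network hosts the program may touch.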


Many software companies make a lot (*A LOT*) of money not [only] from the sale of licenses for a released product, but from the "services" contracts that provide the purchaser with access to 'Product Bulletins' and 'Patches' for said products.

Any expectation that a corporation would invest in activities that would threaten that revenue is unrealistic.
posted by armoir from antproof case at 7:06 PM on November 7, 2013


The reason we get bugs is not that the programs are sufficiently pathological to be indeterministic, but that people make mistakes.

Right, but one major class of mistakes consists of inconsistencies between the spec and the implementation. These are theoretically eradicable with formal verification, which leaves you a few steps ahead of the current state of things at least.
posted by invitapriore at 7:22 PM on November 7, 2013 [1 favorite]
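
Short of full formal verification, there are lightweight ways to state a spec and check the implementation against it. Here's a toy sketch using property-based checking, which is a much weaker cousin of verification; the clamp function and its properties are invented for illustration:

```python
import random

def clamp(x: int, lo: int, hi: int) -> int:
    """Implementation under test: force x into the range [lo, hi]."""
    return max(lo, min(x, hi))

# The "spec", written as properties: results always land in range, and values
# already in range come back unchanged. Random inputs probe the implementation.
for _ in range(10_000):
    lo, hi = sorted(random.randint(-100, 100) for _ in range(2))
    x = random.randint(-200, 200)
    y = clamp(x, lo, hi)
    assert lo <= y <= hi
    assert y == x or not (lo <= x <= hi)
```

Tools like Hypothesis automate this kind of thing; proof assistants go further and show the property holds for every input, not just ten thousand random ones.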


What you are actually observing is that many companies who take shortcuts are better at getting financing and thus surviving longer.

Sure, but it applies just as much outside the VC-soaked crazyland of Eternal Beta. If you take a year to ship an incomplete product, and take a year to fix the problems and add desired functionality, are you better off than someone who takes two years to ship it in the first place? Sometimes yes, sometimes no -- but it's not simply a matter of taking shortcuts and not caring about "the craft," especially once you get past basic brain-dead security bugs and start looking at ways to harden software against attackers.
posted by verb at 7:32 PM on November 7, 2013 [1 favorite]


I make software for fun and as my single biggest career activity. I'm not happy about how buggy software is, especially security flaws. And maybe we'd all be better off if there were more legal liability than we've got now.

But I want to emphasize a point that others have mentioned and that I think should make us really cautious about exactly how we might apply more law to this: software is so very insanely diverse. How? Let me count (some of) the ways:
  • Purpose: Text editor (e.g., notepad), air traffic control, washing machine controls, medical device, Minecraft mod, etc., etc.
  • Size: Suppose the smallest useful program is 10 lines. Really big modern software might be 300 million lines (for example). The size ratio here is 30 million.
  • Authors: From one 13-year-old (or even younger) to maybe hundreds of adult professionals.
  • Effort to create: 10 minutes, an hour? Up to maybe 14,000 person-years, or 28 million hours (at 40 hours/week) (for example). Ratio: 28 million or more.
  • Sale price: $0 to ?
Size and effort ratios around 28-30 million sound like a huge variance, but how does it compare to other human endeavors? Here's one somewhat silly example I thought of. The weight ratio between a bicycle and an aircraft carrier is little more than one third as much (about 11 million: 20 lbs vs. 220 million lbs).

As another example, building sizes come in at around a 2 million ratio (your 10 sq.ft. shed vs. this new 20 million sq.ft. edifice).

I might be willing to accept more rules, laws, regulations, etc. on software, but I really, really want us all to bear in mind that one size will not fit all.
posted by at home in my head at 9:36 PM on November 7, 2013 [7 favorites]
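
The ratios in that list, worked out with the comment's own figures:

```python
# Ratios quoted above, recomputed from the comment's numbers.
size_ratio   = 300_000_000 / 10          # 30,000,000x: lines of code, huge vs. tiny program
effort_ratio = (14_000 * 40 * 50) / 1    # 28,000,000x: person-hours, huge project vs. a one-hour hack
weight_ratio = 220_000_000 / 20          # 11,000,000x: aircraft carrier vs. bicycle, in pounds

print(f"{size_ratio:,.0f} {effort_ratio:,.0f} {weight_ratio:,.0f}")
```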


I'd agree that any attempt to regulate software is likely to be ham-fisted and harm everyone.

There is a related wrinkle around the "any competent professional should hash users passwords" point noted above: does your site actually need passwords? Not usually.

If you're running a community blog, selling concert tickets, etc., then your site has basically two real sources of financial liability: first, your financial transactions database, especially if it contains financial account numbers for users; second, your user passwords table, which almost surely contains passwords used across multiple sites.

In most cases, you should not require creating an account with a password just to use the site; maybe just email an "edit your post" link. Any attempt to legislate website security might accidentally prohibit such "insecure" systems, though, actually forcing more sites to use passwords and making users less secure.

I suppose the "website spills a million passwords" issue can only really be addressed by integrating software like KeePassX into our browsers.
posted by jeffburdges at 2:24 AM on November 8, 2013
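
A sketch of the "email an edit link instead of a password" idea: a signed, expiring token the site can verify without keeping any user secret around. The secret key, URL, and expiry window here are placeholders:

```python
import hashlib, hmac, time

SECRET = b"server-side secret, never sent to users"   # placeholder key

def make_edit_link(post_id: int, ttl: int = 3600) -> str:
    """Build a URL that lets whoever holds it edit one post for the next hour."""
    expires = int(time.time()) + ttl
    msg = f"{post_id}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"https://example.com/edit/{post_id}?exp={expires}&sig={sig}"

def verify_edit_request(post_id: int, expires: int, sig: str) -> bool:
    """Check the signature and expiry; no password table to spill."""
    msg = f"{post_id}:{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig) and time.time() < expires
```

The link gets mailed to the poster; a leaked database of such tokens expires on its own and unlocks nothing on other sites, unlike a table of reused passwords.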


"There are segments of the software industry that do have software process and quality regulations. Avionics, medical devices, & automotive ECUS all do."

Have you been following the large number of exploits coming out for most medical devices, or the "black box" 3rd-party stuff Toyota bought for their accelerator (as well as their own horrible code for the pedal)?

Completeness or not - look, very, very little code is written to cover all possible inputs. That's how you get 0-days, be it in the OS or in the application as a stepping stone into the OS. Look at the huge trust model Java has tried to implement, which continually needs patching because some crafty person finds a way to get outside the trust model.

"not because it's indeterministic" - have you read many language specs ? Compilers are very often non-deterministic (among) in how one will implement something vs another. Add in architectures, and yes, I feel quite comfortable saying code is and often can be non-determinisitic. How was it compiled, for what arch, by what compiler/rec/spec version.

"And through virtualization, you can make it impossible for even a superuser on a VM to directly affect the hardware or disk."

MSFT and VM hypervisors have been exploited, and I'm pretty sure they are actively being investigated by folks looking for 0-days.
posted by k5.user at 7:01 AM on November 8, 2013 [1 favorite]


Right, but one major class of mistakes consists of inconsistencies between the spec and the implementation. These are theoretically eradicable with formal verification, which leaves you a few steps ahead of the current state of things at least.

Formal verification can definitely help, but in some ways it just pushes the problem up one level: how complete was your spec? It's helpful, but it's not a silver bullet.

And as for all the talk about medical, automotive, and aerospace software, you have to understand that when these systems are life-critical they basically cheat. That is, they do everything they can up front to constrain the problem: the system is embedded, on one platform; the number of inputs and outputs is well known; what the code is allowed to do is constrained; the specs have to be very complete. Very little software has the luxury of constraining its problem space like this.

Let me give you an example. On some critical systems you can't dynamically allocate memory. That's right. All the memory you'll ever use has to be accounted for up-front so that you can never have an unexpected out of memory issue at runtime. While that's great for a tiny system that controls your car's brakes, it's not exactly conducive to creating interesting user-facing software.

And, of course, once the problem is suitably constrained, the code has to be constantly audited and verified, and the testing is very thorough and takes place over the course of years, including things like FDA and FAA approval processes. Very little software is afforded this level of scrutiny because it is incredibly expensive and time-consuming.

Also, for those who think VMs are a cure-all, I have two words for you: VM escape.

All that said, more companies should be taking advantage of at least the reasonable techniques currently available to them, and should be aware of the basic state of the art. Software in general can do better than it currently does, but let's not kid ourselves. While improvements in tools and processes will help make things better, there is not likely to be some Shangri-La where general-purpose software has no bugs or exploits, only one with reduced bugs and exploits.
posted by delicious-luncheon at 8:06 AM on November 8, 2013 [1 favorite]
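
Python is obviously not what brake controllers are written in, but the "account for all memory up front" discipline looks roughly like this: allocate a fixed pool at startup and refuse to grow it at runtime. The sizes and names are invented for illustration:

```python
# Toy illustration of the no-dynamic-allocation discipline: every buffer the
# program will ever use exists before the main loop starts, and exhausting the
# pool is an explicit, testable condition instead of a surprise at runtime.
POOL_SIZE = 32
BUF_BYTES = 512

_pool = [bytearray(BUF_BYTES) for _ in range(POOL_SIZE)]   # allocated once, up front
_free = list(range(POOL_SIZE))
_in_use = {}                                               # id(buffer) -> pool index

def acquire() -> bytearray:
    if not _free:
        raise RuntimeError("buffer pool exhausted")        # handled by design, never grown
    idx = _free.pop()
    buf = _pool[idx]
    _in_use[id(buf)] = idx
    return buf

def release(buf: bytearray) -> None:
    _free.append(_in_use.pop(id(buf)))
```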




This thread has been archived and is closed to new comments