at least it's not a protocol bug
April 7, 2014 10:50 PM   Subscribe

The Heartbleed Bug was introduced to OpenSSL in December 2011 and has been out in the wild since OpenSSL release 1.0.1 on 14th of March 2012. OpenSSL is the most popular open source cryptographic library and TLS (transport layer security) implementation used to encrypt traffic on the Internet. Your popular social site, your company's site, commerce site, hobby site, site you install software from or even sites run by your government might be using vulnerable OpenSSL. All of the above is a direct quote, authored by the fine folks at heartbleed.com. It may be worth noting that one of the recommended measures (and indeed a good idea) is certificate revocation. Unfortunately, certificate revocation has some problems.

The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software.

The combined market share of just those two out of the active sites on the Internet was over 66% according to Netcraft's April 2014 Web Server Survey. Furthermore OpenSSL is used to protect for example email servers (SMTP, POP and IMAP protocols), chat servers (XMPP protocol), virtual private networks (SSL VPNs), network appliances and wide variety of client side software.
...
posted by el io (194 comments total) 42 users marked this as a favorite
 
I meant to add the following sentiment to the post...

Unfortunately, the state of certificate revocation is not good. With certificate revocation arguably broken, the impact of a slew of compromised server keys for important network destinations could be devastating to the state of security. It is currently unknown if any 'actors' have exploited this bug and possess the server certificates for the impacted sites.
posted by el io at 10:53 PM on April 7, 2014


Well whaddayaknow. Sometimes there is an advantage in never having upgraded your client's server since 2009.
posted by PenDevil at 10:57 PM on April 7, 2014 [13 favorites]


Word, PenDevil. Once again the conservative, four-year-old-down-rev heavy portfolio pays off for the lazy sysadmin.

I read this earlier when my dev-box-as-a-service went down. It seems like a big enough deal that I immediately e-mailed my partner to tell her 1) we're safe and 2) maybe our clients aren't and they should be addressing this first thing in the morning.
posted by ob1quixote at 11:03 PM on April 7, 2014 [4 favorites]




First two I tried (major properties) were okay. It was only the third try that said "metafilter.com IS VULNERABLE."

So, if I ever express some idiotic, poorly thought out, grammatically insane comment, you will all know the cause (some 'bad actor').
posted by el io at 11:18 PM on April 7, 2014


This is the kind of news which makes me glad that I'm not a sysadmin. It's also the kind of bug which makes me glad I'm not an OpenSSL developer.
posted by feloniousmonk at 11:27 PM on April 7, 2014


"OpenSSL was written by monkeys"

http://www.peereboom.us/assl/assl/html/openssl.html

I am completely unqualified to comment on the veracity of this opinion.

But I look forward to the discussion to be had by my betters.
posted by armoir from antproof case at 11:36 PM on April 7, 2014 [1 favorite]


AHAHAHAHA

The following is a snippet of chat from two weeks ago when I set up a web server for my own use. The machine is colocated in a public datacenter, but I'm the only one that needs access.

6:41:08 AM Ryan Rs: and I'm too lazy/dumb to know how to properly lock down apache
6:41:30 AM Ryan Rs: so I firewalled port 80 and 443 so you can only connect from localhost
6:41:44 AM Ryan Rs: then ran a ssh tunnel with port forwarding from my home machine
6:41:56 AM kalleysta: :cripes:
6:43:22 AM Ryan Rs: also I know I'm too lazy to keep up with patches and shit, so this greatly limits my exposure

Laziness triumphs over CVE-2014-0160.
posted by ryanrs at 11:37 PM on April 7, 2014 [2 favorites]


If you haven't updated in forever, you're probably vulnerable to BEAST, CRIME, Lucky13, and have an outdated cipher suite. I mean yeah, you're not vulnerable to this, but.. uh.. yeah.

This is a pretty hilarious bug with a lot of pretty nasty implications. Neither OCSP nor CRLs are well suited to the problem.

What the FPP doesn't mention is that all those certificates are going to have to be regenerated (which isn't that bad) but then re-signed by the certificate authorities (which is very bad). This means buy stock in VeriSign and other CAs, because they're going to be printing money faster than normal.

If anyone wanted to get a fraudulent Extended Validation certificate, tomorrow (Tuesday) is probably the day to do it. Additionally, if you wanted to really mess up some people's day, certain CAs only allow revocation via fax machine, which should be fairly trivial to DoS. Considering that they can't mark old certificates as bad until they're revoked by the CA, and you can possibly obtain those private keys...

Lastly, it's interesting how ASLR potentially makes this even worse than it already is. Because you can repeat the attack as many times as you want, and depending on how a webserver's various workers implement ASLR, you can get a different, reasonably sized chunk of a process's address space every iteration. This might not be the case in practice (I haven't looked), but with fork-based multi-processing it would be.

Info leaks are cool.
posted by yeahwhatever at 11:49 PM on April 7, 2014 [2 favorites]


Laziness triumphs over CVE-2014-0160.

Um, aren't you still running OpenSSH then? That also uses OpenSSL, and is vulnerable.

Though if you set it up before March 2012, maybe you're using a version earlier than 1.0.1, which is when the bug first shipped.
posted by heathkit at 11:53 PM on April 7, 2014


SSH doesn't use TLS.
posted by ryanrs at 11:57 PM on April 7, 2014 [1 favorite]


Oh! Of course! Carry on then...
posted by heathkit at 12:09 AM on April 8, 2014


And yes, I do apply security patches. I'm just very pleased with myself that some cover-your-ass attack surface minimization I did two weeks ago paid off so quickly.
posted by ryanrs at 12:15 AM on April 8, 2014


It is currently unknown if any 'actors' have exploited this bug and possess the server certificates for the impacted sites.
The spokesperson for Half Glass Empty Industries was not available.
Fingers Crossed Inc. said they'll respond once all the facts were in.

I work for Yeah, Right Corp.
posted by fullerine at 1:01 AM on April 8, 2014 [11 favorites]


SSH doesn't use TLS.


Really? This makes me happy because I've spent today worrying that I have no idea how to revoke certificates etc. on my Ubuntu 12.04 VPS, on which SSH is the only relevant service running. Guess I dodged another not-really-a-sysadmin bullet.
posted by Jimbob at 1:16 AM on April 8, 2014


Toward the end of December last year there was a great talk about security problems in the Xserver. Want to take a guess where most of the low-hanging fruit was? That's right: protocol parsing. In particular, blindly trusting the length/size fields sent by the client.

And here we are, 3 months later, falling down the same rabbit hole.
posted by sbutler at 1:36 AM on April 8, 2014


It was only the third try that said "metafilter.com IS VULNERABLE."

Going by the tool dilaudid linked it looks like pb's latest round of patching put MeFi back in the clear. That's as much as I know though, I'll ask him to weigh in if there are any specific issues to address.
posted by goodnewsfortheinsane at 2:04 AM on April 8, 2014 [1 favorite]


HN discussion for the more technically-inclined. The very first comment gives a good summary of the bug. Unfortunately halfway through the comments start devolving into a pointless discussion about Haskell vs C.

A copy of the actual patch can be found here.
posted by destrius at 2:08 AM on April 8, 2014


Strapline-o-matic: "HN: Unfortunately halfway through the comments start devolving into a pointless discussion about Haskell vs C."
posted by memebake at 3:03 AM on April 8, 2014 [8 favorites]


According to the changelog.txt for HTTPS Everywhere, version 1.0.0 was released on August 4th, 2011.

The heartbleed site says, "Bug was introduced to OpenSSL in December 2011 and has been out in the wild since OpenSSL release 1.0.1 on 14th of March 2012."

It must be just a coincidence. Right?
posted by ob1quixote at 3:31 AM on April 8, 2014 [4 favorites]


Not knowing enough about TLS or OpenSSL, and knowing only the basics of C, does anyone know how the new version will respond to a heartbleed attack?

I would understand if it just returns the heartbeat equivalent of an error code or zero. But even if it was a violation of the spec, it might be nice if it returned 64k of random data, to make it harder to automatically detect the difference between a patched server and a vulnerable one.
posted by Riki tiki at 3:40 AM on April 8, 2014


I would understand if it just returns the heartbeat equivalent of an error code or zero.
+ if (1 + 2 + 16 > s->s3->rrec.length)
+ return 0; /* silently discard */
hbtype = *p++;
n2s(p, payload);
+ if (1 + 2 + payload + 16 > s->s3->rrec.length)
+ return 0; /* silently discard per RFC 6520 sec. 4 */

Looks like the spec says just throw it away, but I understand the urge to tar-pit these guys...
posted by mikelieman at 3:56 AM on April 8, 2014


Ha! We sign our own certificates and I've trained our users to just bypass the warnings browsers throw up for them. We are WAY ahead of the curve.
posted by charred husk at 5:39 AM on April 8, 2014 [6 favorites]


So what are the chances the NSA has known about this bug since it was introduced and has been quietly collecting secret keys for the past year?
posted by edheil at 6:34 AM on April 8, 2014 [3 favorites]


Cause there is pretty much *nothing* we can be confident is secure on the internet at this point unless everybody revokes all their SSL certificates and starts from scratch, right?....
posted by edheil at 6:35 AM on April 8, 2014


So, when I was at a friend's house, I got a lot of expired cert warnings on her computer (I checked the dates on her system to make sure it wasn't just set wrong, and it wasn't)... Then when I got home, I had a few as well. Haven't seen any in a while, but this was around Dec-Jan.

Was there a lot of news around this then? Why did I see a lot of expired certs at the time, has anyone else seen that happen recently?
posted by symbioid at 6:41 AM on April 8, 2014


I wonder if the security community has grown more careful now that they know the NSA implants bugs in software, or if that suspicion just helps publicize bugs more. In any case, we're hearing about large issues like this and Apple's goto fail more widely now.
posted by jeffburdges at 7:02 AM on April 8, 2014


So what are the chances the NSA has known about this bug since it was introduced and has been quietly collecting secret keys for the past year?

Virtually none. They've been doing it for two years.
posted by RobotVoodooPower at 7:03 AM on April 8, 2014 [5 favorites]


Ha! The box I was most worried about is still running Centos 5 and never had this bug. Laziness!

Of course still many more to patch and revoke. Sigh
posted by Skorgu at 7:06 AM on April 8, 2014




If I don't have the choice of living in the world where this bug didn't exist in the first place, I'd rather live in the world where it was disclosed than the one where it wasn't.

Time to run upgrades on some machines. Then probably stick my head in the sand about the secrecy of my ssl certificate, which isn't used for commerce or anything.

ooh, that's nice: debian prompted me to restart possibly-vulnerable daemons. all done.
posted by jepler at 7:34 AM on April 8, 2014


Can someone explain to a reasonably tech-savvy but not sys-admin user what they need to do to get protected against this?
posted by thecaddy at 7:55 AM on April 8, 2014 [1 favorite]


I'm hearing a lot of complaints about the responsibility of the disclosure. As of the day this exploit was published with such a slick marketing site (complete with custom logo!) there were no fixes ready to go for any of the major Linux distributions. Even OpenSSL's own website is vulnerable.

Some folks were notified ahead of the disclosure. Hosting company CloudFlare was prebriefed and turned their privilege into a marketing blog post. I have reason to wonder if Apple was notified by March 31. But it appears Yahoo is vulnerable. Disclosure is always a bit of a random process but this seems to have gone worse than usual. I'm hoping someone does a bit of digging and writes up who knew about the bug and when.

This kind of bug is only possible because C is an outdated primitive language without even the most basic features to prevent mistakes like this. Pretty much any other programming language, even C++ (shudder), would be a better choice. FWIW it doesn't look like this bug was deliberately planted; it seems to be an ordinary bug introduced in 1.0.1 at the same time the Heartbeat code was added. The NSA, so far as we know, has been more careful in their backdoors, too.

This bug is a huge fiasco for Internet security. It really does affect over half the web servers in the world. I'm wondering how many people will end up having to buy new SSL certificates; this could be quite the windfall for folks issuing browser trusted certificates. Best practice is you don't use your expensive signing certificate live, but we're about to find out how many people understand that.
posted by Nelson at 8:02 AM on April 8, 2014 [16 favorites]


I just finished testing our servers with FiloSottile's tool, and it looks like we have just one or two utility servers affected by this. The benefit of being a Windows shop is that we're safe this time around... though, of course, we get our own fun issues at other times.
posted by Nonsteroidal Anti-Inflammatory Drug at 8:03 AM on April 8, 2014


This is a system side issue, so if you aren't a system admin, there's nothing you need to do to protect yourself. Which isn't to say that you aren't vulnerable to unpatched systems leaking their keys, leading to the disclosure of data you communicate with them.

I suppose the best you can do is to make sure that you use different, secure credentials for each site. That way if one is breached, the others will remain secure.
posted by wotsac at 8:05 AM on April 8, 2014 [2 favorites]


If you don't run an SSL server, there's not much you can or need to do. It's possible Heartbleed will allow bad guys to attack servers you use, which could compromise you. Not much you can do about it other than hope the folks who run your email, banks, etc are honest about disclosures. In practice the threat to end users isn't so much that someone might eavesdrop on your network traffic as they might impersonate your server (a "man in the middle" attack) and SSL won't help you notice. In either case the risk is relatively small.

LastPass published a nice disclosure to users. They're vulnerable to Heartbleed, but their primary business is holding users' password stores. Fortunately that store is encrypted by the application itself, so the SSL vulnerability is not a huge problem.
posted by Nelson at 8:17 AM on April 8, 2014 [4 favorites]


One thing you can do as a normal user is, if you're logged into any web sites that you think may have been compromised by heartbleed, log out. This will expire your session cookie, which is one thing heartbleed can be used to extract from a web server's memory. I wouldn't worry about it on metafilter, but this is a good morning to log out of your bank's website, or coinbase, etc.

(And don't log back in until you know they're fixed..)

Looking into browser extensions that can warn if a certificate has changed is also a good idea. Both because you want to see lots of certificates changing now, and because it's best to assume that one or more entities have a copy of every certificate that existed yesterday, and can reuse them in man in the middle attacks at any point in the future to pretend to be any of those sites. Pity about that whole certificate revocation thing not working; this is a band-aid. Can anyone recommend good browser extensions to track known certificates?
posted by joeyh at 8:33 AM on April 8, 2014 [2 favorites]


joeyh: I'm a big fan of Certificate Patrol for Firefox.
posted by Nonsteroidal Anti-Inflammatory Drug at 8:37 AM on April 8, 2014


Yeah, the registration of a website just to promote the disclosure of the bug rubbed me the wrong way too.
posted by jepler at 9:06 AM on April 8, 2014 [1 favorite]


Does this mean I need to change my linkedin password again?
posted by 7segment at 9:10 AM on April 8, 2014 [3 favorites]


It's not completely a server-side issue, it's bi-directional. If your client software is running a vulnerable version of OpenSSL and connects to a bad guy, the bad guy can snarf memory dumps off of your computer. (I'm not sure if a man-in-the-middle could send the malicious heartbeat messages, but according to the website it can be done during the handshake, so possibly)

You are at risk if you have software compiled against openssl and haven't updated it yet, or if you have software that statically linked in a vulnerable version. Not sure how often the latter is done, but think closed-source executables, appliances, mobile apps, etc.

You can run apt-cache rdepends openssl on a Debian system to get an idea how many programs might be affected. None of the major web browsers use OpenSSL, AFAIK.
posted by RobotVoodooPower at 9:18 AM on April 8, 2014 [2 favorites]


Planting the seeds: maybe we need a network and crypto stack written in a reasonable language, such as Ada.
posted by Monday, stony Monday at 9:18 AM on April 8, 2014 [1 favorite]


That would certainly be a step in the right direction. The Rust folks are interested in attacking this problem.
posted by a snickering nuthatch at 9:28 AM on April 8, 2014


Hey, we could make the conversation dissolve into an Ada (old and trusted) vs. Rust (the new hotness) argument. That would show those HN folks!
posted by Monday, stony Monday at 9:45 AM on April 8, 2014 [2 favorites]


I've never heard anyone talking so much about Rust until today, in this very context. Sort of silly. If you're looking for C replacements, Go has its own TLS implementation that's several years old. The real problem is no one's going to rewrite all their application code in Go / Rust / Ada. Part of what makes OpenSSL so vital is that it's a library linked into a lot of other stuff.
posted by Nelson at 9:59 AM on April 8, 2014


ooh, that's nice: debian prompted me to restart possibly-vulnerable daemons. all done.

I take it you're running unstable, because it isn't showing up in testing yet.
posted by one more dead town's last parade at 10:01 AM on April 8, 2014 [1 favorite]


The Rust people make a lot of noise, as it always takes a lot of hype to get a language off the ground, but I wouldn't trust them until they have a proven, audited library that is immune to timing attacks and other common problems with new implementations by ambitious and naive programmers. I'm thinking that their attitude toward floating point numbers indicates that they need to learn lessons the hard way, rather than from history.

Also, the OpenSSL code is kind of mind-boggling C, if you've ever had a chance to poke through it. Just implementing in a new language will not prevent the problems of bad code. It's possible, and not too hard, to write C without the common pitfalls.

The advantage of C is that it's easily usable in pretty much all other languages, it's the lingua franca of programming.
posted by Llama-Lime at 10:02 AM on April 8, 2014 [1 favorite]


Hey, we could make the conversation dissolve into an Ada (old and trusted) vs. Rust (the new hotness) argument. That would show those HN folks!

I didn't intend to say that Ada was bad or anything. I have heard that Rustaceans are interested in writing a TLS lib; I haven't heard any such noises from Ada people, but if so then that's great.

The real problem is no one's going to rewrite all their application code in Go / Rust / Ada.

They don't have to? All of the languages you listed provide C FFIs, so an environment capable of using OpenSSL should be able to use a hypothetical OpenSSL replacement written in one of those langs.

The Rust people make a lot of noise, as it always takes a lot if hype to get a language off the ground, but I wouldn't trust them until they have a proven, audited, library, that isn't immune to timing attacks and other common problems with new implementations by ambitious and naive programmers.

Skepticism is appropriate. It should be noted that the Rust people are aware of the magnitude of the task.
posted by a snickering nuthatch at 10:21 AM on April 8, 2014


I'm thinking that their attitude of floating point numbers would indicate that they need to learn lessons the hard way, rather than from history.

Can you explain what you are referring to here? I seem to recall some issue with Rust and floating point numbers, but my memory is failing me.
posted by Monday, stony Monday at 11:41 AM on April 8, 2014


I'm thinking of this discussion of Java's mistakes.
posted by Llama-Lime at 11:49 AM on April 8, 2014


one more dead town's last parade: "I take it you're running unstable, because it isn't showing up in testing yet."

Patched OpenSSL (1.0.1e-2+deb7u5) was notionally available for stable last night, but has apparently been superseded by version 1.0.1e-2+deb7u6. deb7u5 did not check for services in need of restart, while deb7u6 does, according to the changelog.
posted by wierdo at 12:10 PM on April 8, 2014


(previously)
posted by p3on at 12:13 PM on April 8, 2014 [1 favorite]


I want to agree with Kahan about floating point (after all, he's much more educated about it than me), but on the other hand more than a few "really weird" bugs that landed on my desk have turned out to be due to extended precision. For instance, it complicates comparisons like a<b where one of the two intermediate values retains extended precision and the other doesn't; this sort of thing is difficult to avoid in C++ if you want to put things that contain double precision numbers in containers. In some operator< I had:
if(round_to_precision(this->attr, epsilon) <
         round_to_precision(that->attr, epsilon))
    return true;
When the phase of the moon was right, and the attributes were equal finite values, one of them retained extra precision and made the comparison return true when it ought to have been false (equal values).

That one I "resolved" by using a gcc extension to force the value into memory, shedding the extra precision. And I felt dirty about doing it...
posted by jepler at 12:18 PM on April 8, 2014


jepler: “When the phase of the moon was right, and the attributes were equal finite values, one of them retained extra precision and made the comparison return true when it ought to have been false (equal values).”
I just ran into a crappy floating point implementation that caused a bug of the form -0.00 != 0.00. That was a real treat.


I found a couple of relevant Stack Exchange threads for those wondering what they should actually do.

Heartbleed: What is it and what are options to mitigate it?

What should a website operator do about the Heartbleed OpenSSL exploit?

I was especially taken by this comment, "Judging by the active attempts being reported in the DMZ, the best thing now is STOPPING THE FRIKKIN SERVER ASAP. Sessions are being hijacked, passwords leaked, confidential business data revealed."
posted by ob1quixote at 12:28 PM on April 8, 2014


Patched OpenSSL (1.0.1e-2+deb7u5) was notionally available for stable last night, but has apparently been superseded by version 1.0.1e-2+deb7u6. deb7u5 did not check for services in need of restart, while deb7u6 does, according to the changelog.

1.0.1f-1 was still the newest version in the apt sources for jessie (testing) when I made that comment. I went ahead and installed 1.0.1g from the .deb file.

A small prelude to having to install Windows 7 today, I guess.
posted by one more dead town's last parade at 12:28 PM on April 8, 2014


Updates from some service providers:

Amazon AWS update: http://aws.amazon.com/security/security-bulletins/aws-services-updated-to-address-openssl-vulnerability/

Big IP F5 https://devcentral.f5.com/questions/openssl-and-heart-bleed-vuln
posted by poe at 1:38 PM on April 8, 2014


OK, so.. I guess I don't understand. I see that there's nothing for users to do?

My roomie uses Yahoo, and I see that is vulnerable, apparently... I think someone upthread said "log out of sessions and don't log in until an announcement of it being fixed" which is super impractical. Statistically, what is the likelihood of a regular end user actually getting caught up in this (i.e. their password or contents of messages leaked)?

Is there anything I can tell my roommate to do besides "log out and wait"?
posted by symbioid at 2:00 PM on April 8, 2014


If you are using TLS/SSL on a server, SSL Labs SSL Test is a great thing to run. Not only will it try to detect this problem, but it will do a comprehensive test of your TLS/SSL deployment (and grade you).

Also a quick way of finding if your favorite site is using TLS correctly (you'd be surprised how many poor grades for important sites there are).
posted by el io at 2:03 PM on April 8, 2014 [7 favorites]


There is a risk even if you do not login, symbioid: An attacker could hack the site, download their user database, and extract the passwords. Ain't so easy if the site keeps passwords properly hashed with per-user salts, but many major sites like LinkedIn failed on that front. Just change your passwords after sites fix this if you're worried. And watch your credit card statement over the next few months.

There are more risks if you do login before a site fixes this : An attacker hitting the same webhead as you might obtain your unencrypted password or session info. Avoid using online banking before your bank patches this, phone them if necessary. Avoid making any online purchases with debit cards too. And extend this care further if you're a journalist, activist, etc. obviously.
posted by jeffburdges at 3:47 PM on April 8, 2014 [1 favorite]


There is a risk even if you do not login, symbioid : An attacker could hack the site, download their user database, and extract the passwords.

It was my impression that the bug only exposed 64k of memory at a time, and that the attacker can't control what's exposed. So with repeated attacks they can probably get quite a bit, but nothing that isn't in memory and probably not much that isn't related to the HTTPS server. Isn't that right?
posted by heathkit at 5:53 PM on April 8, 2014 [1 favorite]


Yes, that's correct.

They could in theory hijack an admin's session credentials and use hypothetical access that normal users don't have to dump things. Alternatively, db connection details might be in the httpd's memory footprint, but the db would have to be configured to accept outside connections (which most aren't). This, of course, makes a lot of assumptions.

Thanks to the way most virtual memory systems work, the worst an attacker could do is dump the entire memory space of whatever httpd the site is using. This would reveal the site's private key and session creds, but those would have to be leveraged into db access.

Short version, unless you're truly paranoid (journalist, CEO of major corp, etc) just don't log in until things get patched up and you'll probably be fine.

The interesting question, once the initial fire is put out, is how we invalidate the literally millions of certificates which are now unsafe. I haven't heard anyone saying anything constructive about this problem, other than "uh yeah, that's going to be a problem". This won't be a user-facing problem, but it is a very interesting technology problem.
posted by yeahwhatever at 6:03 PM on April 8, 2014


I've seen some people saying that people should immediately be changing all of their passwords. Is this an over-reaction?
posted by codacorolla at 6:14 PM on April 8, 2014


The interesting question, once the initial fire is put out, is how we invalidate the literally millions of certificates which are now unsafe. I haven't heard anyone saying anything constructive about this problem, other than "uh yeah, that's going to be a problem". This won't be a user-facing problem, but it is a very interesting technology problem.

Could you expound on this maybe? I think I can see the problem, but I'm just not familiar enough with how certificates work to see what the exact issue is. Is the problem that certificates have a built in expiration date, and beyond that there's no way for the server to invalidate them?
posted by heathkit at 6:15 PM on April 8, 2014


Avoid changing any passwords until after a site patches the bug, or if your account appears hacked, because the password change dialog itself exposes two plain-text passwords.
posted by jeffburdges at 6:22 PM on April 8, 2014 [9 favorites]


Yes, it's an overreaction, unless you fit into the specifically targeted groups (CEO, human rights worker, journalist, etc). Basically if you think someone would physically break into your house specifically looking for some piece of information, changing your password might not be a bad idea. Otherwise, ehhhhhh.

FWIW if you're not one of the above groups, something like LastPass or KeePass (I think iOS/OSX has one built in now?) should make changing passwords a lot less of a bear. If you're not using a password manager, unless you have a good reason, you probably should be.

So, to outline the problem: you have a public and private key to keep communication secure online. As the name indicates, one is public and one is private. Let's say someone breaks into your server and steals your private key. Now an attacker has a public/private key pair and can effectively Man-In-The-Middle your traffic without any impediment.

To get around this problem we have to deal with the really thorny issue of revocation. Basically, we need a way to say "Hey, my server was compromised and this private key isn't private anymore. I need to let everyone know that this public key is bad". There are currently two ways this is accomplished: Certificate Revocation Lists (CRLs) and the Online Certificate Status Protocol (OCSP).

CRLs are pretty much what you'd expect: a bigass list of bad public keys that's maintained by certificate authorities. These are problematic because a) they're very large, as they have to go back a long time, and b) there is a lag time between the list being updated and the client polling the list.

OCSP is basically an online version of a CRL. Instead of getting the entire list, you query information about a particular certificate. This is also problematic because a) it leaks your browsing history, b) it is very slow, and c) it doesn't work in captive portals (think situations like airport wireless where you have to pay -- if this seems like a weird one I'd agree, but people more in the know than me assure me that it's a big deal and I don't have any reason to think they're lying to me). Also, OCSP can be hilariously and trivially broken by intercepting the OCSP server traffic and just claiming to be busy.

As a result of this shittiness, both Chrome and Firefox ship with a built-in list of untrusted certificates. This is closer to a local CRL. This of course means that instead of updating your CRL you have to update your entire browser to get a reasonably safe list. Normally this isn't a huge deal, as private keys to Important Sites don't leak too frequently.

So now the situation is, we have a very broken revocation system and a huge number of now-should-be-revoked certificates. Do we include a massive list of all certs in our next browser update? Do we try OCSP again? Do we revoke every certificate signed before today? Are there other options people aren't considering?

I apologize if this is too much detail. In short: yes. Aside from expiry, we have no good way to invalidate certificates, and especially no good way to invalidate a large number of them at once. Most of the focus today is on people hijacking session IDs, because that's what's visible to the sysadmins watching all this. If private keys were stolen, those attacks will be much harder to detect but just as damaging.
posted by yeahwhatever at 6:32 PM on April 8, 2014 [15 favorites]


The NYTimes coverage cites multiple sources saying that everyone should change all their passwords immediately. That seems premature, to say the least, given that many sites haven't patched the bug yet.

Sources cited include the CEO of Codenomicon (which I believe was one of the groups that found the vulnerability) and the Tumblr security team which says in their blog post:
This might be a good day to call in sick and take some time to change your passwords everywhere—especially your high-security services like email, file storage, and banking, which may have been compromised by this bug.
David Chartier, the chief executive at Codenomicon is quoted as saying, "Companies need to get new encryption keys and users need to get new passwords immediately."
posted by alms at 7:01 PM on April 8, 2014


One other problem to keep in mind: anyone who might have— any time in the past— recorded encrypted traffic between you and a server that used RSA key exchange now has an easy avenue to decrypt it, if they can get access to the server's private key via this bug. This is a good example of why everyone should use cipher suites with forward secrecy. Thankfully, most of the big internet companies seem to be on the ball by now.
posted by caaaaaam at 8:33 PM on April 8, 2014 [3 favorites]


If you are using TLS/SSL on a server, SSL Labs SSL Test is a great thing to run.

Thanks for that link — this site says some domains fail the Heartbleed test, while http://filippo.io/Heartbleed says they don't. Is there another test site to help decide which of the two test results is actually correct?
posted by Blazecock Pileon at 10:39 PM on April 8, 2014 [1 favorite]


Can someone please tell me why, when this vulnerability has been around for two years, it's suddenly a crisis to change passwords overnight?
Something fishy there...
And how is it that this vulnerability was built into such a widely used open source implementation?

One wonders who might actually be behind that?
Not sure what is scarier: this whole thing being malicious or accidental....
posted by dougiedd at 12:29 AM on April 9, 2014 [1 favorite]


Maybe a bloom filter would help with the OCSP, as a first pass against the CRL.
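A minimal sketch of the idea (illustrative sizing, not a production design): the filter answers "definitely not revoked" or "maybe revoked", so only the rare maybe-hits would need the full CRL or an OCSP round-trip.

```python
import hashlib

# Minimal Bloom filter sketch (illustrative sizing, not a production
# design). A miss means "definitely not revoked"; a hit means "maybe
# revoked, go do the expensive CRL/OCSP check".
class BloomFilter:
    def __init__(self, size_bits=8192, num_hashes=4):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        for i in range(self.k):                  # k independent hash probes
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

revoked = BloomFilter()
revoked.add("serial-0xDEADBEEF")
assert revoked.might_contain("serial-0xDEADBEEF")   # never a false negative
assert not revoked.might_contain("serial-0x1234")   # false positives possible,
                                                    # but rare at this sizing
```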
posted by Pronoiac at 1:01 AM on April 9, 2014


At present, the NYT article says "The most immediate advice from security experts to consumers was to wait or at least be cautious before changing passwords. Changing a password on a site that hasn’t been fixed could simply hand the new password over to hackers. Experts recommended that, before making any changes, users check a site for an announcement that it has dealt with the issue.", alms. Also, all the security experts were quoted as saying to change passwords eventually; probably the journalist just put words in their mouths by dropping that context.
posted by jeffburdges at 1:52 AM on April 9, 2014


Not sure what is scarier: this whole thing being malicious or accidental....

I just put it in the "There is no real security online" pile. There's "The local email admin can't read it", and there's basic obfuscation to pay lip service to the idea, but never believe that anything is actually secure.

THEN it all collapses into a singularity of risk-management in meatspace. Rotate credentials to minimize the effects of this stuff, change credit card numbers, too. Use different credit card/debit accounts for different tasks/roles. etc...
posted by mikelieman at 3:37 AM on April 9, 2014


mikelieman: “Rotate credentials to minimize the effects of this stuff, change credit card numbers, too. Use different credit card/debit accounts for different tasks/roles. etc...”
Christ that sounds exhausting.
posted by ob1quixote at 4:47 AM on April 9, 2014 [1 favorite]


Lastpass has its own Heartbleed checker. I can't vouch for its accuracy though.
posted by Cash4Lead at 5:37 AM on April 9, 2014


Can someone please tell me why, when this vulnerability has been around for two years, it's suddenly a crisis to change passwords overnight?

Because now the vulnerability is public, so a zillion bad guys can all run to exploit it. Security bugs are always like this. The bad guys who find an exploit generally try to keep it secret for as long as they can so they have an undefended attack. The good guys who find an exploit practice "responsible disclosure": they keep it secret for a short while and notify trusted people to prepare a fix. Then it gets released to the public and all hell breaks loose, but hopefully there's a fix at the same time. The self-interested guys keep the exploit secret just long enough to design a logo and tip off one cloud hosting company, then release it to the public. And then it's a huge clusterfuck.

That being said, I don't know that it's a crisis to change passwords overnight, and it's not clear changing your own passwords right now would even help you. My opinion is there's really nothing ordinary users can or should do right now.

And how is it that this vulnerability was built into such a widely used open source implementation?

A lot of people are asking that question. People have been complaining about the quality of OpenSSL for a long time; it's confusing code, has had some wonky bugs, and generally isn't written to the standard of quality you'd hope for in software this critical. But implementing SSL is really hard, so few people want to do it again, and OpenSSL has become the open source standard. There are actually more open source SSL implementations than I knew existed, and IIRC no web browser uses OpenSSL. So there's a little more diversity there, but unfortunately many major web servers do use OpenSSL.
posted by Nelson at 6:56 AM on April 9, 2014 [10 favorites]


Honestly, it's probably just as effective to log out and back in as it is to change your passwords.

Your password is only in memory on the web server when you log in, which isn't very often. But your session cookie's value is sent to the web server every time you do anything. Logging out and back in should reset your session cookie, and if the server is patched, no one should be able to log in as you from then on.
posted by smackfu at 9:03 AM on April 9, 2014 [1 favorite]


What Heartbleed Can Teach The OSS Community About Marketing

There exists a huge cultural undercurrent in the OSS community which suggests that marketing is something that vaguely disreputable Other People do which is opposed to all that is Good And Right With The World, like say open source software. Marketing is just a tool, and it can be used in the cause of truth and justice, too.

A very interesting look at how a catchy name, a tight and well-written description, a web presence, and a distinctive logo may have substantially contributed to awareness and rapid patching of this bug. It would probably be worth an FPP but this thread is still young...
posted by RedOrGreen at 9:05 AM on April 9, 2014 [2 favorites]


You know what else contributes to awareness and rapid patching? Responsible disclosure to software vendors before your marketing campaign. I've got no problem with a hacker team making all the hay they can out of finding an exploit. But the way this disclosure was handled has significantly weakened Internet security.
posted by Nelson at 9:08 AM on April 9, 2014 [1 favorite]


Here's Bruce Schneier's take on Heartbleed. He doesn't have a lot new to add, but he does note "odds are close to one that every target has had its private keys extracted by multiple intelligence agencies".
posted by Nelson at 9:14 AM on April 9, 2014 [4 favorites]


I've got no problem with a hacker team making all the hay they can out of finding an exploit. But the way this disclosure was handled has significantly weakened Internet security.

Yeah, I don't really understand what the team is getting out of this. Reputation building - "look at us, we're so cutting edge" - is that it? Do they expect to be showered with consulting gigs now?
posted by RedOrGreen at 9:14 AM on April 9, 2014


Your password would only be in memory on the web server when you login, which isn't very often.

The problem is that they've been in memory any time you logged in. We already know people were pulling Yahoo Mail credentials all day yesterday. It's possible that someone could have been doing that for up to two years prior.
posted by Nonsteroidal Anti-Inflammatory Drug at 9:16 AM on April 9, 2014


Some potentially optimistic news: someone did a scan of 28M servers last night and found only 600,000 vulnerable. There are lots of caveats around this kind of measurement, but I would have guessed they'd find 15M, not 0.6M.
posted by Nelson at 9:37 AM on April 9, 2014 [1 favorite]


Not every web service keeps session cookies alive indefinitely. Google seems to demand reauthentication daily, Amazon whenever you use account-level functions.

Lastpass is nice, but it doesn't seem to recognize half of the password change forms.
posted by CBrachyrhynchos at 9:39 AM on April 9, 2014


I don't really understand what the team is getting out of this.

Motivations vary, but reputation is a big part of it. Also wanting to do the right thing when you find a serious security flaw.

For all my bitching about their disclosure I should be fair and say they basically did the right thing, disclosing this publicly. There's a significant grey market for exploits and a zero-day bug of this significance could be worth a lot of money, maybe millions of dollars. Publishing this exploit so widely burns it and makes everyone safer.
posted by Nelson at 9:48 AM on April 9, 2014 [1 favorite]


Your password would only be in memory on the web server when you login, which isn't very often.

Unless they were able to extract the site's private key, in which case all past traffic is now compromised. Like, everything.
posted by indubitable at 9:55 AM on April 9, 2014




As a slightly techie civilian, the message I'm getting is to change passwords once everything is patched, but not to overly worry (please correct me if I'm wrong).

Minor derail: I've been using KeePass for years, but haven't figured out a good system to integrate it across OSX, Android, Win7, iOS. This has resulted in password versioning issues as I keep separate databases on different machines. Any recommendations on a password manager that is secure and can integrate well across multiple OS?
posted by arcticseal at 10:12 AM on April 9, 2014 [1 favorite]


That's a fine take away.

The unknown part of this is whether the Google security people were the first to find it.

If they were, then the exploit window was Monday morning until patch time, which is fairly small and limits exposure. If they weren't, and it's been actively exploited for the last two years, then the number of things that could be broken is... quite large.

The vulnerability was built in and undiscovered for so long because TLS is massive, and notoriously hard to fuzz. Streaming encryption protocols like TLS are very difficult to get right, meaning the industry-standard TLS library (OpenSSL) is fairly awful old code, just because no one wants to touch it to improve it, and the experts who could write a new stack realize how much work it is. Add in things like FIPS certification and it gets even harder to change. Plus, to be honest, it's a pretty thankless job -- people only hear about you when you fuck up, and then they just cream you.

What I'm guessing happened is that Apple's goto fail bug renewed interest in fuzzing SSL stacks, and this is a result of that effort.
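As a rough illustration of what fuzzing can catch here (a hypothetical length-prefixed echo service, not OpenSSL code): feed random records in, and flag any reply longer than the bytes actually supplied -- which is exactly the Heartbeat failure mode.

```python
import random

# Hypothetical length-prefixed echo service (not OpenSSL) with the
# Heartbeat bug baked in, plus a tiny fuzzer that catches it by checking
# one invariant: the echo must never be longer than what we sent.
PROCESS_MEMORY = b"...user passwords, session cookies, private keys..."

def buggy_echo(record: bytes) -> bytes:
    claimed = record[0]                   # first byte: claimed payload length
    # BUG: trusts the claimed length and reads past the record into
    # whatever sits next to it in our simulated process memory.
    buf = record[1:] + PROCESS_MEMORY
    return buf[:claimed]

random.seed(1)                            # reproducible fuzzing run
leaks = 0
for _ in range(200):
    payload = bytes(random.randrange(256) for _ in range(random.randrange(0, 16)))
    claimed = random.randrange(256)
    reply = buggy_echo(bytes([claimed]) + payload)
    if len(reply) > len(payload):         # invariant violated: memory leaked
        leaks += 1

assert leaks > 0                          # random inputs find the bug fast
```

The catch in the real world is that the fuzzer has to speak enough TLS to get a handshake far enough along to even send a heartbeat, which is part of why "hard to fuzz" matters.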

As for past traffic being compromised, it's as others have mentioned: Perfect Forward Secrecy would prevent past traffic from being decrypted, provided the exploit wasn't being abused at the same time the traffic was collected. Said another way, if an attacker recovered the private key at any time and also grabbed the ephemeral session key from memory while the stream was active, they would be able to decrypt that stream.

This is partially why the "how long has anyone known about this" question is important. As far as I've seen online, we have no evidence this was exploited in the past. However, if it were exploited in the past, it would likely be used very selectively and not have a large footprint, so the lack of evidence is not particularly telling.
posted by yeahwhatever at 10:36 AM on April 9, 2014 [2 favorites]


Perfect Forward Secrecy would prevent past traffic from becoming decrypted provided that the exploit wasn't being abused at the same time the traffic was collected.

Unfortunately, perfect forward secrecy has not been widely adopted. None of the financial institutions that I do business with implement it, just to give an example. So if someone's been collecting SSL traffic against the possibility that they might one day find an exploit and decrypt it, they've pretty much hit the jackpot.
posted by indubitable at 11:03 AM on April 9, 2014


I've been using KeePass for years, but haven't figured out a good system to integrate it across OSX, Android, Win7, iOS. This has resulted in password versioning issues as I keep separate databases on different machines. Any recommendations on a password manager that is secure and can integrate well across multiple OS?

KeePass on Windows, KeePassX on Mac, MiniKeePass on iOS and KeePassDroid on Android, with the authoritative version of the .kdb kept on DropBox, and a backup copy containing your current DropBox credentials on a micro SD card in an Elago Mobile Nano II reader attached to your keyring.
posted by flabdablet at 11:18 AM on April 9, 2014 [9 favorites]


I was confused about Perfect Forward Secrecy and Heartbleed at first so let me make it explicit. The attack is that someone recorded a bunch of SSL traffic from a target site a year or two ago. They can't decrypt the traffic and the attacker doesn't have the Heartbleed exploit, so all is safe. But the attacker kept the recorded traffic in an archive. Then on Monday Heartbleed hits and the attacker quickly uses the active exploit to steal the SSL private key from the target site. They can then take that private key and go back to their traffic archive and decrypt it.

Perfect Forward Secrecy is a neat trick where the keys used to encrypt traffic change all the time, so that even if an attacker steals the current long-term key they can't go back through their archives and decrypt old traffic. SSL can do this trick, but it's awkward and still not in common deployment. Google made a big splash when they implemented it back in 2011. Twitter did it in late 2013, and Facebook said they were working on it, but I'm not sure it's deployed. Amazon recently offered it as an AWS option and Cloudflare has had it for a while. Here's EFF's push for perfect forward secrecy.
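A toy sketch of the ephemeral key exchange behind forward secrecy (demo-sized group and made-up helper, absolutely not TLS or safe parameters): each session derives a fresh secret from throwaway exponents, so a long-term key stolen later can't reconstruct old session keys from recorded traffic.

```python
import hashlib
import secrets

# Toy ephemeral Diffie-Hellman (demo-sized Mersenne prime, NOT a safe or
# standard group): both sides pick throwaway exponents, derive the same
# shared secret, then forget the exponents. The server's long-term key
# would only *sign* the exchange, so stealing it later reveals nothing.
P = 2**127 - 1   # a Mersenne prime, fine for a demo
G = 3

def session_key() -> str:
    a = secrets.randbelow(P - 3) + 2      # client's ephemeral secret
    b = secrets.randbelow(P - 3) + 2      # server's ephemeral secret
    A, B = pow(G, a, P), pow(G, b, P)     # public halves, sent in the clear
    assert pow(B, a, P) == pow(A, b, P)   # both sides agree on the secret
    return hashlib.sha256(pow(B, a, P).to_bytes(16, "big")).hexdigest()

# Each session gets an unrelated key; once a and b are discarded, recorded
# ciphertext can't be retroactively decrypted with any long-term secret.
k1, k2 = session_key(), session_key()
assert k1 != k2
```

Contrast this with plain RSA key exchange, where every session key is encrypted to the one long-term key -- steal that and the whole archive opens.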
posted by Nelson at 11:27 AM on April 9, 2014 [5 favorites]


I'm seeing a lot of "fixed" sites, but nobody has replaced their certificates. Is it because they've determined that nobody has read their private key, or is it impossible to use the heartbleed vuln to read private keys?

Of course some sites were never vulnerable in the first place.
posted by Pruitt-Igoe at 12:06 PM on April 9, 2014


MeTa.
posted by homunculus at 12:41 PM on April 9, 2014


I'm seeing a lot of "fixed" sites, but nobody has replaced their certificates. Is it because they've determined that nobody has read their private key, or is it impossible to use the heartbleed vuln to read private keys?

No, they should fix their keys.

The nature of the exploit is that you get /some/ random data back, which may or may not include secrets. There's also a possibility the software will just crash; it depends on exactly where the memory it's trying to access winds up being.

In the case where it doesn't crash, there's no indication that anything happened in logs or anything like that, except plausibly some connections that dropped for no reason. So anybody who's "fixed" but not replaced their keys is in a state of sin.

It's also worth noting that if you're using user certificates with client programs that use OpenSSL for TLS, your user certs are vulnerable to hostile servers, too. Fortunately, user certs are not widely deployed, and when they are, it's generally for authentication with machines in closed environments, so orchestrating that attack has a higher threshold.
posted by atbash at 12:53 PM on April 9, 2014


For example, pb says metafilter was patched, so presumably they had the bug and then fixed it. But metafilter.com's cert is from November 2013.
posted by Pruitt-Igoe at 1:05 PM on April 9, 2014


Yeah, I don't understand that either. Stackoverflow says they replaced their certificates but I still see them as being valid from last July. Is it a newly issued certificate with the same date range? If so, is there any way to see that it was recently installed?
posted by smackfu at 1:18 PM on April 9, 2014


"The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof."

Again, which is scarier: accident or intention?
The man behind OpenSSL has defense ties. The NSA would be happy to have these keys, would they not?
posted by dougiedd at 1:21 PM on April 9, 2014




Tell me like I'm somebody's mom. Does this affect possible vulnerabilities of files that were contained on C drives and never emailed or otherwise shared during the entire window of vulnerability? Could a hacker get those? I'm no journalist or human rights activist, just paranoid.
posted by Countess Elena at 2:30 PM on April 9, 2014


Tell me like I'm somebody's mom.

I will because I've been sitting on this metaphor since yesterday:


Remember the Cone of Silence from Get Smart? It never worked, but let's pretend it did. You and I can Request the Cone! and engage in secret conversations, safe from audio bugging. That's SSL.

The problem is that we just realized the cone is made out of transparent plastic. Everyone can read our lips! We don't know if any one has been reading our lips, but it's possible. First we're going to paint our cone with black paint so no one can read our lips (that's the bug fix), and then we're going to go over all of our secret plans and change them in case someone had been reading our lips (that's the new certificates people have been installing, and why you should change your password).

But! It turns out lip reading is the extent of KAOS' abilities. They can't read our minds. That means only things we spoke aloud could have been intercepted. In the same way, only the computer's RAM (its working memory) was affected by this. The potential is there for anything transmitted to be intercepted (technically, anything in the OpenSSL process's memory), but not for anything that just sat in your head. Or on the computer's hard drive.


And because you mentioned your C drive, you should know this is basically restricted to web servers. There are a few funky ways a home computer could have web traffic intercepted by a connection to a malicious server, but in the usual course of events, your home computer wouldn't have this particular sort of interceptable connection.
posted by Nonsteroidal Anti-Inflammatory Drug at 3:15 PM on April 9, 2014 [11 favorites]


Thank you for that. It made me smile. I have no reason to get all anxious about hackers wanting my Super Secret Diaries, but I did.
posted by Countess Elena at 3:17 PM on April 9, 2014 [1 favorite]


The attack works against users of OpenSSL, which are servers and non-browser clients. It could trigger crashes (not sure if there's a 'safe' range where the exploit is even less likely to be noticed) and was probably used against high-value targets, like the people who store your mail or patch your OS, the kind of things that affect people indirectly. It could have been used discreetly for two years, and will be used with less discretion against anything that remains unpatched now.
posted by Tobu at 4:32 PM on April 9, 2014


It also (AFAIK) lets attackers steal private keys, which lets them impersonate servers, which lets them do an undetectable man-in-the-middle attack, which lets them steal passwords, which gives them access to everything passwords provide - reading email, sending spam, making credit card purchases, transferring money from online banks, ...

My paranoid and partly-informed advice is to make sure that the sites you use are fixed before visiting them (not just before logging in, but before visiting them if you already have a logged-in session). Especially from open networks like free wi-fi and corporate or school networks.
posted by Pruitt-Igoe at 4:55 PM on April 9, 2014


The Canada Revenue Agency's very popular (about 85% of returns are e-filed) income tax eFile service has been taken offline to patch the Heartbleed bug three weeks before the filing deadline.
posted by Mitheral at 6:06 PM on April 9, 2014 [2 favorites]


Glad I e-filed last week, I can't imagine the panic I'd be feeling if I hadn't. That said, 3 weeks is ample opportunity to print and mail it off.
posted by arcticseal at 6:35 PM on April 9, 2014


And because you mentioned your C drive, you should know this is basically restricted to web servers.

Although the risk is relatively low, this is not true. Anything that uses the OpenSSL library is vulnerable -- on the server side, this also includes things like mail servers. But things on the client side are vulnerable too. On my Linux system, after I installed the security updates, I ran the following command, which lists the running processes still using the old OpenSSL library and therefore needing a restart.

grep -l 'libssl.*deleted' /proc/*/maps | tr -cd 0-9\\n | xargs -r ps u

I ran it in my unprivileged user account, and it pointed out that both my IRC client and my BitTorrent client were relying on the buggy library.

If you connected to a malicious server using a client that was using the buggy library, the malicious server could send heartbeat requests to your client and read its memory.
posted by Jimbob at 8:15 PM on April 9, 2014 [2 favorites]


I'd argue that Linux, IRC, and BitTorrent count as sufficiently funky for the typical home user, but it's a good point that there is a lot of software out there affected by this.
posted by Nonsteroidal Anti-Inflammatory Drug at 8:37 PM on April 9, 2014


I'd argue that Linux, IRC, and BitTorrent count as sufficiently funky for the typical home user

Yeah they probably are not a typical home use case - but OpenSSL is used by a lot of equivalent Windows software people may be using, and in Windows it's not so easy to check if your software is using the guilty library.
posted by Jimbob at 8:42 PM on April 9, 2014


Which is really unfortunate. I had to track down some OpenVPN versions, which involved finding the OpenSSL DLL in a hidden folder (not the same hidden folder the documentation said it'd be in), clicking over to the third tab of the properties window, and then comparing one character in a long version string. Not the type of thing many people are going to bother with, I think.

I'm morbidly curious to see if we ever hear of a big attack in the future that was enabled by negligent or ignorant system administrators or users.
posted by Nonsteroidal Anti-Inflammatory Drug at 8:47 PM on April 9, 2014


Jimbob: "but OpenSSL is used by a lot of equivalent Windows software people may be using, and in Windows it's not so easy to check if your software is using the guilty library."

WinSCP is a vulnerable client but "Note that OpenSSL is used with FTP over TLS/SSL only. Majority (about 98%) of WinSCP users use SSH (SFTP/SCP) and plain FTP only and are NOT affected!"
posted by Mitheral at 9:51 PM on April 9, 2014


A lot of the attention has been directed at web servers because OpenSSL is part of the most common LAMP web-services stack. On the client side, a fair bit of the traffic isn't going to be vulnerable:

Probably safe (at least from this bug):

* Desktop Chromium including Chrome and likely Opera 20: NSS
* Mozilla including Thunderbird, Iceweasel, probably Postbox, and other derivatives: NSS
* OS X: OpenSSL on my work laptop reports 0.9.8, a version that predates the bug. Safari and Mail.app may be using an Apple SSL stack.
* Internet Explorer: Probably a Microsoft library.
* AIM: Uses NSS according to Wikipedia.
* Anything Java.
* Dropbox: Claims to use 0.9.8. Unix Dropbox clients might be dependent on installed SSL libraries.

Possibly vulnerable?
* Chrome for Android: Uses parts of OpenSSL but not for the full protocol.
* Opera Classic: Had problems with SSL bugs in the past.
posted by CBrachyrhynchos at 10:56 PM on April 9, 2014


It also (AFAIK) lets attackers steal private keys, which lets them impersonate servers, which lets them do an undetectable man-in-the-middle attack

The under-publicised bit is that you don't need to do a man-in-the-middle. Affected servers dump out whatever is in memory, which in my testing was the HTTP headers from the most recent request from another user, which can include that user's passwords, session cookies, tokens, etc, which can then undetectably be used to impersonate that user.

You don't need to be able to eavesdrop on the other user's network traffic. Heartbleed does it for you.
posted by grahamparks at 12:09 AM on April 10, 2014 [2 favorites]




Chrome for Android: Uses parts of OpenSSL but not for the full protocol.

It's probably only Android 4.1.1 that's vulnerable. Via Reddit:
Android 4.1.1_r1 upgraded OpenSSL to version 1.0.1:
https://android.googlesource.com/platform/external/openssl.git/+/android-4.1.1_r1

Android 4.1.2_r1 switched off heartbeats:
https://android.googlesource.com/platform/external/openssl.git/+/android-4.1.2_r1
4.1.1_r1 updated to OpenSSL 1.0.1, but then 4.1.2_r1 compiled it with OPENSSL_NO_HEARTBEATS
posted by Nonsteroidal Anti-Inflammatory Drug at 7:15 AM on April 10, 2014 [1 favorite]


I worry for the myriad sites out there whose owners won't even be aware that they have a problem - let alone have somebody who knows how to fix it for them. I am thinking of people who have a system set up for them and then have problems even updating an SSL certificate.
posted by rongorongo at 8:47 AM on April 10, 2014


There's always that one password that I can type the same way twice, but apparently never again.
posted by CBrachyrhynchos at 9:01 AM on April 10, 2014 [1 favorite]




More excitement: https://reverseheartbleed.com/, from Medium's blog post


In short, some services patched their web server hosts, but aren't patching the clients that extract data from links, etc, that users post.
posted by Nonsteroidal Anti-Inflammatory Drug at 1:42 PM on April 10, 2014 [3 favorites]




grahamparks: "You don't need to be able to eavesdrop on the other user's network traffic. Heartbleed does it for you."

Only until the server is patched, however. In that sense, it's not much worse than any other remote exploit. In some ways it's better, since it doesn't (at least automatically) lead to a compromise of the server itself, so you can be fairly certain the server hasn't been rootkitted. The more pernicious issue is that the server's private key may have leaked, leaving future communications in a Schrödinger's-cat situation until keys are regenerated and certificates reissued.

'Twould be nice if there were a way to tell whether or not your key had been stolen...
posted by wierdo at 2:38 PM on April 10, 2014


Highly useful xkcd comic: How the Heartbleed Bug Works
posted by Rhaomi at 2:01 AM on April 11, 2014 [4 favorites]


How do you fix two-thirds of the web in secret?
When word of the Heartbleed bug first came out, news spread like a fire alarm — but it didn’t spread evenly. The vulnerability was spread across as many as two out of every three servers, which made a standard disclosure impossible. Some companies like Facebook got the news early, either from Google or OpenSSL itself, and were already patched when Monday’s news broke. Others, like Amazon and Yahoo, were left scrambling to protect themselves. But why did some companies have advanced warning while others got left in the cold? How did Facebook find out while Yahoo was left out of the loop?
posted by the man of twists and turns at 7:48 AM on April 11, 2014


EFF: Were Intelligence Agencies Using Heartbleed in November 2013? Archived server traffic reveals evidence of deliberate exploit of the bug. Attack came from 193.104.110.*, a suspected intelligence agency botnet.
posted by Nelson at 8:32 AM on April 11, 2014 [1 favorite]


Heartbleed seems to be quite tough to actually exploit: Heartbleed security flaw may not be as dangerous as thought. Interesting to see that folks are testing it.
posted by bonehead at 9:27 AM on April 11, 2014


EFF: Were Intelligence Agencies Using Heartbleed in November 2013?

Apparently, the answer is yes.
posted by Cash4Lead at 12:24 PM on April 11, 2014 [2 favorites]


The NSA says "no". Take that as you will.

Sorry for the image link, I haven't been able to find another source.
posted by Nonsteroidal Anti-Inflammatory Drug at 2:02 PM on April 11, 2014


Here's a text copy of the NSA denial. I can't find any completely reliable source for this announcement yet, but it seems unlikely to be a spoof.

I wish Bloomberg's sourcing for the NSA knowledge was better than "two people familiar with the matter". Historically one of NSA's missions has been securing government communications, and Heartbleed is serious enough you have to think they'd realize the harm to US interests was as big as the potential benefit to NSA eavesdropping. But since Snowden's revelations I don't know what to believe any more.
posted by Nelson at 2:09 PM on April 11, 2014


Heartbleed bug: Check which sites have been patched

We compiled a list of the top 100 sites across the Web, and checked to see if the Heartbleed bug was patched.
posted by Blazecock Pileon at 2:59 PM on April 11, 2014


Cloudflare: Answering the Critical Question: Can You Get Private SSL Keys Using Heartbleed?. Their answer, "perhaps not", at least for the NGINX they run. Article also notes they were notified about the bug March 31, about a week before the public disclosure.
posted by Nelson at 3:46 PM on April 11, 2014




You have to be mighty credulous to believe the ODNI when they say they believe in responsible disclosure. Surely the world's best cyberwarfare group has something more than zero 0-days (I guess this means we didn't do Stuxnet either?).
When Federal agencies discover a new vulnerability in commercial and open source software – a so-called “Zero day” vulnerability because the developers of the vulnerable software have had zero days to fix it – it is in the national interest to responsibly disclose the vulnerability rather than to hold it for an investigative or intelligence purpose.
So uh, yeah. The denial smells kind of funny IMO.
Also, seriously, Tumblr? I can't get over that.
posted by polyhedron at 9:20 PM on April 11, 2014 [2 favorites]


I'm seeing a lot of "fixed" sites, but nobody has replaced their certificates.

I just got a note from the fastmail.fm admins asking me to change my password; the cert you get when connecting to https://www.fastmail.fm was issued 8-Apr-2014. Unfortunately I don't have a copy of their older cert, so I can't check whether they've actually changed their keys as well. They seem pretty on-the-ball, so I expect they would have done.

Their current public key info is
PKCS #1 RSA Encryption

Modulus (2048 bits):
c5 db a7 65 de 1a 6a 68 ea 30 bf a3 e1 f5 b5 7d 
9c bb 05 37 07 d0 f3 fa c3 4e cb 0b 5e f4 00 00 
3e e6 b0 5e 86 a8 e4 77 50 b7 a9 1e 45 30 00 3d 
32 a5 05 6b 78 e4 d0 af ab 3e 23 8e 2d 6e c1 0c 
b2 f6 c5 55 e1 02 e7 35 1c 2d fc 2b f4 c4 be 6f 
e6 4c 04 de 5e 64 d9 12 1c ad 8c 9d 77 83 bb da 
dd 5c 0e 11 a2 33 32 e2 b4 51 35 48 32 93 4d c9 
e2 f7 b6 61 04 d9 60 b5 1a eb c4 84 d9 2a 44 24 
94 5f 56 53 93 a7 f8 28 42 22 f3 b8 29 11 5d 1e 
bf 40 d4 3b 4b 2e dc ce f8 ec 5c c3 0c cb a1 9b 
9e 74 5a e0 1b 0c e2 02 ee 1f 15 2a 7a 01 49 7a 
fe d6 72 86 ab 46 7c 9b 94 1e 9d 98 5f 50 28 26 
2f ec 10 77 f5 70 f3 34 ee 6d b9 31 b0 bb 53 95 
85 38 93 6b 10 8a 81 6b 83 9f 23 59 f3 61 b5 74 
c0 1e 80 26 12 90 6a ab 9b 7a 87 28 a1 6b 06 69 
a2 9c 90 ec 72 57 c5 2b b1 a6 cd ed cf a6 bc 51 

Exponent (24 bits):
65537
and if anybody does have an older Fastmail cert lying around somewhere it might be interesting to compare.
posted by flabdablet at 9:34 PM on April 11, 2014


Pruitt-Igoe: "I'm seeing a lot of "fixed" sites, but nobody has replaced their certificates. Is it because they've determined that nobody has read their private key, or is it impossible to use the heartbleed vuln to read private keys?

Of course some sites were never vulnerable in the first place."

Keep in mind that issuers don't appear to bump the 'issued on' date. You should check the serial number to verify it changed. And the Revoked Cert list to see if it's been revoked.
posted by pwnguin at 10:15 PM on April 11, 2014


Is that really the only way? Compare the serial number with the one on a cert I don't have a copy of?
posted by Pruitt-Igoe at 12:55 AM on April 12, 2014 [1 favorite]


So uh, yeah. The denial smells kind of funny IMO.

I think this is the more relevant line, since it's pretty easy to use anything for national security:

"Unless there is a clear national security or law enforcement need, this process is biased toward responsibly disclosing such vulnerabilities."
posted by smackfu at 5:49 AM on April 12, 2014


One thing I'm not getting: why could the code request arbitrarily-sized messages? Wouldn't security code have a strictly-enforced limit for all messages so they don't go looking where they shouldn't? Basic bounds-checking can't be that expensive …
posted by scruss at 7:46 AM on April 12, 2014


scruss: Essentially because the tools used to build these fundamental software stacks are still conceptually rooted in the 1970s, when all of that memory safety was expected to be worked out in the software engineer's head.
posted by whittaker at 8:14 AM on April 12, 2014 [1 favorite]


There's no elegant way to signal the end of a variable-length field over a noisy channel. If your termination byte(s) (usually NUL in C-based languages) appear in the middle of the field as data or noise, the reader puts bad data into the next field. If the termination byte gets dropped in transit, the reader will just keep on going. This isn't just a problem for computer languages; DNA shares similar problems with truncated genes or genes that double in size because the terminator gets mangled or dropped.

There are a couple of alternatives. You can use escape sequences, which add another layer of complexity onto simple data. You can just not use variable-length fields and take the bandwidth hit. Or you can prepend a few bytes to specify the length of the field. It's my understanding that languages with memory management take care of counting and tracking the bytes automatically. But internet packets don't have that luxury.
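The length-prefix option described above can be sketched in a few lines of C. This is a toy illustration, not any real wire format; `frame_write` and `frame_read` are invented names. Note the reader refuses a prefix that claims more bytes than actually arrived:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy length-prefixed framing: a 2-byte big-endian length, then the
 * payload. No terminator byte is needed, so payload bytes can take
 * any value -- but the reader must not trust the prefix blindly. */
static size_t frame_write(uint8_t *buf, const uint8_t *data, uint16_t len)
{
    buf[0] = (uint8_t)(len >> 8);
    buf[1] = (uint8_t)(len & 0xff);
    memcpy(buf + 2, data, len);
    return (size_t)len + 2;
}

/* Returns the payload length, or -1 if the prefix claims more bytes
 * than the buffer actually holds. */
static int frame_read(const uint8_t *buf, size_t buf_len, const uint8_t **payload)
{
    if (buf_len < 2)
        return -1;
    uint16_t len = (uint16_t)((buf[0] << 8) | buf[1]);
    if ((size_t)len + 2 > buf_len)
        return -1;              /* claimed length exceeds what arrived */
    *payload = buf + 2;
    return len;
}
```

The embedded NUL in the test payload below is exactly what terminator-based framing can't carry.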
posted by CBrachyrhynchos at 9:54 AM on April 12, 2014 [1 favorite]


The problem was not caused by the use of a length field per se; the problem was caused by the failure to do a sanity check on that length field's value.

The length field in question is supposed to identify the number of payload bytes that a heartbeat request contains, so that the server knows how many bytes to copy from the incoming heartbeat request to the outgoing heartbeat response. But the server also knows the total length of the request packet, and it ought to check that the specified payload length doesn't extend the putative payload beyond the length of that request; if it just blindly trusts a specified payload length, as the buggy OpenSSL code was doing, then what ends up getting echoed back in the heartbeat response can include data that was never present in the heartbeat request packet, but instead comes from whatever happens to follow it in server memory.

Many people have said that a language with as few inbuilt safety checks as C is unsuitable for writing this kind of code; a "proper" language would presumably throw exceptions when asked to copy 65535 bytes from a request packet field that is in fact only 2 bytes long. But it seems to me that to take this view is to sweep a genuine problem under the rug to some extent, because at some point you're still going to need some kind of request packet parser to make sense of what's just arrived over the wire. Extracting meaning from an incoming chunk of request data is just something that's always going to need to be done with care, and code that does this is always going to want thorough review before it goes live regardless of what language it's implemented in.
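The missing sanity check described above can be reduced to a toy sketch. The field layout (1-byte type, 2-byte payload length, then payload) follows RFC 6520, but the function and its simplifications (padding omitted) are mine for illustration, not OpenSSL's actual code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative sketch: echo a heartbeat payload back, but only after
 * checking that the claimed payload length actually fits inside the
 * received record. Skipping this check is the Heartbleed bug. */
static int echo_heartbeat(const unsigned char *rec, size_t rec_len,
                          unsigned char *out, size_t *out_len)
{
    if (rec_len < 3)
        return -1;              /* too short to hold type + length */
    uint16_t claimed = (uint16_t)((rec[1] << 8) | rec[2]);
    if ((size_t)claimed + 3 > rec_len)
        return -1;              /* length field lies; refuse to echo */
    memcpy(out, rec + 3, claimed);  /* safe: payload really is there */
    *out_len = claimed;
    return 0;
}
```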
posted by flabdablet at 12:29 PM on April 12, 2014


Theo de Raadt of OpenBSD pointed out that the standard C libraries that OpenSSL uses have protection against this kind of memory access error; however, OpenSSL uses its own memory allocation by default because of performance problems on "some platforms." If accurate, it looks like OpenSSL potentially has a bigger problem than a single bug.

Granted, de Raadt isn't one to shy away from a flamewar when it comes to promoting OpenBSD.
posted by CBrachyrhynchos at 12:56 PM on April 12, 2014 [1 favorite]




Preventing heartbleed bugs with safe programming languages: an experiment replacing the buggy function in OpenSSL with code written in ATS. It's kinda wonky, but interesting.

I asked on Twitter a few days ago what languages other than C one could write a library like OpenSSL in. The problem is you need to be able to compile to linkable object code so you can embed the library in C processes like Apache. That rules out most popular modern languages; something like Python has way too much runtime baggage. The list of languages I collected was C++, Rust, Ada, Haskell, Lua, D, Fortran, Julia. It's not clear if all of those qualify as low overhead and most of those are pretty fringe languages. I'd argue C++ is the only realistic option there, and it doesn't really solve the goal of a memory-safe language unless you are very careful in how you use it. It's a bummer we don't have more system-level programming languages.
posted by Nelson at 4:24 PM on April 12, 2014


I love lua, but lua in lieu of C trades one set of bugs for another. This space is what Rust is designed for- safety and zero-cost abstractions are key design goals- but admittedly it's fringey for now. Julia is pretty squarely fringe.

I would say that Ada is both low overhead and not too fringe- there's lots of Ada expertise among defense contractors.
posted by a snickering nuthatch at 5:42 PM on April 12, 2014


Nelson: "I asked on Twitter a few days ago what languages other than C one could write a library like OpenSSL in."

I bet you could implement SSL entirely in Java, and it throws exceptions if you try hinky things with the heap...
posted by pwnguin at 5:59 PM on April 12, 2014


Yeah Rust is clearly aimed at this kind of purpose, it's just too new for me to trust it. pwnguin: you can't link Java effectively into C processes like Apache. It is a memory safe language though, so there's that.
posted by Nelson at 6:15 PM on April 12, 2014


Nelson, have a look at this (preventing OpenSSL bugs with strong typing and ATS). ATS is interesting because it provides a very modern and rigorous type system and interoperates with C at a very deep level (compiling to C and having no runtime).

edit: oh never mind, I had skipped over the first line of your comment. You could add OCaml to your list, if enough runtime for a GC is acceptable.
posted by Tobu at 6:54 PM on April 12, 2014


LastPass Heartbleed checker
posted by nickyskye at 8:12 PM on April 12, 2014 [1 favorite]


Nelson: "pwnguin: you can't link Java effectively into C processes like Apache. It is a memory safe language though, so there's that."

Ah, I totally skipped over that part of your comment. I agree that requirement is certainly hampering. But I have an idea. Lots of "webscale" deployments use lots of layers. SSL termination with haproxy, pound, etc. and forward requests to Varnish -> JBoss/Apache -> fcgi -> wtfelse using UNIX sockets and other IPC. Maybe instead of a C library, we just make SSL a small program with just enough extra code to write to a UNIX socket. Instead of communicating to SSL via function calls, your wrapper sends messages via socket. It should be pretty lightweight; stud implements an SSL unwrapper in 3.5KLoC.

It certainly fits the UNIX philosophy of "lots of small programs working together," in the same fashion as Varnish. And IPC has come a long way, and is probably efficient enough these days.
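The plumbing being suggested can be sketched with an AF_UNIX socketpair. This is purely illustrative: both "processes" live in one function just to keep the example self-contained, and `relay_demo` is an invented name standing in for a TLS-terminating frontend talking to a backend:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Minimal sketch of SSL-termination-as-a-separate-process plumbing:
 * an AF_UNIX stream socketpair stands in for the channel between a
 * TLS-terminating front process and a backend. Both ends live in one
 * process here only to keep the example self-contained. */
static int relay_demo(const char *msg, char *reply, size_t reply_cap)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return -1;

    /* "frontend" writes the decrypted request down the socket */
    if (write(sv[0], msg, strlen(msg)) < 0)
        return -1;

    /* "backend" reads it and, for this toy, just echoes it back */
    char buf[256];
    ssize_t n = read(sv[1], buf, sizeof buf);
    if (n < 0 || write(sv[1], buf, (size_t)n) < 0)
        return -1;

    /* "frontend" collects the reply it would then re-encrypt */
    ssize_t m = read(sv[0], reply, reply_cap - 1);
    if (m < 0)
        return -1;
    reply[m] = '\0';

    close(sv[0]);
    close(sv[1]);
    return 0;
}
```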
posted by pwnguin at 10:24 PM on April 12, 2014 [1 favorite]


That ATS stuff pretty much proves my point. It looks like the author of that has gone to a lot of trouble to force his compiler to insert a sanity check in the one place where the original C code should have had one put in by hand - and would have had, if the original C coder's mindset was similar to that of the ATS coder to begin with.

I can't see how getting an ATS type specification 100% right is any easier, or any more likely not to be screwed up in practice, than writing careful C code to validate untrustworthy fields in request packets.

Maybe that's just me being the kind of obsessive-compulsive who believes in having several competent reviewers go over security-critical code with a fine-tooth comb before it's allowed to go live. Or maybe it's because I'm the kind of dinosaur coder who still expects people to want to understand exactly what the code they're writing actually does, and who prefers working with compilers that are simple enough that this is in fact an achievable aim.

Sloppy coding is not by any means the only source of this kind of error; the Law of Leaky Abstractions causes at least as many, and attempting to avoid all conceivable classes of error by inserting an ever-increasing number of layers of abstraction between coder and CPU has more than a faint whiff of Architecture Astronautics.
posted by flabdablet at 4:19 AM on April 13, 2014 [1 favorite]


I suggest watching Yaron Minsky's Caml Trading and Effective ML videos, flabdablet, especially when he talks about code review around 25m30s:
- "You basically cannot pay people enough to code review dull [boiler plate] code"
- "Any [mental tools] that let you offload [the informal proofs involved in reading code] is a huge win"
posted by jeffburdges at 5:14 AM on April 13, 2014


It's worth noting that the Heartbleed bug checkin did have a code review noted at the time it was checked in. I don't know how thorough OpenSSL reviews are (obviously, not enough) but there were at least two pairs of eyes on the code when it went in. Both missed the bug. And while code review is nice and all, it sure would be nice if security critical code were written in a language with fucking bounds checking so you don't have to worry about something as primitive as whether data[i] is going to go off the end of the data array or not like some savage. If the tools catch the simple bugs it lets reviewers focus on the hard bugs.

pwnguin; federated deployment of stuff is a great idea and in practice a lot of big datacenters do run SSL in a separate server from the appservers. Often in dedicated hardware. But that server still has to implement SSL in some language and I'm dismayed at how few practical choices there are instead of C. Also SSL is arguably the one thing you actually want to run in the appserver providing the application. End to end encryption afterall. And we know now that NSA is actively attacking services like Google inside their networks, in their datacenters, so you actually need encryption all the way to the appserver.
posted by Nelson at 7:30 AM on April 13, 2014 [2 favorites]


Passwords are Obsolete, and they make Heartbleed a thousand times worse. Provocative essay arguing SMS/email challenges and cookies are sufficient authentication for websites. I suspect the author is half-trolling but it's an interesting take on the password problem.
posted by Nelson at 11:18 AM on April 13, 2014 [2 favorites]


Nelson: "End to end encryption afterall. And we know now that NSA is actively attacking services like Google inside their networks, in their datacenters, so you actually need encryption all the way to the appserver."

I don't see the problem with UNIX domain sockets; if the NSA can read communication going between two processes on the same server, it's already game over.

Although, as I think about it, the client side could be pretty tricky.
posted by pwnguin at 4:50 PM on April 13, 2014


Akamai stated on Friday that they used a patched version of OpenSSL with a special memory allocator for the private keys, which had protected against divulging the customers' keys. They released a version of the patch for inclusion in OpenSSL. Reviewing the patch led to the realization that, actually, enough private key material is allocated using the normal memory allocator to be vulnerable to Heartbleed. Akamai will now rotate all certs/keys.

This is a nice writeup of demonstrating that the private keys in OpenSSL can be extracted using heart beat messages. I liked that there are apparently now trolls sending fake '--BEGIN RSA PRIVATE KEY---- ....' blocks to the Cloudflare challenge server, which then turn up in the heart beat messages.
posted by ltl at 12:12 AM on April 14, 2014 [2 favorites]


Yaron Minsky's Caml Trading and Effective ML videos

repeatedly stress the point that code review is fundamental to code correctness, and also stress the point that code that's easy to read is essential to making code review work properly.

His point about boilerplate code being likely to hide errors during review is fair and reasonable - but so is what he says about making sure that what the code is doing is obvious and the predictably awful consequences of hiring bad programmers.

C is about as close to the machine as any non-assembly language can possibly get. It isn't a high-level language, despite its superficial resemblances to one and the effort expended by successive ANSI committees to make those resemblances appear less superficial over time. My point is that this should not automatically make it unsuitable for writing security-critical code in - especially code like an SSL layer that absolutely needs to behave demonstrably, provably and obviously as-designed.

C's lack of safety features - particularly its lack of bounds checking - is a well-known characteristic of that language. If you're coding in C, you do need to be careful with it, and you ought to know that; a competent C programmer will have excellent and reliable instincts about what the kind of care that's required looks like, and exercise it as a matter of course.

There's nothing in either of the videos you linked that has changed my belief that the kind of programmer who doesn't instinctively treat a client-supplied length value with due suspicion while working in C is the same kind of programmer likely to generate bug-ridden code in any programming language. At some point, you really do have to be a good enough rider not to need the training wheels.

On a side note, and pushing the bicycle analogy a little harder than it possibly should go: object oriented languages in general, and C++ in particular, have long struck me as misguided attempts to make the training wheels into a complete substitute for the primary drive train. I don't think you'd get a ciggie paper between me and Yaron Minsky on this point.
posted by flabdablet at 1:08 AM on April 14, 2014 [1 favorite]


SMS/email challenges and cookies are sufficient authentication for websites

Abuse of cookies to use them as assumed-secure substitutes for actual shared secrets: still abuse.

Assuming that SMS and email constitute channels secure enough to stand alone as single factor authenticators: GTFO.
posted by flabdablet at 2:55 AM on April 14, 2014 [2 favorites]


Forcing customers to wait to use your service until they receive a message via non-guaranteed delivery: Good luck with that.
posted by Nonsteroidal Anti-Inflammatory Drug at 6:36 AM on April 14, 2014


Jacob Appelbaum mentioned that OpenSSL appears excessively complicated in his LibrePlanet 2014 keynote "Free software for freedom, surveillance and you", flabdablet, but..

In principle, I'd agree with your position that OpenSSL could be well written and reviewed in C. Jane St's trading software is far more mathematically complicated, and changes faster, than a protocol layer like OpenSSL.

It's still true however that C requires considerable boilerplate, just like Java, C++, etc., and the type system is fairly weak. We could therefore review code more widely if we made code review less boring.

Also, there is not really any advantage to manual bounds checking in mid-level languages today, the CPUs are so blazingly fast relative to memory that you'll never gain anything, and the compiler is potentially better at minimizing unanticipated branches.
posted by jeffburdges at 8:41 AM on April 14, 2014


It's still true however that C requires considerable boilerplate, just like Java, C++, etc., and the type system is fairly weak. We could therefore review code more widely if we made code review less boring.

Sure. And C has a macro preprocessor which is actually pretty good at generating much of the necessary boilerplate for you, if you're brave or foolish enough to have a crack at employing it for that purpose. Your code reviewer will swear at you for forcing them to get their head around what you've done, but they won't be bored.

Last time I wrote anything seriously stream-translatey in C, I did it using some fairly horrendous abuse of the preprocessor. Instead of specifying structs directly in C, my little library required you to specify them in an xdrtypes.h header file using macros. Then there were another three .h and two .c files, each of which #included that same xdrtypes.h after doing its own set of #defines for each of the macro names you were allowed to use inside xdrtypes.h.

In genxdrtypes.h, the macros you put in xdrtypes.h got expanded to native C struct and array definitions for the types that xdrtypes.h was written to specify. In getxdr.c, the very same macros got expanded into functions that read and validated a stream and returned structs and/or arrays of the appropriate types; and in putxdr.c they got expanded into functions that would serialize the C types to a stream. Getxdr.h and putxdr.h generated prototypes for the corresponding functions in getxdr.c and putxdr.c.

Code review for all the back-and-forth-to-the-wire stuff involved only a few hundred lines of my code, none of which was repetitous boilerplate, plus the library user's own xdrtypes.h to make sure the right macros were called in the right sequences to match the types needed for the application. The actual functions that ended up in the final application were guaranteed to match the types they serialized and deserialized correctly, because it was actually the preprocessor that wrote all that code.

I was quite pleased with the final result, which reduced hand-written boilerplate very nearly to zero and automatically inserted bounds and sanity checking everywhere it needed to be.
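The technique described above is essentially the "X macro" pattern: one master field list, re-expanded by each consumer so the definitions can never drift apart. A much-reduced sketch, with invented field names; a real version, like the xdrtypes.h scheme above, would also expand the list into serializer and deserializer functions:

```c
#include <assert.h>
#include <stdint.h>

/* One master list of fields. Each consumer redefines FIELD and
 * re-expands the list, so every expansion stays in sync. */
#define MSG_FIELDS \
    FIELD(uint8_t,  version) \
    FIELD(uint16_t, length)

/* Expansion 1: the native C struct definition. */
#define FIELD(type, name) type name;
struct msg { MSG_FIELDS };
#undef FIELD

/* Expansion 2: a field count, standing in here for the generated
 * (de)serializers a real version would emit. */
#define FIELD(type, name) + 1
enum { MSG_FIELD_COUNT = 0 MSG_FIELDS };
#undef FIELD
```

Adding a field to MSG_FIELDS automatically updates every expansion at once, which is the whole point: the preprocessor, not a human, writes the repetitive code.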
posted by flabdablet at 10:32 AM on April 14, 2014 [2 favorites]


Yes, you can use C's horrid macro facility. It doesn't make it a good idea. And once you introduce that much complexity, you might as well use a more complex/sane language. I mean, you're using the preprocessor to expand your code. Are you sure it does the right thing? Did you check the expanded code to make sure it does?
posted by Monday, stony Monday at 12:00 PM on April 14, 2014


C's macro facility isn't horrid, merely a little limited. And as long as you use it within those limits, and do sensible things like make the name of any macro that does anything non-trivial fully uppercase to warn the reader that it is a macro, it's far less troublesome than something more general like m4.

And yes, of course I did sanity checks on the generated code, and yes both I and my code reviewers did become sure it would always do the right thing. The macro preprocessor is, after all, just another coding language.
posted by flabdablet at 9:53 PM on April 14, 2014


I'm fine with sensible macro usage to limit boilerplate, heck I write in LaTeX regularly, but..

All that sounds far easier to do in languages with established, readable, and consistent features for abstraction, like higher order functions, algebraic data types, and parametric polymorphism so long as it's bounded, predicative, etc., rather than ad hoc like C++ templates.

We could discuss whether the particular enhancements of languages like ML, ATS, Haskell, Scala, Idris, etc. around parametric polymorphism and dependent types make serious mistakes more or less likely, become distracting, etc., but basically these type systems all provide some framework that helps the programmer think about correctness. Type systems are a tool for the programmer, not really the compiler.
posted by jeffburdges at 5:29 AM on April 15, 2014


Again, I have no beef with languages that have stronger type systems than C.

What I still believe to be true:

(a) C does provide facilities to let a competent coder generate robust and reliable workarounds for its lack of type safety and generate code tidy enough to make code review painless enough to work properly, meaning that a blanket dismissal of C as unsuitable for secure work is unjustifiable.

(b) Programmers not strongly motivated to make good use of those facilities when circumstances require them to code in C are the same people most likely to fail to take best advantage of the facilities of other languages also.

Careless coders will write careless code in any language, and it really does seem to me that trying to fix that problem by adding cool safety features to computer languages is a bandaids over bullet wounds kind of story. By all means design computer languages with cool features, to make them more pleasant for competent coders to use - that's certainly worthwhile, as Yaron Minsky so eloquently demonstrates. But you'll have your work cut out convincing me that any computer language can be so safe as to be inherently dickhead-proof.

I've looked over the OpenSSL source code, and it's pretty horrid. I don't think that's primarily because it's written in C. It has the smell of code that started out smallish and then suffered some fairly extreme scope creep, that nobody has really been game to refactor for fear of introducing unacceptable regressions. To me, that looks far more like "poorly coordinated dev team" than "poor choice of coding language"; then again, CADT.

Programming is hard.
posted by flabdablet at 6:39 AM on April 15, 2014


Eh, I'd rather not be one of the millions of people best-case inconvenienced as fall out from this bounds error because we must uphold the principle that good coders should be able to overcome all possible perils in any language.

OpenSSL's code quality is orthogonal to the language it's encoded in. If it's going to be sloppy, could it at least be sloppily done in a language where the number one security related vulnerability source is less likely to occur, please?
posted by whittaker at 7:22 AM on April 15, 2014 [1 favorite]


Remember, bounds checking as provided by the c-library was explicitly disabled by the OpenSSL devs, for reasons they obviously thought sufficient.
posted by mikelieman at 7:27 AM on April 15, 2014 [1 favorite]


mikelieman: Bounds checking that existed in general or in OpenBSD?
posted by whittaker at 7:55 AM on April 15, 2014


libc, apparently.
posted by mikelieman at 7:58 AM on April 15, 2014 [1 favorite]


In genxdrtypes.h, the macros you put in xdrtypes.h got expanded to native C struct and array definitions for the types that xdrtypes.h was written to specify. In getxdr.c, the very same macros got expanded into functions that read and validated a stream and returned structs and/or arrays of the appropriate types; and in putxdr.c they got expanded into functions that would serialize the C types to a stream.

So your argument is that your one-off, ad-hoc, informally-specified type safety and bounds-checking system is less vulnerable to the law of leaky abstractions than the widely-deployed, formally-specified, well-understood type safety and bounds-checking systems with multiple interoperable implementations that higher-level languages incorporate?

Careless coders will write careless code in any language

Yes, and some drivers will lose control at a sharp curve regardless of the presence or absence of a guardrail—but the guardrail will have a major effect on the result.
posted by enn at 8:27 AM on April 15, 2014 [3 favorites]


Reported from the OpenSSL source code: "On some platforms, malloc() performance is bad enough that you can't just free() and malloc() buffers all the time, so we need to use freelists from unused buffers."

Here's some more analysis:
What if the previous contents weren’t interesting? For example, what if malloc overwrote the previous contents of memory before reusing it? As in, exactly what malloc.conf does with the J option. Then the attacker would get a buffer full of 0xd0, which is decidedly uninteresting. But...

There’s always a but. Unless libssl was compiled with the OPENSSL_NO_BUF_FREELISTS option (it wasn’t), libssl will maintain its own freelist, rendering any possible mitigation strategy performed by malloc useless. Yes, OpenSSL includes its own builtin exploit mitigation mitigation. Of course, you could compile your own libssl with that option, but...
My interpretation is that in using their own memory management, OpenSSL didn't benefit from developments in detecting or preventing these errors built into c libraries over the last decade.
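A toy illustration of why that matters: if the library recycles buffers itself, any free()-time scrubbing or junk-filling done by the system allocator never gets a chance to run, so stale secrets survive into the next "allocation". `fl_alloc`/`fl_free` are invented stand-ins for a freelist, not OpenSSL's actual allocator:

```c
#include <assert.h>
#include <string.h>

/* Toy freelist: a single static slot stands in for a whole list of
 * recycled buffers. Nothing scrubs the memory on free or on reuse,
 * which is exactly the mitigation-defeating behavior described above. */
#define BUF_SZ 64
static char pool[BUF_SZ];

static char *fl_alloc(void)
{
    return pool;            /* hand back the recycled buffer as-is */
}

static void fl_free(char *p)
{
    (void)p;                /* no junk fill, no scrub, no system free() */
}
```

With a junk-filling malloc (e.g. OpenBSD's malloc.conf J option) the reused buffer would come back full of 0xd0; with the freelist, the old contents come back verbatim.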
posted by CBrachyrhynchos at 9:43 AM on April 15, 2014 [3 favorites]




So your argument is that your one-off, ad-hoc, informally-specified type safety and bounds-checking system is less vulnerable to the law of leaky abstractions than the widely-deployed, formally-specified, well-understood type safety and bounds-checking systems with multiple interoperable implementations that higher-level languages incorporate?

No, my argument is that it is demonstrably possible to write robust C code incorporating necessary bounds and sanity checks in ways that lend themselves to both maintainability and ease of code review, which means that blanket pronouncements about C being inherently unsuitable for security-critical code are an overreach.

If you're going to write security-critical code in C, you will probably not get that code as secure as it needs to be as quickly or easily as you could in a higher-level language. Same applies to doing most things in a lower-level language compared to a higher-level one; that's kind of the point of high-level languages.

It is wrong to treat C as if it were a high-level language. I'm not advocating that anybody should do that. But given competent designers and coders, and adequate time, and adequate review, the mere fact that a security suite is written in C is neither reason nor excuse for security failures once deployed.

There are always tradeoffs between ease and speed of development, required developer competence, development cost, and required product performance. Sometimes C will end up being a better choice than the available alternatives. No one-size-fits-all computer language exists.

some drivers will lose control at a sharp curve regardless of the presence or absence of a guardrail—but the guardrail will have a major effect on the result.

Sure, and making guardrails standard on all sharp curves is a good idea.

An even better idea is to hire road designers good enough to find a way from A to B that doesn't involve sharp curves in the first place.
posted by flabdablet at 11:42 PM on April 15, 2014 [1 favorite]


Sometimes C will end up being a better choice than the available alternatives.

The argument here is that it won't, in most cases, for security critical code. Computers are really good at checking tedious proofs. We have extremely powerful computers. We should, where possible, use them to tediously prove that our code doesn't contain various types of errors. You can metaprogram your way out of boilerplate, but then you make it more difficult to use automated checking tools. So instead of using a language where metaprogramming is necessary, you might use one where it isn't. Automated proofs (including ordinary type-checking) aren't the end-all be-all, but when security is paramount, it doesn't seem optimal to avoid them when they're available.
posted by Monday, stony Monday at 5:48 AM on April 16, 2014 [3 favorites]


"Optimal" is a sharp curve in desperate need of guardrails.

This looks like some pretty optimal stuff right here, which makes the tradeoffs involved in choosing what to do about SSL in an Ada web server framework quite interesting to think about.
posted by flabdablet at 8:36 AM on April 16, 2014


Replace "optimal" with "a good practice".

Ada is a relatively unpopular language. The small team behind AWS simply doesn't have the resources to write its own SSL implementation.
posted by Monday, stony Monday at 9:05 AM on April 16, 2014


Any tool, if used improperly, can lead to significant risks. But sometimes you're in a context where you just have to juggle chainsaws. There are things you can do to make juggling chainsaws safer, but at the end of the day simply NOT juggling chainsaws has an obvious appeal.

( None of this would have happened if they had just used Perl, of course... )
posted by mikelieman at 10:29 AM on April 16, 2014


I don't know. From my reading, many of those memory-protection proofs have already been implemented in standard C libraries, compilers, and operating systems. A memory access error should crash the program when it detects a read out of bounds in many cases.

The bigger problem from my read is that OpenSSL's independently developed memory management reuses globally allocated memory in local functions. I suspect that if Heartbleed can get information from previous function calls, so can many other functions in the library. I'll also point out that you can abuse globally scoped data structures in higher-level languages with mutability as well.

OpenBSD has started a full refactor/fork of OpenSSL. They previously produced OpenSSH.
posted by CBrachyrhynchos at 4:03 PM on April 16, 2014 [3 favorites]


OpenSSL Rampage
this is basically a blog of amusing check-in comments
posted by ryanrs at 2:26 PM on April 17, 2014 [1 favorite]




oh god, my eyes are rolling so hard at the openbsd effort on openssl. For instance, they've repeatedly committed code that doesn't even compile, then sheepishly fixed it later:
Author: deraadt <deraadt>  2014-04-19 08:09:11
 
    oops, typo got into change
 
------------------------ lib/libssl/src/apps/s_socket.c ------------------------
index d52714c..b77cb90 100644
@@ -302,7 +302,7 @@ redoit:
                *host = NULL;
                /* return(0); */
        } else {
-               if ((*host = strdup(h1->h_name) == NULL) {
+               if ((*host = strdup(h1->h_name)) == NULL) {
                        perror("strdup");
                        return (0);
                }
They've accidentally lost code (is this a security check? who knows), but at least some of the times they've figured out to put it back:
Author: tedu <tedu>  2014-04-19 14:40:11
 
    release buffers fix was lost in merge. put it back.
 
------------------------- lib/libssl/src/ssl/s3_pkt.c -------------------------
index 52c48e9..60c5114 100644
@@ -986,7 +986,8 @@ start:
                        if (rr->length == 0) {
                                s->rstate = SSL_ST_READ_HEADER;
                                rr->off = 0;
-                               if (s->mode & SSL_MODE_RELEASE_BUFFERS)
+                               if (s->mode & SSL_MODE_RELEASE_BUFFERS &&
+                                   s->s3->rbuf.left == 0)
                                        ssl3_release_read_buffer(s);
                        }
                }
One of their big "accomplishments" is to reformat the code to a style called "KNF"; one feature of KNF is a space after each comma separating arguments. Unfortunately, they are using a totally brain-dead tool to do that: it even changes a lone comma inside a string literal:
Author: guenther <guenther>  2014-04-17 05:50:36

    Fix for ", " issue in jsing's knf script

--------------------------- lib/libssl/src/apps/ca.c ---------------------------
index cf6015b..50cf81b 100644
@@ -2543,11 +2543,11 @@ make_revocation_str(int rev_type, char *rev_arg)
 
        BUF_strlcpy(str, (char *)revtm->data, i);
        if (reason) {
-               BUF_strlcat(str, ", ", i);
+               BUF_strlcat(str, ",", i);
                BUF_strlcat(str, reason, i);
        }
        if (other) {
-               BUF_strlcat(str, ", ", i);
+               BUF_strlcat(str, ",", i);
                BUF_strlcat(str, other, i);
        }
        ASN1_UTCTIME_free(revtm); 

Meanwhile, they seem to be adding zero tests; one existing test they deleted because they didn't like that it reached deep into the RNG in order to force a deterministic (and thus testable) result:
Author: miod <miod>  2014-04-18 15:23:42

    ECDSA signature computation involves a random number. Remove the test trying to
    force what RAND_bytes() will return and comparing it against known values -
    I can't let you do this, Dave.
If they haven't introduced a real bug amidst all this crap, I'd be really surprised.
posted by jepler at 8:32 AM on April 20, 2014 [6 favorites]


Yeah, this OpenBSD meddling seems pretty obnoxious, although many of the changes do seem to improve the code. My impression of OpenSSL is that no amount of patching will make it better; really, someone needs to start anew.
posted by Nelson at 8:42 AM on April 20, 2014


Any thoughts on GnuTLS vs OpenSSL? I suppose BSD folk would avoid GnuTLS for licensing reasons alone.
posted by jeffburdges at 10:54 AM on April 20, 2014


If they haven't introduced a real bug amidst all this crap, I'd be really surprised.

OpenBSD lets you see the sausage being made - it's ugly. There will probably be multiple bugs introduced into the codebase... during development. The end result is auditable and audited, bugs caught and quashed, and the project usually winds up with solid code.

That said, this is more of a stunt than a serious effort - calling attention to large mistakes and neglect with project maintenance and highlighting the importance of modernization for security, stability and performance. That in and of itself makes this worthwhile.

Yeah this OpenBSD meddling seems pretty obnoxious

There's a reason OpenBSD happened, and it wasn't because Theo was cool with leaving NetBSD on his own in an amicable parting. OpenBSD is pretty much obnoxious 24/7 - it's baked into the project's DNA.
posted by Slap*Happy at 4:15 PM on April 20, 2014 [5 favorites]


Obnoxious or not, they get results. You can argue with Theo but it's hard to argue with OpenBSD's track record on security.
posted by flabdablet at 3:52 AM on April 21, 2014 [1 favorite]


I've hung out on linux-kernel and git mailing lists, and I prefer their version of sausage-making to what I see in the openbsd commit history. But it may just be a matter of taste. Theo is on record as thinking that git development workflows with branches and rebased history are simply bad; I think it makes a much better-looking sausage, and I think the end product is probably more auditable and in practice more audited since lots of branches (well, patch sets really) get discussed to death before they enter the main tree.

Wow, it just occurred to me to compare the sizes of git and openssl and their tests. I used David A. Wheeler's 'SLOCCount' to count versions from Debian Jessie. Not only is git actually smaller than openssl(!), it has about 30x as many lines devoted to testing.
OpenSSL (1.0.2~beta1-1 from debian jessie):
    total: 406,056 SLOC
    test/:   3,284 SLOC [plus symlinked files]
            <1% code devoted to testing

git (2.0~next.20140415-1 from debian jessie):
    total: 340,830 SLOC
    t/:    108,158 SLOC
          >30% code devoted to testing
posted by jepler at 5:53 AM on April 21, 2014


Looking at the OpenSSL Valhalla rampage reminded me of how grateful I am to have left C far behind.
posted by Zed at 1:22 PM on April 21, 2014




I'd love it if the only trusted SSL toolkit was GPL v3, but sadly not happening.
posted by jeffburdges at 11:37 AM on April 22, 2014


"This page scientifically designed to annoy web hipsters. Donate now to stop the Comic Sans and Blink Tags" - LOL.
posted by Slap*Happy at 12:24 PM on April 22, 2014 [1 favorite]


Companies Back Initiative to Support OpenSSL and Other Open-Source Projects. "Amazon, Cisco, Dell, Facebook, Fujitsu, Google, IBM, Intel, Microsoft, NetApp, Rackspace, Qualcomm and VMWare have each pledged $100,000 a year over the next three years to the Core Infrastructure Initiative, the effort organized by the Linux Foundation ... The Core Infrastructure Initiative will start with OpenSSL."
posted by Nelson at 1:33 PM on April 24, 2014 [2 favorites]


Megabucks vs. Megaegos - begun, the OpenSSL Forkwars have.

(I'd bet on LibreSSL, simply because Google will want to code it in Go, Microsoft will insist on C#, IBM will demand C++, Amazon says Java or nothing, and everyone will try not to laugh when Facebook says maybe they should all use Hack. While they're sorting that out, Theo will be going all leather-daddy on his team to finish up the manpages before the twice-yearly release date.)
posted by Slap*Happy at 7:40 PM on April 24, 2014


But security experts and even the open-source movement’s biggest advocates acknowledge that Heartbleed revealed that some crucial open-source systems are underfunded and suffering from a lack of resources.

I'd bet on LibreSSL as well, because funding is not actually the problem. Funding has never fixed and will never fix an incompetence issue. It takes good coders to make good code, and no amount of ISO9001 everybody-is-an-interchangeable-cog bullshit is going to change that.
posted by flabdablet at 10:51 PM on April 24, 2014


iPad Fever Is Officially Cooling woot! Good riddance! :)
posted by jeffburdges at 2:18 PM on April 25, 2014


They made a mistake in expecting the iPad to sell like the iPhone... A tablet is a PC replacement or peripheral, and will likely be on the same upgrade cycle... Or worse, on a Mac upgrade cycle. Mac users keep their rigs forever.

Significant new features may break that cycle - a test of Apple's ability to innovate post-Jobs.
posted by Slap*Happy at 4:33 PM on April 25, 2014


...um, did you mean that for a different thread maybe, jeffburdges?
posted by Zed at 4:46 PM on April 25, 2014




This thread has been archived and is closed to new comments