The Guardian's Readers' Editor is finally back from vacation
July 8, 2017 7:31 PM   Subscribe

Over six months ago The Guardian published a story purporting to expose a "backdoor" in WhatsApp. The EFF wrote that "it's inaccurate to the point of irresponsibility to call this behavior a backdoor", and over seventy cryptographers signed a scathing open letter explaining the huge flaws in the Guardian's story, but they were met with almost total silence. Meanwhile, even Teen Vogue managed to get the story right. MeFi's own Maciej Cegłowski (fomenter of online drama, who runs the bookmarking site Pinboard on the side) wouldn't let it go, and now finally the Guardian has issued a non-retraction retraction where they decided to "amend" the article instead. Maciej is still not impressed, but unfortunately the damage has already been done. Much like the debunked vaccine/autism link, this story could have literal life-and-death consequences by casting doubt on the Signal protocol and pushing dissidents and leakers to use much more insecure communications like web forms and email.
posted by karlshea (43 comments total) 26 users marked this as a favorite
 
This was so frustrating to watch happen. I'm glad they finally said something, but it's completely ridiculous that the entire security community told them they were wrong and it took six months to "correct."
posted by karlshea at 7:39 PM on July 8 [7 favorites]


No need to "even" on Teen Vogue, they're one of the best things going right now.
posted by The Hamms Bear at 7:44 PM on July 8 [57 favorites]


Why not Jabber? /zoidberg
posted by anthill at 7:45 PM on July 8


Attention Guardian: this is why I do not pay you.
posted by aramaic at 8:06 PM on July 8 [10 favorites]


No need to "even" on Teen Vogue, they're one of the best things going right now.
That's true, I waffled on putting it there but I'm not sure how well recognized that is yet. And it felt more dramatic. OTOH, the fact that Teen Vogue can check their stories better than The Guardian is kind of a problem.
posted by karlshea at 8:07 PM on July 8 [10 favorites]


“The proposition is that this condition: backed up messages, combined with someone colluding with Facebook, WhatsApp to ‘fake’ the ‘person has a new phone’ condition, can lead to the backed-up messages being re-encrypted and sent to the new, fake or colluded phone.” Basically, what the Guardian is reporting as a “backdoor” is actually an already well-known way to exploit encrypted messaging systems that is extremely difficult to pull off.
Faking the 'person has a new phone' condition doesn't sound particularly challenging for, say, government agencies. Or a random teenager with a telephone, a grudge, and some free time.

"You lost your key, so your don't get to decrypt messages that were sent to that key" seems like one of the most basic and fundamental expectations one could have for an encryption system. At the very least, doing anything else ought to be an opt-in choice. That whatsapp defaults to sending old messages to new keys is, at very least, well worth pointing out.
posted by eotvos at 8:36 PM on July 8 [2 favorites]


Teen Vogue has oddly become something it entirely wasn't even maybe 5 years ago. I don't go to their website unless someone links an article, but what I've read when I do go there has been high-quality, well-done journalism.

It kind of gives me hope for the future. Especially because the girls being well-informed today are going to be the women leading us in the future.
posted by hippybear at 8:37 PM on July 8 [6 favorites]


Faking the 'person has a new phone' condition doesn't sound particularly challenging for, say, government agencies.
If evading gov't agencies is your threat model you'd probably turn two-step authentication on and it wouldn't be quite so easy.
"You lost your key, so your don't get to decrypt messages that were sent to that key" seems like one of the most basic and fundamental expectations one could have for an encryption system. At the very least, doing anything else ought to be an opt-in choice. That whatsapp defaults to sending old messages to new keys is, at very least, well worth pointing out.
It's worth pointing out, but I think this is a good response to that issue from moxie (co-author of the Signal protocol) on Hacker News:
Key change notifications are off by default in WhatsApp. That's probably going to be a fundamental limit of any application that serves billions of people from many different demographics all over the world.

Even if they were on by default, a fact of life is that the majority of users will probably not verify keys. That is our reality. Given that reality, the most important thing is to design your product so that the server has no knowledge of who has verified keys or who has enabled a setting to see key change notifications. That way the server has no knowledge of who it can MITM without getting caught. I've been impressed with the level of care that WhatsApp has given to that requirement.

I think we should all remain open to ideas about how we can improve this UX within the limits a mass market product has to operate within, but that's very different from labeling this a "backdoor."
And it remains true that WhatsApp/Signal are the best option at this point for end-to-end encrypted communication for their target market; the alternatives aren't even close.
posted by karlshea at 8:53 PM on July 8 [6 favorites]


Between this and delicious, Ceglowski may have the highest success rate in history among quixotic Poles.
posted by praemunire at 9:02 PM on July 8 [14 favorites]


This is what I don't get. So your iOS device has a secure enclave in it which basically has a random 256-bit key in it that cannot be accessed by anything but the Secure Enclave. If you have an iPhone and an iPad you can basically have the Signal key encrypted with a UID derived key on each device. After some sort of mutual authentication you could move the Signal key from one device to another and reentangle the key on the new device with the new UID. As long as you had one working device you could move the Signal key to a new device. No reencryption required.

The only responses the authorities would have would be to effectively break AES-256 or to brute-force a PIN when the Secure Enclave has a rather punishing backoff timer.
posted by Talez at 9:11 PM on July 8 [1 favorite]


And on Android devices you can fall back to PBKDF2 but it won't be nearly as secure as having a full blown cryptoprocessor guarding the key. But then again if you're looking for secure you won't be on Android to begin with.
posted by Talez at 9:12 PM on July 8 [3 favorites]


What I get from the row is that people do a terribly terrible job in bringing PKI to the common user, even in the absence of a programming error. I don't mean they're terrible people. I mean the job is hard and it doesn't pay.

Mix in some bugs (for example, using a key before verification, in the WhatsApp case), and it's ripe for confusion and conspiracy theories.

Just how many Signal users verify their key fingerprints? And how many of them know the reason why it is necessary to verify the fingerprints? We don't even have a satisfactory solution to that, which kinda renders everything else not so relevant. Those WhatsApp users have never had much security to begin with. Of course, there's TOFU, but then again how many make the conscious and free decision that they're using the TOFU model, accepting its limitations, before they do the full key verification?

Moxie got it right. Using PKI is cumbersome. The learning curve has a cost, and users don't want to pay it. They're not lazy or stupid. They have limited attention and time, and their (implied) threat model doesn't warrant that kind of mental investment. That's part of the reason he hates OpenPGP, because PGP works on the assumption of perfectly informed users. It's self-selecting and self-limiting.
posted by runcifex at 9:22 PM on July 8 [3 favorites]


The non-retraction retraction was actually pretty good until the chaff at the end about Facebook's servers being a black box.

The entire point of properly designed end-to-end encryption is that having an arbitrary set of attackers in the path between the ends shouldn't matter. The Signal protocol that WhatsApp is built on has been given enough auditing to convince me that it does indeed amount to properly designed end-to-end encryption. The key management tradeoff beaten up by the Guardian doesn't change that.

This is what I don't get. So your iOS device has a secure enclave in it which basically has a random 256-bit key in it that cannot be accessed by anything but the Secure Enclave. If you have an iPhone and an iPad you can basically have the Signal key encrypted with a UID derived key on each device. After some sort of mutual authentication you could move the Signal key from one device to another and reentangle the key on the new device with the new UID. As long as you had one working device you could move the Signal key to a new device. No reencryption required.

The critical thing here is the "some sort of mutual authentication". Apple's secure enclave offers a level of security for direct attacks against the endpoints that devices without comparable features obviously don't have, but from the point of view of attacks against messages in transit, or against key management protocols that don't involve direct communication between physically adjacent endpoints, it makes no difference.

Mix in some bugs (for example, using a key before verification, in the WhatsApp case), and it's ripe for confusion and conspiracy theories.

Calling that a "bug", to my way of thinking, is as unhelpful as calling it a "backdoor". It's a security vs usability tradeoff. And as multiple people (including you!) have already pointed out, making the default settings favour usability is part of what's driven the uptake of WhatsApp to the point where reliable end-to-end encryption that can be further hardened if desired is now the new normal. That's an overall security win.
posted by flabdablet at 9:34 PM on July 8


The critical thing here is the "some sort of mutual authentication". Apple's secure enclave offers a level of security for direct attacks against the endpoints that devices without comparable features obviously don't have, but from the point of view of attacks against messages in transit, or against key management protocols that don't involve direct communication between physically adjacent endpoints, it makes no difference.

That's why you perform the mutual authentication on the devices themselves. For instance, when I have to set up iCloud on a new device I have to authorize the new device on an existing device and generate a second-factor PIN to be used on the new device to mutually authenticate. That authorization must be done on an unlocked device, which means you need to break a six-digit PIN with a backoff timer that increases geometrically.

This is what Signal implementations on iOS need.
posted by Talez at 9:43 PM on July 8 [1 favorite]


So let's say you drop your iPhone in a lake. You get a new iPhone. You pull out your iPad, you open Whatsapp, Whatsapp says "hey a new device! to transfer keys tap authorize" and it gives you a number to put into the iPhone's Whatsapp. This uses UID derived public and private keys on each device to transfer the Signal master key to the new device and the new device immediately entangles the Signal master key with its UID key.

If the government steals your SIM card, it still pops up on Whatsapp on your iPad saying "new device!" at which point you go "what the fuck?" and click the button labelled "NOT ME NOT ME NOT ME!" and change all your passwords to long lines from Shakespeare plays.
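Something like this toy sketch is all the approval logic needs (invented names throughout; this is not Apple's or WhatsApp's actual API, just the single-use-PIN idea):

```python
# Toy sketch of approve-from-an-existing-device enrollment: a trusted,
# unlocked device generates a short-lived, single-use PIN, and only a
# correct PIN would gate the key hand-off to the new device.
import secrets


class DeviceEnrollment:
    def __init__(self):
        self._pin = None  # exists only between authorize and enroll

    def authorize_on_old_device(self) -> str:
        # Generated and displayed on an already-trusted, unlocked device.
        self._pin = f"{secrets.randbelow(10**6):06d}"
        return self._pin

    def enroll_new_device(self, entered_pin: str) -> bool:
        # Constant-time compare; the PIN is burned whether or not it matched.
        ok = self._pin is not None and secrets.compare_digest(entered_pin, self._pin)
        self._pin = None
        return ok  # True would trigger the enclave-to-enclave key transfer
```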
posted by Talez at 9:49 PM on July 8 [1 favorite]


The Guardian's stubbornness on this issue is totally bizarre. It's not as though the original article was a piece of prizewinning investigative journalism. What were they protecting by digging their heels in?
posted by cichlid ceilidh at 9:56 PM on July 8 [4 favorites]


and change all your passwords to long lines from Shakespeare plays.


That's the second corpus to go into the table.

First was the King James Bible.
posted by the man of twists and turns at 10:08 PM on July 8 [5 favorites]


I think it's worth quoting the following part of OWS's commentary in full, because of its exceptional relevance.
The only question it might be reasonable to ask is whether these safety number change notifications should be "blocking" or "non-blocking." In other words, when a contact's key changes, should WhatsApp require the user to manually verify the new key before continuing, or should WhatsApp display an advisory notification and continue without blocking the user.

Given the size and scope of WhatsApp's user base, we feel that their choice to display a non-blocking notification is appropriate. It provides transparent and cryptographically guaranteed confidence in the privacy of a user's communication, along with a simple user experience. The choice to make these notifications "blocking" would in some ways make things worse. That would leak information to the server about who has enabled safety number change notifications and who hasn't, effectively telling the server who it could MITM transparently and who it couldn't; something that WhatsApp considered very carefully.

Even if others disagree about the details of the UX, under no circumstances is it reasonable to call this a "backdoor," as key changes are immediately detected by the sender and can be verified.
I completely agree with the highlighted parts. What I don't understand is why the blocking notification, cited in the 2nd paragraph, must be known to the server.

IF the block-and-notify mechanism can be implemented completely on the client side, but deactivated by default, that's kinda like Firefox not showing an exception for self-signed certs before you order it to go on. That's pretty surprising, considering that everyone else's approach is to block and alert first, and then decrypt only under explicit user command. Examples: OpenSSH, Firefox. That's why I believe the design was an error. But I do admit this is my opinion based on an IF.

Off topic: Strangely the original Gruadain report (and the security researcher's blog cited) seems to be condemned to Internet damnatio memoriae. Must search harder.
posted by runcifex at 10:12 PM on July 8


"Long lines from Shakespeare plays" aren't strong passphrases.
posted by runcifex at 10:14 PM on July 8 [1 favorite]


eotvos: ""You lost your key, so you don't get to decrypt messages that were sent to that key" seems like one of the most basic and fundamental expectations one could have for an encryption system. "

And then you get the uptake of PGP. It doesn't matter how secure your system is against these sort of attacks if no one is using it.

Talez: "So let's say you drop your iPhone in a lake. You get a new iPhone. You pull out your iPad, you open Whatsapp, Whatsapp says "hey a new device! to transfer keys tap authorize" and it gives you a number to put into the iPhone's Whatsapp. This uses UID derived public and private keys on each device to transfer the Signal master key to the new device and the new device immediately entangles the Signal master key with its UID key."

Which is great if you currently have multiple Apple devices that have compatible security features; not so much otherwise.
posted by Mitheral at 11:17 PM on July 8 [2 favorites]


What I don't understand is why the blocking notification, cited in the 2nd paragraph, must be known to the server.

I'm not familiar enough with Signal to have worked through the protocol in detail for a definitive answer, but the usual reason for this kind of consideration is timing-based side channel attacks.

So let's say you drop your iPhone in a lake. You get a new iPhone. You pull out your iPad, you open Whatsapp, Whatsapp says "hey a new device! to transfer keys tap authorize" and it gives you a number to put into the iPhone's Whatsapp. This uses UID derived public and private keys on each device to transfer the Signal master key to the new device and the new device immediately entangles the Signal master key with its UID key.

This scenario assumes the existence of a necessary and sufficient master key stored somewhere other than on the iPhone that got dropped in the lake, which defeats the whole purpose of end-to-end public-key encryption.
posted by flabdablet at 11:52 PM on July 8


What I don't understand is why the blocking notification, cited in the 2nd paragraph, must be known to the server.

Because while the server can't read the messages, it can see message transits, and knows about both message backlogs and "re-send message with new key" notifications.

The police arrest X and make a show of smashing their phone, holding them overnight, then letting them go on their way. In the time it takes X to get a new SIM/phone and get back online, there are 2 messages pending from Y and 3 messages pending from Z. A new device comes online saying "I'm X". Server notifies Y and Z to re-send the messages with the new X key. The next time Y comes online it has 3 messages for new-X, the next time Z comes online it has 1 message for new-X.

If key-change notifications are off by default, but are blocking when turned on, then it can be inferred that Y probably does not have notifications enabled and can be safely MITM'd, but Z does have notifications enabled and cannot be safely MITM'd.
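That inference could be sketched as (toy code, purely illustrative; nothing here is from Signal's actual server):

```python
# Toy sketch of the metadata inference above: the server never reads
# message contents, but it can compare how many messages were queued
# for old-X against how many each contact re-sends to new-X after the
# key change.
def infer_notification_setting(pending_before: int, resent_after: int) -> str:
    if resent_after >= pending_before:
        # Everything queued (and possibly new traffic) was silently
        # re-encrypted and re-sent: the client didn't block.
        return "probably no blocking notifications: safe to MITM"
    # Some queued messages were held back pending user verification.
    return "probably blocking notifications enabled: MITM would be noticed"


# In the scenario above: Y had 2 queued and 3 arrive for new-X;
# Z had 3 queued and only 1 arrives.
```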
posted by russm at 12:50 AM on July 9 [2 favorites]


(took me a while to figure out that MITM = man in the middle- posting in case anyone else is wondering!)

Really interesting topic, thanks for the post!
posted by freethefeet at 12:55 AM on July 9


> Because while the server can't read the messages, it can see message transits, and knows about both message backlogs and "re-send message with new key" notifications.
Thanks for the explanation. Which also explains why the comparison with HTTPS or SSH doesn't hold. For the Signal IM protocol, there is an intermediate server managing the sessions, while for HTTPS or SSH this doesn't apply. In this light it appears sane to have a non-blocking notification inserted like a message box.
posted by runcifex at 1:09 AM on July 9 [1 favorite]


This scenario assumes the existence of a necessary and sufficient master key stored somewhere other than on the iPhone that got dropped in the lake, which defeats the whole purpose of end-to-end public-key encryption.

No, it ensures that the Signal master key is only controlled by the secure enclave of devices under your control. Entangling the master key with the UID key on an iOS device will make getting the master key virtually impossible without the device being unlocked.
posted by Talez at 4:12 AM on July 9


No, it ensures that the Signal master key is only controlled by the secure enclave of devices under your control.

And the function of that master key would be what?
posted by flabdablet at 5:06 AM on July 9


The encryption and decryption of the encrypted messages?
posted by Talez at 5:20 AM on July 9


It's entirely possible I'm missing something basic - in which case I welcome corrections - but after a bit of reflection, it seems to me there's a huge gulf between actual key-verification on one hand, and believing an external source that someone has changed keys without any user interaction on the other hand. Saying that whatsapp is the only viable, popular solution is entirely true, but also presents a choice between extremes that don't span anything like the range of possibilities whatsapp could employ if they wanted to.

Expecting the average person who wants to arrange a meetup at a bar in Mexico City to attend a key signing party with their contacts is nuts. (And has, undoubtedly, been part of the downfall of previous encrypted communications schemes.)

But trusting a third party to verify that a new key is in use for future communications, much less past re-transmissions of old messages, also seems nuts. Within the realistic threat model that an encrypted communications system should consider, someone who has access to more than one of your devices and the ability to convince an online service rep that you've got a new phone doesn't seem terribly far-fetched.

Saying, "I met this person casually and have no reason not to believe she isn't who she says she is, so I accept her key without verification" is one thing. It makes perfect sense. I do it all the time.

But if, six months later, her key changes without notice, being given the opportunity to phone her up and ask, "is this something you did?" before sending messages seems like a pretty basic security precaution. A "this key has changed, something nasty might be going on, are you sure you want to trust the new key" message isn't exactly hard to implement.
posted by eotvos at 8:01 AM on July 9


But if, six months later, her key changes without notice, being given the opportunity to phone her up and ask, "is this something you did?" before sending messages seems like a pretty basic security precaution. A "this key has changed, something nasty might be going on, are you sure you want to trust the new key" message isn't exactly hard to implement.

The problem that The Guardian was talking about was that the key change was being accepted without prompting or question. So for instance if the authorities took Alice's phone number, put it on a new SIM card, put that SIM card into a handset, and then installed Whatsapp, Whatsapp would send the new public key to Bob, tell Bob's copy of Whatsapp to re-encrypt everything with the new public key, and Bob's Whatsapp would happily oblige, dropping new copies of everything in flight, or worse, in history, down to the now compromised phone number.
posted by Talez at 8:08 AM on July 9


"Long lines from Shakespeare plays" aren't strong passphrases.

Great. Now we all have to start reading fucking David Mamet.
posted by Big Al 8000 at 8:17 AM on July 9 [4 favorites]


But if, six months later, her key changes without notice, being given the opportunity to phone her up and ask, "is this something you did?" before sending messages seems like a pretty basic security precaution. A "this key has changed, something nasty might be going on, are you sure you want to trust the new key" message isn't exactly hard to implement.

Yes, and this is there, as an opt-in. It doesn't automatically drop in-flight messages, as that leaks to the service provider which accounts have opted-in to key change notifications.

dropping new copies of everything in flight, or worse, in history, down to the now compromised phone number

No, not in history, just messages that had been in flight and were as-yet undelivered.


WhatsApp have explicitly made tradeoffs to drive bulk adoption (hence increasing the size of the user's anonymity set) and reduce the avenues by which a malicious server can glean info about endpoints, at the cost of making some targeted attacks easier. Unfortunately there's no perfect design that protects every use case, and this is where they've decided to stick the pin.
posted by russm at 8:22 AM on July 9 [2 favorites]


Great. Now we all have to start reading fucking David Mamet.

I'm fairly certain he prefers "David fucking Mamet".
posted by hippybear at 9:02 AM on July 9 [3 favorites]


And the function of that master key would be what?
posted by flabdablet at 5:06 AM on July 9


The encryption and decryption of the encrypted messages?
posted by Talez at 5:20 AM on July 9



The defining feature of public key cryptography is that there is no master key. Sending an encrypted message to Alice's device needs only that device's public key. Public keys are not secrets, at least not for the purpose of securing encrypted messages. Bob storing Alice's device's public key in his own Apple device's secure enclave, rather than in plain text, is protective only against somebody with physical access to Bob's device finding out who Bob has probably been communicating with.

Alice's device's private key is a secret and should itself be stored on Alice's device and on that device alone; in particular, it should not be part of any cross-device sync facility. Precautions should be taken to make access to that key outside that device vanishingly unlikely; if it's an Apple device, the secure enclave would certainly be a good place to keep it.

In the scenario you initially outlined, Alice's phone gets dropped in the lake. For what follows I'm going to assume that the WhatsApp private key stored on that phone is now permanently lost.

Bob now tries to send a message to Alice's phone. His phone encrypts the outgoing message with Alice's phone's public key before handing it off to server for delivery. This is end-to-end encryption, so the server has no way to inspect the contents of that message; its plaintext exists only inside Bob's phone.

But Alice's phone is at the bottom of the lake and is never going to receive another message. The delivery server can't know that; all it knows is that the destination device is offline. So it queues Bob's encrypted message until such time as it can deliver it (and notify Bob that it's been delivered).

Meanwhile, Alice buys a new phone. Being a new device, it gets a completely new pair of keys generated when WhatsApp is initially installed on it. There is nothing to transfer from her old phone to the new one, which is good because the old one is sunk.

Alice's new phone now contacts the WhatsApp servers and uploads its new public key. The delivery server sees that Bob's phone has got messages queued for delivery to Alice's, and it also knows that those were encrypted with a different public key from the one Alice's phone now has, meaning that Alice's phone will not be able to decrypt them with its new private key. So it contacts Bob's phone and sends it Alice's phone's new public key. Bob's phone then re-encrypts the plain text of the original message (which it's still holding, never having received a delivery notification) and sends the result back to the server, which is now able to deliver it to Alice's new phone.

The contentious point here is all about exactly what Bob's phone should do at the point of finding out that Alice has a new public key. It has nothing to do with any transfer of secret keys from Alice's old device to a new one because that never happens.
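The sender-side behaviour above can be modelled in a few lines (toy code; the encrypt step is a string stand-in, and none of these names come from the actual WhatsApp client):

```python
class SenderQueue:
    """Toy model of the sender-side flow described above: the plaintext
    of undelivered messages is retained until a delivery notification
    arrives, so it can be re-encrypted if the recipient's public key
    changes. The encrypt step is a string stand-in, not real crypto."""

    def __init__(self, recipient_key: str):
        self.recipient_key = recipient_key
        self.pending = {}   # msg_id -> plaintext, undelivered only
        self._next_id = 0

    def send(self, plaintext: str):
        msg_id = self._next_id
        self._next_id += 1
        self.pending[msg_id] = plaintext
        return msg_id, self._encrypt(plaintext)

    def ack_delivered(self, msg_id: int):
        self.pending.pop(msg_id, None)  # plaintext no longer needed

    def on_key_change(self, new_key: str):
        # WhatsApp's default: silently re-encrypt whatever is in flight.
        # Only undelivered messages are affected; history is never re-sent.
        self.recipient_key = new_key
        return [(m, self._encrypt(p)) for m, p in sorted(self.pending.items())]

    def _encrypt(self, plaintext: str) -> str:
        return f"enc[{self.recipient_key}]({plaintext})"  # stand-in
```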
posted by flabdablet at 9:02 PM on July 9 [1 favorite]


The contentious point here is all about exactly what Bob's phone should do at the point of finding out that Alice has a new public key. It has nothing to do with any transfer of secret keys from Alice's old device to a new one because that never happens.

If I understand Talez correctly, they're suggesting that keypairs should be per-user not per-device, and shared "securely, somehow" between all of a user's devices. This makes enrolling a new device easy as long as you still have at least one device with the secret key in it, but also opens a bunch of new attack vectors around the fact that key rotation never happens.
posted by russm at 9:26 PM on July 9


It's worth bearing in mind that WhatsApp was designed as a reliable general-purpose message carrier first and foremost, and that end-to-end encryption is an added feature that most people really just don't care about. WhatsApp is an alternative to SMS and email, neither of which have any meaningful inbuilt privacy whatsoever.

Given that the majority of WhatsApp users for some decades to come will be people who use it purely because it's what their friends use and neither think nor care about privacy, the default settings need to make the underlying privacy protection mechanisms work completely invisibly. Because if WhatsApp were to start refusing to deliver messages until the users had done a little endpoint verification dance, this would annoy enough people to dent its uptake quite severely.

That means that the only parts of the privacy mechanism that can possibly be on by default are encryption and key distribution. Key verification has to be off by default, or it's going to cause interruptions for people that will simply be annoyed by them.

But all that privacy infrastructure is now built in and available for people who do care about privacy. Turn on the key verification stuff, and all of a sudden you're doing messaging with some very robust security guarantees. Not only that, but you retain the option of setting up your key verification in a way that renders your traffic indistinguishable from that belonging to the vast majority who never cared to set it up at all, at the cost of some relaxation in those guarantees.

If I understand Talez correctly, they're suggesting that keypairs should be per-user not per-device, and shared "securely, somehow" between all of a user's devices.

That means that endpoints become able to spoof each other, which is a considerable reduction in the strength of the security guarantees that the protocol can provide.
posted by flabdablet at 9:31 PM on July 9 [1 favorite]


Incidentally: as I said above, I haven't dug into the Signal protocol to find out exactly how it does any particular thing. But there are standard ways of doing standard crypto tasks, and I would expect that the way Signal handles a requirement to deliver any given message to some arbitrary collection of Alice's devices, rather than to one of them in particular, is done in the usual way - which doesn't require an entire message to be duplicated in order to be decryptable via multiple private keys, and doesn't require any of Alice's devices to share private keys.

How this works is that the message itself is encrypted just once, using a secret message encryption key (MEK) generated by a cryptographically secure pseudo-random number generator on the sending device, for the purpose of encrypting that message and that message only.

The MEK is a symmetric key, meaning that the same key is required for decryption as was used for encryption. Symmetric encryption and decryption are way less computationally expensive than comparably attack-resistant public-key encryption for messages of any decent length.

After adding some self-checking overhead allowing a decrypted MEK to be identified as valid, and perhaps some more random data to frustrate possible attacks against multiply-encrypted identical plaintexts, the MEK is then public-key encrypted multiple times: once with each of the public keys whose corresponding private keys will eventually be used to decrypt it.

The encrypted MEKs are then sent bundled with the MEK-encrypted message. If the receiver holds a private key allowing it to decrypt at least one encryption of the MEK, it can then use that to decrypt the message proper.

MEKs are much shorter than typical messages even including all the associated overhead, so bundling quite a lot of them with any given message will not generally make it bigger by an amount that actually matters.
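As a toy sketch of that bundling structure (the "wrap" step here uses a symmetric XOR keystream as a stand-in for the public-key encryption a real system would use, so this is the shape of the scheme, not an implementation):

```python
# Toy sketch of MEK bundling: the message body is encrypted exactly
# once with a fresh per-message key, and one short wrapped copy of
# that key is attached per recipient device.
import hashlib
import secrets


def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR against a SHA-256-derived keystream.
    # XOR is its own inverse, so the same call encrypts and decrypts.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))


def encrypt_for_devices(plaintext: bytes, device_keys: dict) -> dict:
    mek = secrets.token_bytes(32)  # fresh key for this message only
    return {
        "body": _keystream_xor(mek, plaintext),  # message encrypted once
        # one wrapped MEK per recipient device (public-key in reality)
        "meks": {dev: _keystream_xor(k, mek) for dev, k in device_keys.items()},
    }


def decrypt_on_device(bundle: dict, device: str, device_key: bytes) -> bytes:
    mek = _keystream_xor(device_key, bundle["meks"][device])
    return _keystream_xor(mek, bundle["body"])
```

The payload is never duplicated: only the 32-byte MEK is wrapped once per device.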
posted by flabdablet at 2:53 AM on July 10


Having now read a little bit more about Signal, it turns out that MEKs don't actually get sent along with the messages they encrypt; there's a more complicated method used for establishing a shared MEK to be used by any given pair of senders and receivers, in order to allow for perfect forward secrecy (where an attacker who actually does manage to retrieve a private key will still not be able to decrypt any messages received by that key's owner in the past).

The central point, though, remains: it's completely feasible to send messages that are decryptable via any of several private keys, without any requirement that all the receivers share the same private key, and without requiring duplication of message payloads during encryption.
posted by flabdablet at 3:31 AM on July 10


  What were they protecting by digging their heels in?

Since they don't have crypto experts on staff, and as their initial not-quite-getting-it correction didn't quiet the shouty tech folks, they just moved on to other things and quietly hoped it would go away. There have also been stories more important to the majority of their readers over the last six months, like “Hey, our country is completely falling apart politically”.
posted by scruss at 6:58 AM on July 10 [1 favorite]


I only understand about 20% of what's been posted so far so maybe I've missed something, but why is the notification off by default? The arguments for sending the message regardless I can understand, but I don't see why it at least doesn't notify you. After all it notifies by default when you start using encryption.
posted by the long dark teatime of the soul at 8:34 AM on July 10


Again, this is to keep things palatable for the vast majority of the user base for whom message privacy is just not a concern. If all you want to do is be able to remind your spouse to pick up milk on the way home, the last thing you'll want to use for that is an app that occasionally pops up vaguely threatening but completely non-actionable (given your present level of understanding) warnings.

The underlying privacy mechanisms are in place for everybody whether they've actually chosen to turn on the settings required to make best use of them or not, in order that people for whom privacy is important can't be selectively filtered out for extra scrutiny simply on the basis of having turned them on in WhatsApp (or even more easily by having chosen a more security-oriented app like Signal instead).
posted by flabdablet at 8:43 AM on July 10


I guess what I was getting at is that you could make precisely the same argument for not showing the original notification when encryption turns on. Why notify when it turns on if you're not going to warn when it's potentially compromised? Silent for both or silent for neither I can understand, but it seems pointless to tell someone it's been turned on if you don't tell them it isn't working anymore.
posted by the long dark teatime of the soul at 9:40 AM on July 10


Fundamentally different messages. "Hey look here is a shiny new feature that makes our product better aren't we clever yes we are" appearing once is not anything like "Hey so something bad you don't understand might have just happened to you" appearing at random.
posted by flabdablet at 10:10 AM on July 10


"Hey so something bad you don't understand might have just happened to you" appearing at random.
This is exactly it. 99% of WhatsApp users have no idea what any of this all means and if they saw those messages they would be confused. That's why it's been a success.

The same is also true of iMessage, which is also end-to-end encrypted but in a different way (and after having the details explained in a talk at Blackhat the mechanism seems pretty trustworthy). None of the users have to know anything about encryption at all but it's always there working for them.
posted by karlshea at 11:09 AM on July 10



