Circumvention of Bell's Throttle Monster: three alternatives.
January 17, 2009 11:01 AM   Subscribe

On November 20th, the CRTC made a landmark ruling that defeated the CAIP's plea to stop Bell's conjuration of the Deep Packet Throttle Monster. However, all was not lost, as consumers of Bell's copper pipes can take solace in three recent developments that aim to reclaim the pipes for We, the little guy. Hooray!

What is currently known about how to beat the beast:

1) In a letter from research organization PerVices to the CRTC, details surfaced on how to bypass the throttle; they have since been put into concise instructions, with many users reporting success.

2) Developers of the uTorrent/BitTorrent clients have, in the interim, been working on a vast changeover in the way torrenting operates: switching the protocol from primarily TCP to entirely UDP (a protocol called uTP), which in effect evades the traffic-shaping process entirely (a bare-bones sketch of the transport shift follows this list). With this, many success stories abound. (You can download a copy of uTorrent 1.9a here)

3) Finally, and perhaps as the most intensive last-ditch effort, users can obtain a router capable of being flashed with the Linux-based, modified "Tomato" firmware, which uses the MLPPP protocol to circumvent the deep-packet inspection process. (Note: you don't actually need to run Linux to use it; also note: your ISP must support MLPPP.)
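(Re: item 2 above) As a bare-bones illustration of what the shift from TCP to UDP means at the transport level, here is plain UDP datagram traffic in Python. This is not uTP itself, which layers its own reliability and congestion control on top of datagrams like these; it only shows the transport change described above.

    import socket

    # Plain UDP: no connection setup and no TCP stream for a shaper keyed
    # on TCP flows to latch onto. uTP adds its own reliability and
    # congestion control on top of datagrams like this one.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 6881))   # 6881 is a conventional BitTorrent port

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"piece of a torrent payload", ("127.0.0.1", 6881))

    data, addr = receiver.recvfrom(2048)
    print("got %d bytes from %s" % (len(data), addr))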
posted by tybeet (28 comments total) 11 users marked this as a favorite
 
Is there any reason for internet traffic not to be encrypted in this day and age?
posted by delmoi at 11:14 AM on January 17, 2009


Isn't uTorrent the default BT client now?
posted by Pope Guilty at 11:42 AM on January 17, 2009


Isn't uTorrent the default BT client now?

BitTorrent Inc bought out uTorrent, but they remain separate clients (with very similar functionality). There is still Azureus, which is pretty popular (though much less so now that it's Vuze), but I believe uTorrent is the reigning champion of popularity, if that's what you mean by default.
posted by tybeet at 11:51 AM on January 17, 2009


delmoi- because the processing overhead is still pretty steep. And there is no practical end-to-end encryption solution that could encompass everyone's needs.

And since things such as Deep Packet Inspection are happening by the man in the middle, quite literally, it would not be entirely hard to spoof an SSL certificate of the website you think you are going to and then put a fake proxy in between.

The box in the middle appears to you as https://yourbank.com, where it decrypts your session, logs it, and re-encrypts it as a proper session to the actual bank. And since the same people control the routers you are using to connect to the internet, they don't even need to hack DNS, spoof an SSL certificate, or anything else: they can just route your traffic to a server that looks and feels like your bank's, but actually isn't.
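For what it's worth, one coarse check against that kind of swap is to compare the certificate fingerprint the far end is presenting right now with a copy you obtained out of band (over the phone, from a different network, etc.). A minimal Python sketch; the hostname and the known-good fingerprint here are hypothetical placeholders:

    import hashlib
    import ssl

    HOST = "yourbank.example"                             # hypothetical host
    EXPECTED_SHA256 = "known-good-fingerprint-goes-here"  # obtained out of band

    # Grab whatever certificate the far end actually presents right now...
    pem = ssl.get_server_certificate((HOST, 443))
    der = ssl.PEM_cert_to_DER_cert(pem)

    # ...and compare its fingerprint to the copy you already trust.
    fingerprint = hashlib.sha256(der).hexdigest()
    if fingerprint != EXPECTED_SHA256:
        print("fingerprint mismatch: something may be sitting in the middle")
    else:
        print("fingerprint matches the out-of-band copy")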

So at the basic level, encryption and privacy on the internet boil down to trust. Do you really trust that your internet traffic is being routed properly? Do you trust that your DNS doesn't have a poisoned cache? And then what authority moderates the trust? Commercial or non-commercial entities?

You could use synchronized real-time authentication tokens (like these), as only holders of the tokens at both ends of the communication would know the magical number to use. But do you want to have a unique keyfob that gives you a generated number for every website you visit, or every server you communicate with?
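For the curious, the codes those fobs generate work roughly like the standard HMAC-based one-time-password construction (RFC 4226) driven by a time-derived counter: both ends share a secret and a clock, so both compute the same short-lived number. A sketch with a made-up secret (not any particular vendor's actual algorithm):

    import hashlib
    import hmac
    import struct
    import time

    def one_time_code(secret, step=30, digits=6, now=None):
        # Both ends share the secret and roughly synchronized time,
        # so both arrive at the same code for the current time step.
        counter = int((time.time() if now is None else now) // step)
        msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
        value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(value % 10 ** digits).zfill(digits)

    shared_secret = b"made-up shared secret"          # hypothetical
    print(one_time_code(shared_secret))               # same value at both ends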
posted by mrzarquon at 11:57 AM on January 17, 2009 [1 favorite]


delmoi writes "Is there any reason for internet traffic not to be encrypted in this day and age?"

Besides the processing costs at both ends, there's the consideration that you can encrypt or compress, but not both. Compression of text data can cut bandwidth by 90%, decreasing cost and increasing transmission speed. During the election, I put together something to transmit large text files (html files 10 or 20 MB in length); compression was the only way to get this stuff out given my bandwidth window constraints, which ruled out encrypting.
posted by orthogonality at 12:23 PM on January 17, 2009


Isn't uTorrent the default BT client now?

Funnily enough, I'm running my torrent client as I type this. Giving my peer connections a cursory look shows nearly everybody using either uTorrent or Azureus/Vuze, from a sample size of about 400 peers.

uTorrent is fine, from what I hear, but I would need to run another application (Wine) to run it. My experience with Azureus was pretty poor. At the moment I use Transmission and have been delighted: fast, lightweight, and fully configurable.

I'm pretty sure my ISP did something, although I'm not sure what. All I know is, the first month I forwarded my ports and began downloading every title I could think of, I was looking at download speeds of about 150-200Kb/s from around 20 peers. After that first month, this dropped to 20-50Kb/s from the same number of peers. I don't know if this is because I'd burned through all the popular titles in the first month and was now in more obscure territory, or because my ISP saw a huge spike in traffic and took action.

However, my way around this is to cast a wide net. I've opened the config file and set the maximum number of connected peers for downloading to 3000. In this way, if I'm connected to 150 peers, each giving me 1 or 2 Kb/s, I'm still getting a good download speed. It seems to have remedied my previously slow download speeds, anyway.

All the news in the post sounds very interesting. Thanks much for this!
posted by Marisa Stole the Precious Thing at 12:34 PM on January 17, 2009


orthogonality-

blasdelf actually showed a pretty neat one liner to gzip and scp a file at the same time, but again, depending on your situation, that may not be the most practical.

Real-time encryption systems still can't keep up with the rate internet speeds are going. Do you want a 100Mb/sec internet connection at your house, or do you want 20Mb/sec of encrypted traffic and a $2,000 dedicated VPN concentrator to provide it (and every location you connect to needs a similar concentrator at its end)?

And it still doesn't cover the fact that I know who you are talking to; I just don't know what it is you are talking about.
posted by mrzarquon at 12:39 PM on January 17, 2009


I'd take the 20Mb/sec encrypted, thanks, considering that's an order of magnitude faster than the average residential down pipe, never mind uploads.

Sure, encrypting at LAN speeds is nontrivial, but hey, more cores mean more spare cycles to encrypt stuff!
posted by Skorgu at 12:49 PM on January 17, 2009


I wonder if control by monopoly-like ISPs is ever going to get bad enough for people to set up their own alternative net, maybe via wireless. Of course, to be useful, it'd still need to interface with the main Internet somehow. Hmm.
posted by wastelands at 12:54 PM on January 17, 2009


orthogonality: you can encrypt or compress, but not both.

I don't understand why not. Why can't I compress and then encrypt? Isn't encryption content-neutral? And for that matter, if the compression is lossless, why can't I encrypt and then compress (other than maybe that the encryption adds detail to redundant data)?
posted by StickyCarpet at 1:02 PM on January 17, 2009


Skorgu- The point is that to get 20Mb/sec encrypted, you might need something like 40-100Mb/sec unencrypted to handle the actual overhead. So to get 3Mb/sec encrypted, you'd have to pay for a 6Mb/sec level of service.

So take your current internet speed right now, cut it in half, and pay the same price plus the added cost of dedicated encryption hardware. Which is why most people aren't bothering with it for full-time, all-the-time applications.

I'll take 100Mb/sec all the time, and then take reduced speed for the specific instances where I want encryption (ssh/sftp/scp/ssl for that).

Getting full-time encryption would require a massive standardization of platforms and general agreement on how to deploy it, which is why the only places you see serious encrypted infrastructures are configurations deployed by large organizations and corporations. There, someone can say, "OK, instead of dedicated private T1s/DS3s running back to corporate, we are just going to use really beefy Cisco ASAs at each location, drop in two 10Mb/sec DS3 circuits for internet connectivity, and then each ASA will be configured to talk to the others and cross-office traffic will go over that." This is also why every Cisco tech I have met is ultra paranoid and a little bit strange.

While not exactly a recent development, such solutions are picking up speed because the cost/performance ratio is now at an acceptable level and they provide much greater flexibility than all roads leading to Rome (usually all traffic would be funneled back to headquarters, which would then have to have the other end of every dedicated circuit, plus an internet connection that could sufficiently provide connectivity for every single computer in every office connected to it).
posted by mrzarquon at 1:13 PM on January 17, 2009


You can encrypt and compress; you just have to compress first. Compression also makes it harder to break encryption in some cases, though that's not terribly significant at the moment. All encryption protocols that I know of offhand do include a (usually optional) compression stage within the encryption stage.

And for that matter, if the compression is lossless, why can't I encrypt and then compress (other than maybe that the encryption adds detail to redundant data)?

This, on the other hand, doesn't work because encrypted data is indistinguishable from random noise, and you can't compress that.
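A toy demonstration of why the ordering matters. The "encryption" below is just XOR against a random keystream, standing in for a real cipher (whose output is likewise indistinguishable from random bytes); it is not something to actually use:

    import os
    import zlib

    plaintext = b"the quick brown fox jumps over the lazy dog " * 200

    def xor_keystream(data, keystream):
        # One-time-pad-style XOR; a stand-in for a real cipher's output.
        return bytes(b ^ k for b, k in zip(data, keystream))

    keystream = os.urandom(len(plaintext))

    # Compress first, then "encrypt": the savings survive.
    compressed = zlib.compress(plaintext)
    print("compress then encrypt:", len(plaintext), "->",
          len(xor_keystream(compressed, keystream)), "bytes")

    # "Encrypt" first, then try to compress: zlib finds no patterns to
    # exploit, so the output is no smaller (slightly larger, in fact).
    ciphertext = xor_keystream(plaintext, keystream)
    print("encrypt then compress:", len(plaintext), "->",
          len(zlib.compress(ciphertext)), "bytes")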

Anyway. The practical barriers to end-to-end encryption are lack of adoption and NAT. Adoption is the real barrier; you can't encrypt your connection to Metafilter until Metafilter supports encryption; multiply by the number of websites you visit. Remember how much of an uphill slog it was to get banks and ecommerce sites to encrypt just the credit-card submissions?
posted by hattifattener at 1:18 PM on January 17, 2009


Correct me if I'm wrong, but there are already ISPs that will use QoS on their networks, right? Back when I was researching this, I found a couple of ISPs that would offer QoS clauses in their service level agreements. This was quite a bit more expensive than the vanilla offerings, but it's essential for voice traffic between branch offices and things like that. I don't want my NY and LA branch offices to go through 12 devices before hitting each other; sometimes you need guaranteed latency and such.

But here's what I think is more realistic than a huge, all-knowing computer. Big telco companies realized in the 90s that they could leverage their competitive advantage in large-scale networking to provide WAN connections to the home. This is great for them: they can charge 60% profit margins on a monthly (!) product and they don't even have to provide content for it. Right now we're at the point where people are using this network and not paying fees for the other networks the telcos are running. This sucks for the telcos, because before, they were not strictly competing on price. DirecTV and TWC were completely different technologies, with different lineups. Now these companies are seeing what they fear most: a race to the bottom. Who can deliver the best speed at the lowest price? This is not good for them at all. So what do they do? Mark my words, within the next several years a big Internet provider will have a very public fight with a media company. Whether it's YouTube mysteriously going down for Verizon customers or severe caps by TWC, and whether it gets resolved or not, will not matter; the message to consumers will be clear: for traditional services like cable and telephone access, shell out the big bucks, because the scary WAN is a place where all kinds of things happen, a real wild west, but pay us $120/mo. and we'll make sure you don't hit usage limits during the Super Bowl.

This really has a chance to kill the fledgling industry of low-cost media providers like Netflix or Vudu. This is bad; these are legitimate companies making money. If they go down, suddenly the rallying cry from big companies becomes the pirates and extra-legal entities that are abusing the Internet by not sticking to e-mail. I'm convinced that most people don't want to uTorrent and RSS their favorite shows; they want to turn it on and forget it. What we see now is a huge price difference between free and a lot of money. You can beat free, but not at the price these companies want.

So I got off topic a little, but the idea that a shadow government will be planting devices inside networks is a lot less scary than death by a thousand cuts from large corporations, the latter being a much, much more likely scenario.
posted by geoff. at 1:22 PM on January 17, 2009 [1 favorite]


mrzarquon: Encryption can take a lot more CPU, but it doesn't multiply required bandwidth like that - 5 or 10% maybe, but not an additional 100%.
posted by Pronoiac at 2:06 PM on January 17, 2009


Besides the processing costs at both ends, there's the consideration that you can encrypt or compress, but not both.

You can't encrypt and then compress, but you can certainly compress and then encrypt. Look at all the encrypted zip files out there, for example. Or try putting a zip file in an encrypted volume with TrueCrypt. It works fine, and there's no reason it wouldn't.

The processing cost on the client would be minuscule compared to the processing cost of trying to do a man-in-the-middle attack.

As far as forging SSL certificates goes, that only works with MD5-signed certificates, and I think there are other options for signing. The computational time is still pretty extreme; I doubt it could be done in real time, and certainly not for millions of connections (although I suppose you could cache the certs).

But anyway, there is technology out there that can't be forged, and it is being used. It's certainly not true that you can spoof any SSL certificate.

And without spoofing, the fingerprint of the SSL cert would be wrong, and hopefully it wouldn't match up with the root certificates that come with your computer.
posted by delmoi at 2:31 PM on January 17, 2009


I was exaggerating a bit, but it depends greatly on the type of encryption involved and the equipment. Minimizing the overhead and getting closer to the full throughput of the connections involved costs more money. Then there is the question of how secure the encryption actually is.

As Hattifattener pointed out, you would also need universal adoption of a standard way to do it, and we still haven't gotten our sexy new IPv6 fully accepted yet. I do think the BitTorrent/UDP torrent developments are great, because they show a grassroots response to corporate attempts to maximize profit without actually increasing service levels.

And while, specifically for BitTorrent, the encryption in use is adequate to defeat deep packet inspection, in the US they've just found a simpler solution (because DPI traffic shaping was just overturned in court): throttle back the speeds of high-usage users. They aren't discriminating based on what you are downloading, just how much.

In the US, internet providers are desperately trying to prove they are more than just a utility service. It looks like the FCC will finally have someone with a spine in charge, and hopefully they will start asking the telcos where all that money we gave them to provide 20Mb/sec service by 2006 actually went.
posted by mrzarquon at 2:43 PM on January 17, 2009


>In the US, internet providers are desperately trying to prove they are more than just a utility service.

Ah ha, therein lies the rub. I would love it if my ISP would stop trying to be a "content portal" or what-have-you. Verizon has either definitely grokked this or is not far enough along with FiOS's Internet side to care, but either way they do exactly what I would expect. A distribution system has every right to try to sell me other services, but the one I do buy with the expectation of reaching other networks MUST perform as expected. Amazingly, XOHM (err, Clear) also works similarly; their product is very good about getting out of the way and letting me use The Tubes.

Hey, big ISPs: Take the money you would have spent on "content" and "portals," pile it all up, then spend it on infrastructure and support (US-based, plzkthx?) to allow you to sustain a higher market price for your now super-reliable and easily-contacted service. Or, don't spend that money at all. Either way, you're out no additional cash and your customers love you.
posted by fireoyster at 3:03 PM on January 17, 2009


Besides the processing costs at both ends, there's the consideration that you can encrypt or compress, but not both...

Why? Can't you compress the cleartext and then encrypt the compressed bit stream? As far as the encryption is concerned, it's all just bits in a row.
posted by Chocolate Pickle at 3:23 PM on January 17, 2009


there's the consideration that you can encrypt or compress, but not both.

I don't think that's true. You can certainly encrypt something that's compressed. A modern encryption algorithm won't care if there's less entropy in the original file. Cracking a file that's just a bunch of zeros should be exactly as difficult as cracking one full of random characters.

A lot of encryption implementations do compression at the same time. gpg does; you need to pass -z 0 to get it not to compress, because it compresses by default.
posted by damn dirty ape at 3:58 PM on January 17, 2009


As Hattifattener pointed out, you would also need universal adoption of a standard way to do it, and we still haven't gotten our new sexy IPv6 accepted fully yet
IPv6 and IPsec are completely unrelated to each other, mrzarquon. IPsec works just fine across the v4 internet.
posted by hattifattener at 7:26 PM on January 17, 2009


wastelands: "I wonder if control by monopoly-like ISPs is ever going to get bad enough for people to setup their own alternative net, maybe via wireless. Of course, to be useful, it'd still need to interface with the main Internet somehow. Hmm."

Outside of Cory Doctorow novels, this doesn't work too well. There are pretty serious scaling issues that crop up when you try to build mesh networks out past a handful of nodes. To avoid them you have to redesign your protocols to avoid 'supernodes' or similar chokepoints (basically, make all nodes as equal as possible), and you really can't effectively use such a mesh for wide-area broadband access: the 'downlink' nodes and their immediate peers get saturated very quickly, so you have to have ties into the wired network every few hops. Protocols designed for the modern Internet typically fare pretty poorly.

That's not to say that you couldn't, in theory, design a file-sharing protocol that was optimized for a mesh, but it would be tough given the tendency of such networks to have a small number of super-seeders involved in disproportionate numbers of transactions, which is at least my experience with them. (The only exception I can think of to this is Freenet, which was very aggressive about decentralization and pushing content out equally to edge nodes, but it always had terrible performance as a result. I haven't looked into it recently though.)

You can do some really interesting networking with wireless mesh topologies, but it's not well suited to bandwidth-intensive file sharing. The biggest successes I'm aware of (packet radio systems) mostly do asynchronous store-and-forward stuff, like email and messaging, and handle relatively little bandwidth.
posted by Kadin2048 at 9:03 PM on January 17, 2009 [4 favorites]


Hatti- I meant to illustrate that we are having problems getting any new standard adopted. In the case of IPv6, that probably needs to happen first (we're running out of IPs, which leads to extensive NATting, which IPsec may not work with), and getting the same parties to also adopt IPsec or something similar is going to be just as difficult.

The same goes for DNSSEC. The first standards worked because no one was really using the Internet as intensively as we are now; changes could be made in the alpha and beta stages. Now it is a fully shipped 1.x project, and we are looking at a much more painful 2.0 update.
posted by mrzarquon at 10:59 PM on January 17, 2009


mrzarquon: blasdelf actually showed a pretty neat one liner to gzip and scp a file at the same time, but again, depending on your situation, that may not be the most practical.

You misunderstand: in that case, you want the backup you're transferring to be compressed on the receiving end when it's written out.

If you just want compression over the wire, ssh (and thus scp) has had a -C option to do that since time immemorial.
posted by blasdelf at 3:04 AM on January 18, 2009


I love Tomato ML/PPP.
posted by chunking express at 8:21 AM on January 18, 2009


And yeah, you can totally compress a file and then encrypt it. In fact, this is a common first step for a lot of crypto implementations.
posted by chunking express at 8:23 AM on January 18, 2009


Just because nobody has explicitly laid it out: the reason you compress first and then encrypt is that compression relies on finding patterns in your data, and encryption relies on removing those patterns. Encrypted data should look just like line noise, and that's basically incompressible.

Once you've got the cycles to encrypt at line speed (AES in hardware), you're only talking about decrypt latency and per-packet overhead, which according to the spec is between 54 and 61 octets per packet, invariant. Granted, it's not nothing, but it's certainly not half your bandwidth, unless maybe you're imitating a 300-baud modem by whistling.
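Back-of-the-envelope on that per-packet figure, assuming full-size 1500-byte packets (the packet size is my assumption, not something from the spec):

    # Rough cost of 54-61 octets of per-packet overhead on full 1500-byte packets.
    MTU = 1500
    for overhead in (54, 61):
        print("%d octets per %d-byte packet: %.1f%% overhead"
              % (overhead, MTU, 100.0 * overhead / MTU))

Call it roughly 4%, which lines up with Pronoiac's 5-10% ballpark above.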
posted by Skorgu at 9:47 AM on January 18, 2009


blasdelf- I know about the -C option, but your code was about actually compressing a file and then sending it (so it would show up compressed at the other end). Depending on what you want to do, I found it a nifty one liner.
posted by mrzarquon at 12:40 PM on January 18, 2009


mrzarquon writes "Then there is the question of how secure the encryption actually is."

In this case it doesn't really matter. Even if dedicated hardware could decrypt it in a couple of minutes, and the ISP were willing to spend that kind of money on an end user, it still wouldn't be fast enough to packet-shape the traffic. Of course, they can just degrade your line, but that is perceived as less acceptable, at least up here.
posted by Mitheral at 5:29 PM on January 18, 2009




This thread has been archived and is closed to new comments