The Other Net
July 1, 2014 7:48 AM

 
Hey, those people are using a 100 Gbps network and here I am using my 7.5 Mbps DSL line like a sucker.
posted by tommasz at 8:05 AM on July 1, 2014


I was all ready to make fun of OSCARS as a silly acronym but it actually seems neat. An SDN use case in the real world? Unpossible!

If only there were more implementation details.
posted by Skorgu at 8:05 AM on July 1, 2014


Here I am wondering why you'd ever need 100 Gbps, and at the same time realizing that soon that will sound as dated as "640K is more memory than anyone will ever need on a computer."

I welcome ARPANET 2.0.
posted by leotrotsky at 8:15 AM on July 1, 2014


ESnet computing helped find the Higgs boson.
posted by crazy_yeti at 8:20 AM on July 1, 2014


Hey, those people are using a 100 Gbps network and here I am using my 7.5 Mbps DSL line like a sucker.

I suspect they have almost exactly the same internet experience that you do. There's quite a bit of bandwidth restriction that goes on at the server end. And of course a ton of 'lag' comes from the servers themselves.
posted by Tell Me No Lies at 8:39 AM on July 1, 2014


And they accomplished all this without the use of pants.
posted by arcticseal at 8:46 AM on July 1, 2014 [1 favorite]


The important related point here is the point at which local storage becomes unnecessary: 100 Gb/s ≈ 12.5 GB/s, while SATA3 tops out around 600 MB/s. So this test ran at more than an order of magnitude higher bandwidth than a local hard drive.

My "Blast" internet from Comcast is ~105 Mb/s, or ~13 MB/s. So I'm still well over an order of magnitude below what I would need for throughput in the range of local disk.

Looks like we're still a technological generation away from Amazon running everyone's hard drives in the cloud.
posted by mikewebkist at 8:58 AM on July 1, 2014 [2 favorites]
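Spelling out that arithmetic (a rough sketch; the figures are the approximate ones from the comment above, and real links lose some capacity to protocol overhead):

```python
# Back-of-envelope throughput comparison; all numbers are ballpark.

def bytes_per_sec(bits_per_sec):
    """Convert a link rate in bits/s to bytes/s, ignoring overhead."""
    return bits_per_sec / 8

link_100g = bytes_per_sec(100e9)   # ~12.5 GB/s
sata3     = 600e6                  # ~600 MB/s SATA3 ceiling
cable     = bytes_per_sec(105e6)   # ~13 MB/s "Blast" tier

print(f"100 Gb/s link is {link_100g / sata3:.0f}x a SATA3 disk")   # ~21x
print(f"105 Mb/s cable is {sata3 / cable:.0f}x short of SATA3")    # ~46x
```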


The important related point here is the point at which local storage becomes unnecessary:

Because you can trust remote storage, and the remote storage firms, to secure your data and to fight far harder than you would if a warrant were issued for that data.
posted by rough ashlar at 9:36 AM on July 1, 2014


I run a scientific imaging lab, and we now have cameras that can stream data at 1 GB/s. As pointed out above, this actually puts severe demands on disk infrastructure to deal with transfer rates; we routinely run multiple SSDs in RAID0 to get the required speed. But it also means that our gigabit ethernet connections are woefully inadequate for moving data around, so we would love to have 10 Gb/s connections to something like AWS so we could use cloud computing to do data analysis.

Of course, if we want to have multiple machines streaming at 1 GB/s to the cloud the backbone needs to be a lot faster. I suspect that's a big part of the motivation for this - not that you need 100 Gb/s for a single project, but that you have multiple projects that need 1-10 Gb/s.
posted by pombe at 9:39 AM on July 1, 2014 [3 favorites]
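A quick sizing sketch for that kind of workload (the ~500 MB/s sustained write per SSD is an assumed figure, not one from the comment):

```python
import math

# Sizing a 1 GB/s camera stream against disks and network links.
stream   = 1e9        # camera output, bytes/s
ssd      = 500e6      # assumed sustained sequential write of one SATA SSD
gige     = 1e9 / 8    # gigabit ethernet, ~125 MB/s
ten_gige = 10e9 / 8   # 10 GbE, ~1.25 GB/s

print("SSDs needed in RAID0:", math.ceil(stream / ssd))    # 2
print(f"1 GbE is {stream / gige:.0f}x too slow")           # 8x
print(f"10 GbE leaves {ten_gige / stream:.2f}x headroom")  # 1.25x
```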


Full disk encryption works with local hard drives and could surely work with remote storage as well.

If I were a cloud storage company I'd WANT customers to encrypt all of their data BEFORE sending it to me for that very reason: if the cloud vendor can't decrypt the data, then they are not in a position where they have to decide if or when to give the FBI a customer's files.
posted by mikewebkist at 9:44 AM on July 1, 2014
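A minimal sketch of that encrypt-before-upload idea, assuming Python's `cryptography` package (the upload step is a hypothetical stand-in for whatever cloud API is in use):

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # stays with the customer; the vendor never sees it
cipher = Fernet(key)

with open("dataset.bin", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

# upload(ciphertext)  # hypothetical: the provider stores opaque bytes only

# Later, only the key holder can recover the plaintext:
plaintext = cipher.decrypt(ciphertext)
```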


Not sure how much of a "meh" to make of the 100 Gbps press release. Dark fiber + DWDM + inverse muxing techniques have made these kinds of single-stream WAN transfer rates possible for at least a decade. Am I missing something?
posted by ZenMasterThis at 10:22 AM on July 1, 2014


ZenMasterThis - this is on a switched multi-node network. Yes, you could get full speed to a remote SAN over dark fiber, but getting it point to point across a "public" internet is an order of magnitude more difficult. ESnet and "Internet2" are dedicated to finding the bottlenecks that pop up as they ramp up the speeds. This is just another milestone to that end.
posted by PissOnYourParade at 10:29 AM on July 1, 2014 [3 favorites]


It's always fun to be reminded that the network technology I'm watching the World Cup on was invented for sharing Physics data.
posted by benito.strauss at 11:15 AM on July 1, 2014 [1 favorite]


The important related point here is the point at which local storage becomes unnecessary

The latency is always the killer. It's why 1 Mbps, 10 Mbps, and 100 Mbps internet produce mostly-similar surfing experiences, and it's why SSDs are such tremendous upgrades. There are plenty of clever tricks to improve bandwidth, disguise poor bandwidth, or avoid the need for bandwidth, but solving the latency issue is much harder.
posted by Western Infidels at 2:27 PM on July 1, 2014
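A toy model of why that is: a page load is dozens of sequential round trips plus a transfer, so past a certain point extra bandwidth barely moves the total (all figures here are illustrative assumptions):

```python
# Load time = (round trips x RTT) + (page size / bandwidth).
def page_load(round_trips, page_bytes, rtt_s, bandwidth_bps):
    return round_trips * rtt_s + page_bytes * 8 / bandwidth_bps

rtt, page = 0.040, 2e6     # 40 ms RTT, a 2 MB page, 30 round trips
for mbps in (1, 10, 100):
    print(f"{mbps:>3} Mbps: {page_load(30, page, rtt, mbps * 1e6):.1f} s")
# ->  1 Mbps: 17.2 s / 10 Mbps: 2.8 s / 100 Mbps: 1.4 s
# The last 10x of bandwidth buys only ~2x, and nothing here touches
# the 1.2 s round-trip floor.
```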


I welcome ARPANET 2.0.

Internet2, an (ooo scary) "shadow Internet" exploring (and implementing) next-gen Internet tech, has been around since 1997.

If I were a cloud storage company I'd WANT customers to encrypt all of their data BEFORE sending it to me

No, you wouldn't. That means all of the block-level storage optimizations that keep you from blowing your I/O budget go out the window. There's no way for me to say "the user changed one byte 12,342 bytes into the file (or disk), so just replace that." In fact, depending on how you encrypt the data, I might not be able to avoid rewriting the entire file, or even the entire disk image, over a single bit change. And Bruce points out another reason why you might not want to do that.

This is interesting, but as PissOnYourParade points out, it's an incremental achievement. I've got 10 Gbps connections into my hosting centers, and all of my providers' hosting centers are already connected by 100 Gbps links (although there are plenty of pokey 10G & 40G connections scattered about). So, other than driving that down to the storage level, the Internet is pretty much doing this already.
posted by kjs3 at 7:20 PM on July 1, 2014
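kjs3's point about losing block-level optimizations is easy to demonstrate: with a scheme like Fernet (random IV, whole-file token), re-encrypting after a one-byte change yields a completely different ciphertext, so a delta-sync engine would have to re-upload everything (a sketch, again assuming the `cryptography` package):

```python
from cryptography.fernet import Fernet

cipher = Fernet(Fernet.generate_key())

data = bytearray(1024 * 1024)              # a 1 MB file
before = cipher.encrypt(bytes(data))

data[12_342] ^= 0xFF                       # change one byte mid-file
after = cipher.encrypt(bytes(data))

# Compare fixed 4 KB blocks the way a dedup/delta engine might:
blocks = lambda b: [b[i:i + 4096] for i in range(0, len(b), 4096)]
changed = sum(a != b for a, b in zip(blocks(before), blocks(after)))
print(f"{changed}/{len(blocks(before))} blocks differ")   # effectively all
# (The random IV alone guarantees this: even an unchanged file
# re-encrypts to entirely new bytes.)
```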


100 Gbps links are fairly common. This was always the problem with Internet2 - the announcements focused on bandwidth, but the gear was off the shelf - Cisco, Juniper, etc. The research around making TCP work on such channels was interesting, but not terribly exciting.
posted by rr at 8:29 PM on July 1, 2014
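One classic issue behind the TCP research rr mentions is the bandwidth-delay product: a sender can only keep a link full if its window covers all the bytes in flight, and that window gets enormous on long fat pipes (a quick illustration, assuming a 50 ms coast-to-coast RTT):

```python
# Bandwidth-delay product: bytes that must be in flight to fill the pipe.
def bdp_bytes(bandwidth_bps, rtt_s):
    return bandwidth_bps * rtt_s / 8

for gbps in (1, 10, 100):
    mb = bdp_bytes(gbps * 1e9, 0.050) / 1e6
    print(f"{gbps:>3} Gbps needs a >= {mb:.0f} MB window")
# 100 Gbps at 50 ms needs ~625 MB in flight -- far beyond classic TCP's
# 64 KB window, hence window scaling (RFC 1323) and carefully tuned buffers.
```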



