Don't let it get you down, the singularity is as near as it's ever been.
February 10, 2016 9:23 PM

Moore's law is dead, for real this time?
posted by sfenders (35 comments total) 9 users marked this as a favorite
 
.
posted by fredludd at 9:45 PM on February 10, 2016 [2 favorites]


Anyone watching the SSD market knows silicon tech is still improving at a rapid clip.
posted by ryanrs at 10:15 PM on February 10, 2016 [2 favorites]


Also, utility computing is getting really cheap for your average hobbyist. Think Amazon AWS / EC2, but at cut-rate prices out of cheap European datacenters. I lease a quad-core i7 with 16GB ram, 6TB disk, and 20TB data transfer for under $30/month out of a datacenter in Nuremberg.

That's a serious server on a gigabit connection for under $1/day.

This stuff really is getting faster and cheaper, and importantly, more accessible every year. It's great.
posted by ryanrs at 10:27 PM on February 10, 2016 [5 favorites]


There's a joker in the deck: diamond semiconductor. There are two ways to use it. First, you dope the diamond with greater or lesser amounts of boron to create conductive paths and/or junctions.

And the biggest deal is that as long as there isn't any oxygen in the case it can handle temperatures which would melt silicon.

But an interesting alternative is to give up on electric currents entirely and switch laser beams. The technology of light gates is still in its infancy and by comparison to silicon the device density is terrible but switching a light gate is something like a thousand times faster than switching a FET.

I don't think this has gotten very far, and maybe it's been abandoned entirely, but if it can be made to work, we might end up with terahertz CPUs.
posted by Chocolate Pickle at 10:32 PM on February 10, 2016 [1 favorite]


I actually wrote an academic history of Moore's Law a decade ago, and got to interview Gordon Moore (super nice guy!). The basic idea was that Moore's Law has changed in its definition over the past 50 or so years, measuring different things (components on a chip versus transistors versus speed) and has changed in its predicted doubling time as well.

There were even some errors with the initial graphs that later got smoothed out in future generations. For example, Moore thought bubble memory would be a thing in the late 1960s, and as a result there were a few years where no chips approached the Moore's Law predicted growth curve, though it later got back on track.

The idea that we have hit the limits of Moore's Law has been raised many times. It might finally be true, as it will be eventually, but each time so far the barrier has fallen or the nature of the Law has been adjusted to fit the new data....
posted by blahblahblah at 10:53 PM on February 10, 2016 [12 favorites]


I mean, we're still only using two dimensions.
posted by ryanrs at 10:56 PM on February 10, 2016


So we're still not getting anywhere with optical transistors?
posted by Apocryphon at 11:09 PM on February 10, 2016


If we're going to list the bits of computer tech that are crap, or not improving fast enough, then we really should be talking about residential internet speeds and mobile data caps (at least in the US). I feel those are wayyy more restrictive than just about anything else.

For a very long time, disk access speeds were brutal, and hardly improved over decades. Can't blame silicon for that though, since the problem was mechanical. But SSDs have cured that.

Um, 10G ethernet stuff costs way too much. I think everyone was hoping it would get cheaper a lot faster. And I don't think it's a sales volume issue, since datacenters are paved with 10G ports. So that's disappointing.
posted by ryanrs at 12:06 AM on February 11, 2016 [2 favorites]


You know, I just realized that I'm typing this on a 7 year old(!) laptop. And it's not crap. It doesn't feel like I'm using an ancient computer.

Maybe we're getting to the point where it's reasonable to expect personal computers to last 10 years? That's sort of neat.
posted by ryanrs at 12:14 AM on February 11, 2016 [13 favorites]


If we're going to list the bits of computer tech that are crap, or not improving fast enough, then we really should be talking about residential internet speeds and mobile data caps (at least in the US). I feel those are wayyy more restrictive than just about anything else.

That's a whole other thing, though. The articles in the FPP are talking about ways to build better trains, while you're talking about building a better railroad. There's lots of incentive to keep on improving technology when that hardware is owned directly by your many consumers and needs to be replaced frequently. Data networks, unfortunately, are giant, super-expensive things owned by a few corporations. The incentive for them is to do as much as they can with what they already have.

If the Internet was built on some other kind of technology where signals passed peer to peer directly between users, and there were no huge companies in the background keeping everything running, then there'd be lots of incentive to keep improving speeds.
posted by Kevin Street at 12:34 AM on February 11, 2016


As for the FPP itself, it's interesting that the semiconductor industry is no longer officially planning its development to coincide with Moore's Law. No doubt some breakthrough will come along sooner or later, like the carbon nanotubes in the third link - but that kind of thing can't be planned for with a spreadsheet. The existing technology of transistor miniaturization has gone almost as far as it can go, apparently.
posted by Kevin Street at 12:55 AM on February 11, 2016


So here's a thought: Does a slowing Moore's law make computers cheaper because they can stay in service longer? Computers have historically had dramatic depreciation, far faster than other types of machinery. Maybe if new computers aren't getting much faster, then old computers will be used longer?

For example, a top-of-the-line Mac Pro from 2010 had 12 cores @ 2.7 GHz. Today Apple's top-of-the-line Mac Pro has... 12 cores @ 2.7 GHz. The new Mac Pro is better than a six year old machine in many ways, but would you really consider paying $7,000 to get, what, thunderbolt ports?

Maybe we should be pricing computers not in $/compute/second, but $/compute. That is, look at cost of computation, not cost of rate of computation. Like buying a car based on expected lifetime mileage, rather than top speed.
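To make that concrete, here's a back-of-envelope sketch in Python; all the prices, throughput figures, and service lives below are made-up numbers for illustration, not real benchmarks:

```python
# Sketch of pricing machines by total lifetime compute ($/PFLOP of work done)
# rather than by compute rate. All figures are hypothetical.

def cost_per_compute(price_usd, gflops, service_years):
    """Dollars per petaFLOP of total work over the machine's service life."""
    seconds = service_years * 365 * 24 * 3600
    total_pflop = gflops * seconds / 1e6  # GFLOP -> PFLOP
    return price_usd / total_pflop

# A fast machine replaced after 3 years vs. a slower one kept for 10:
fast = cost_per_compute(price_usd=7000, gflops=500, service_years=3)
slow = cost_per_compute(price_usd=2000, gflops=300, service_years=10)
print(f"fast: ${fast:.3f}/PFLOP, slow: ${slow:.3f}/PFLOP")
```

On these made-up numbers the slower, longer-lived machine wins on $/compute even though it loses badly on $/compute/second - the car-by-lifetime-mileage argument in miniature.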
posted by ryanrs at 1:34 AM on February 11, 2016 [5 favorites]


Changing definitions aside, the original idea of Moore's law was the exponential increase of the number of structures (transistors) per chip surface area.

Structures on chips are made by lithography, i.e. by irradiating the chip surface with light. To make smaller structures, you need light with a shorter wavelength. The next step will be EUV light, which means extreme ultraviolet light, which has a wavelength of about 13 nm. EUV light is generated by using a pulsed laser to make a plasma. This turned out to be very difficult and is the real bottleneck at the moment. Basically, the world is waiting for someone to come up with an EUV lightsource that is stable enough at high energies that it can be used in commercial chip-making equipment.
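The resolution side of this can be sketched with the Rayleigh criterion, CD = k1 * wavelength / NA; the k1 and numerical-aperture values below are typical illustrative figures, not any particular vendor's spec:

```python
def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.25):
    """Rayleigh criterion: roughly the smallest printable feature size."""
    return k1 * wavelength_nm / numerical_aperture

# 193 nm immersion lithography vs. 13.5 nm EUV (illustrative NA values):
print(min_feature_nm(193, 1.35))   # roughly 36 nm
print(min_feature_nm(13.5, 0.33))  # roughly 10 nm
```

The shorter wavelength buys you smaller features even at a much lower NA, which is why the industry is willing to fight the plasma-source problems described above.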

‘These machines have to run for thousands of hours 24/7,’ he continued. ‘When we talk about HVM readiness, it is mainly to do with serviceability, uptime, spare parts, etc. It is not so much about the science of the technology – that has been demonstrated – but how does that technology integrate with the inspection machine and how do complete systems operate 24/7? This is what we call HVM readiness.’
posted by sour cream at 2:53 AM on February 11, 2016 [5 favorites]


I recently installed Photoshop CC on a five year old Lenovo laptop just because it was lying around and Adobe lets you install on two computers. I was worried that it would be a dog on that beat-up machine, which wasn't even very high-end to begin with, but it runs just fine.
posted by octothorpe at 3:35 AM on February 11, 2016


Do people actually believe that this is a "law" in the sense that they believe there are "laws of nature" such as the conservation of energy?
posted by thelonius at 3:40 AM on February 11, 2016 [1 favorite]


It's really more of a guideline ...
posted by Chitownfats at 3:49 AM on February 11, 2016 [3 favorites]


OK so, making a more complex chip just to make it meet an artificial threshold is the same thing as making a dial that turns up to 11. The extra computing power is window dressing if it isn't used. It is the banana hammock of computing - nice specs and if you can pull it off - I guess you get a very marginal improvement... but for the vast majority of applications, speeding up a computer while the limiting factor is the network it is on is like buying a Ferrari in a town with only dirt roads connecting it elsewhere. Sure you can speed down to the market, but seriously - you won't ever really get to take advantage of the ride.
posted by Nanukthedog at 4:03 AM on February 11, 2016


Has anyone noticed that systems on a chip (smartphones etc.) are merely increasing caches instead of making processors faster?

At least battery life is improving, I guess? IANAEE
posted by Yowser at 4:14 AM on February 11, 2016


OK so, making a more complex chip just to make it meet an artificial threshold is the same thing as making a dial that turns up to 11.

Not really. Often, the applications come only after the technology is there. For example, 50 or 60 years ago, some not-so-stupid people predicted that the computing power that you have in your mobile phone right now would be enough for all the computing needs of the entire world. Of course, that's ridiculous in hindsight, but it didn't seem so ridiculous 50 years ago.

So who knows what can be done with that additional computing power? Or more generally speaking, who knows where the ability to make ever smaller structures will take us? Maybe the true benefit will not be computing power but some other currently unforeseeable technological advance.
posted by sour cream at 4:46 AM on February 11, 2016 [4 favorites]


Could there be an alternative Moore's law where the physical space between CPUs is shrinking? Meaning, in 1975 CPUs were about 1 kilometer apart (a random guess), by 1985 CPUs were 100 meters apart. By the year 2000, CPUs were like 10 meters apart. Or something like that? The planet's total processing power is getting denser?
posted by ian1977 at 4:56 AM on February 11, 2016 [4 favorites]


For example, a top-of-the-line Mac Pro from 2010 had 12 cores @ 2.7 GHz. Today Apple's top-of-the-line Mac Pro has... 12 cores @ 2.7 GHz. The new Mac Pro is better than a six year old machine in many ways, but would you really consider paying $7,000 to get, what, thunderbolt ports?

Well, this is actually a good example of how computing design has been changing instead of just chasing faster processors. A new Mac Pro is a completely different machine than the 2010 version. (And if you're spending $7,000 then you've put a LOT of add-ons on the order.) The fastest processor is 3.7 GHz if I remember correctly. But the new Pro is designed around flash storage, fast memory, dual fast graphics cards, pretty much anything outside of the CPU that can make a system process tasks faster. In short, they are designed to remove as many of the system bottlenecks as possible that occur outside of the processor. Laptops are another example of the changing mindset. They used to be designed to pack as much computing power as possible into a laptop-sized package. Nowadays you're actually seeing slower processors designed around energy efficiency instead of pure speed. Of course, you can still get i7 laptops with 17" screens and lots of memory if you need a lot of computing oomph on the go.

Since the advent of microchips Moore's Law has always been there, but it seems at times it's been a bystander along for the ride as the technology progresses.
posted by azpenguin at 5:48 AM on February 11, 2016 [3 favorites]


Does this mean that at some point software designers will have to place a higher premium on efficient code to eliminate bloat and improve performance, because they can no longer take for granted that the hardware will compensate?
posted by echocollate at 5:48 AM on February 11, 2016 [3 favorites]


Um, 10G ethernet stuff costs way too much. I think everyone was hoping it would get cheaper a lot faster. And I don't think it's a sales volume issue, since datacenters are paved with 10G ports.

The thing really killing 10G is cabling. Yes, there's a standard for 10G over twisted pair, but it demands Cat 6a cable that tests out to 500MHz, and if you've tested out so-called Cat 6 cables, you know just how few make that spec, and 6a/500MHz is vastly harder to do. Everything else to connect it is even more expensive. Worse, compared to other media, the latency on 10G-T is much larger.

You're not going to see cheap 10G when the cables are $100 a throw -- and actual rated Cat 6A cables cost that because they are a bitch to terminate. For the same price, you can get SFP+ cables with much lower latency.
posted by eriko at 5:53 AM on February 11, 2016 [3 favorites]


Does this mean that at some point software designers will have to place a higher premium on efficient code to eliminate bloat and improve performance, because they can no longer take for granted that the hardware will compensate?

You're already seeing that, esp. in the Apple world, because most Mac/iOS devices spend most of their time running on battery, and the less CPU you use, the longer the battery lasts, and on portable devices, battery is everything.

And every single time I've had to look at why somebody was getting horrible battery life on an iOS device, it was because there was an application hammering the CPU for no reason.

As noted before, that's why there's a market for slower processors made with current technology. They use much less power, which means you either get better battery life *or* can have the same battery life using a smaller battery.

This is also why virtualization won, when people realized that most of the CPUs in the typical corporate datacenter weren't really doing much. So, rather than having 10 machines running at 10% CPU, you have two running at 50%.

If we're going to list the bits of computer tech that are crap, or not improving fast enough, then we really should be talking about residential internet speeds and mobile data caps (at least in the US). I feel those are wayyy more restrictive than just about anything else.

That's a market problem. The tech is there to fix that -- the people controlling the pipes don't want to spend to implement it and since they control the last mile, you deal with the crap they give you or you don't get Internet.
posted by eriko at 5:59 AM on February 11, 2016 [4 favorites]


So what?

As many comments above show, the state of the art more than meets the needs of every consumer except for immersive virtual reality gamers, since what most people need starts and ends with the collection, storage, processing and presentation of text, audio, pictures and video. Consumers don't need more speed. They need more battery life, uptime, security, and longevity in their electronics. Moore's Law isn't the solution for any of these. Moore's Law was the problem. It allowed engineers to kick the can down the road for those concerns. Now that Moore's Law is done, the real work is beginning.
posted by ocschwar at 7:12 AM on February 11, 2016


Does this mean that at some point software designers will have to place a higher premium on efficient code to eliminate bloat and improve performance, because they can no longer take for granted that the hardware will compensate?

They're coming around to this realization, as Amdahl's Law is also a thing. When the RISC vendors mostly got out of the hi-po game (only IBM and Oracle/Fujitsu are still doing anything interesting here), there was a collective shrug, as clustering commodity PC hardware was clearly the future of everything forever. Performance issue? MOAR CORES!

Right now, the trend is to try to move everything to a functional language paradigm, as it's much easier to handle concurrency. It's rapidly approaching its limits, and Google and Facebook are moving away from white-box generic PC servers to application-specific bespoke servers interconnected with specialized networking hardware, also bespoke. (The dreams of SDN enabling generic top-of-the-rack equipment to replace pricey Cisco or Juniper gear have given way to SDN enabling custom networking systems designed from the chips up for a specific application.)

There's not a lot of agreement as to where to move next - Apache, Mozilla, Google and Apple are all developing Frankenstein languages to try to tackle performance (which means efficient parallel processing) and robustness (which is always tied to security). There's not a tremendous amount to differentiate them (evolutionary rather than revolutionary; don't believe the hype), but the differences are enough to put people in intractable opposing camps depending on what they want out of a modern language.

Worse, there's a limit to the benefits of parallelization - Amdahl's Law - so you must either optimize your code for single-threaded performance, even if that means developing better compilers/optimizers/JITs/VMs for it to run on, or develop hardware that will process your code faster. Or both.
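Amdahl's Law itself is a one-liner; here's a quick sketch (the 95%-parallel figure is just an example):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Overall speedup when only a fraction of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Even a 95%-parallel program caps out near 1/0.05 = 20x,
# no matter how many cores you throw at it:
print(amdahl_speedup(0.95, 8))     # just under 6x
print(amdahl_speedup(0.95, 1024))  # around 19.6x
```

That hard ceiling set by the serial fraction is exactly why single-threaded performance and better compilers still matter.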

We're seeing an interesting trend, in that for a long while microprocessor design has been aiming to put as much of the computer in a single chip as is possible. Memory controllers, floating point math coprocessors, DSPs, etc. That trend is reversing, as special-purpose chips (GPUs in particular, though IBM has some more exotic stuff) are used for the kinds of computations they do best, even in non-graphics programs. Banks and banks of them, each having a zillion cores - parallel processing meets optimized silicon for specific applications. There are indications this trend will continue or even accelerate as manufacturing custom hardware keeps getting cheaper.

So, code optimization, automated and otherwise, will be increasingly important, as will advances in hardware, even if it's not just stuffing more transistors on a chip a'la Moore's Law.
posted by Slap*Happy at 8:34 AM on February 11, 2016 [1 favorite]


And the biggest deal is that as long as there isn't any oxygen in the case it can handle temperatures which would melt silicon.

For reference, that's anything over me.
posted by Celsius1414 at 10:03 AM on February 11, 2016 [5 favorites]


You know, I just realized that I'm typing this on a 7 year old(!) laptop. And it's not crap. It doesn't feel like I'm using an ancient computer.

Not only that, but for the last few years I’ve had the stunning realization that my computers do everything I want them to do, now. I’m not sitting around thinking "I wish it would do this or that." I’m not waiting for the next generation of computing. I haven’t even kept up on OS upgrades, and really 95% of the software and hardware upgrades I’ve made have been for compatibility, forced upgrades. It’s not something I would have ever expected, as someone who lived for this stuff years ago. Most new tech seems to be about better ad delivery and tracking; I don’t see what I need from it.
posted by bongo_x at 11:06 AM on February 11, 2016 [2 favorites]


That's what they've been saying, though. Demand follows capability. If there's some big breakthrough with carbon nanotubes, 3D processor blocks, topological insulators or whatever, then new applications will be developed to use that new processing power, and your computers won't seem as satisfactory as they used to. None of us know what we "need" until we see what's now possible, and our expectations change to fit.
posted by Kevin Street at 1:50 PM on February 11, 2016 [1 favorite]



That's what they've been saying, though. Demand follows capability.


I don't buy it. Capabilities have improved all along, but people for the most part have stopped buying new electronics except to replace ones that die. Maybe if holography becomes consumer-ready, it will drive up more demand for computing capacity. Otherwise, it's text, pics, sound and video, and those needs are already met by what's at hand today.
posted by ocschwar at 5:31 PM on February 11, 2016


people for the most part have stopped buying new electronics except to replace ones that die

This isn't true in the mobile phone industry - part of the issue is that desktop/laptop PCs are no longer getting the lion's share of R&D. The yearning need for more horsepower and the sore awareness of its lack is keenly felt on the iPhone or Android platforms, where people do eagerly await new OS revs and the cool new features they bring.

It's beginning to feel like the end of the workstation era all over again.
posted by Slap*Happy at 11:16 PM on February 11, 2016


That's what they've been saying, though. Demand follows capability.

I’m not talking about that though. I literally just want to stay with what I have, software-wise. I have been, and am again going to be, forced to upgrade only to keep current on software I have to have for work. And I don’t even want that.

I wouldn’t mind new computers, but I will probably not get new ones at this point. I want ones from a couple of years ago.
posted by bongo_x at 11:43 PM on February 11, 2016


Well, this is actually a good example of how computing design has been changing instead of just chasing faster processors. A new Mac Pro is a completely different machine than the 2010 version.

No it's not. Compared to my 2010 Mac Pro, the current Mac Pro is round instead of square, and the color is different.

I'm going to go through this point by point. Sorry if that seems argue-y and aggressive, I don't mean it that way. But the differences between these two machines are so slight, that I need to discuss each point.

if you're spending $7,000 then you've put a LOT of add ons on the order.

That's just from selecting the biggest, baddest processor option, which has more cores, but a lower clock. Granted that is probably non-optimal for most desktop use, but if you're encoding video or such, it's the fastest choice. Besides, if we're comparing two generations, compare best vs. best, right?

The fastest processor is 3.7 GHz if I remember correctly.

The lower core count CPUs are faster, but that was true back in 2010 too (though not up to 3.7 GHz I think, more like 2.8 or 3.0 GHz). But the fastest option, the 12-core CPU in 2010, has the same clock as the 12-core today.

the new Pro is designed around flash storage

The 2010 Mac Pro came with HDD as standard, but has an option for a 512GB SSD, which is actually larger than the base model SSD in today's Mac Pro. And of course, upgrading the HDD to an SSD is super easy.

fast memory

Nope, the 2010 is faster. Both machines use DDR3 ECC RAM. The 2010 at 1333 MHz, the current model at 1866 MHz. But the 2010 model has 6 memory channels (dual socket triple-channel Westmere CPUs) whereas the current one has a single socket Ivy Bridge-EP with only 4 channels. The max RAM capacity is the same for both models, 64GB.
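A rough sanity check of that claim (theoretical peak bandwidth only; DDR3 moves 8 bytes per channel per transfer, and real-world numbers will be lower):

```python
def peak_bandwidth_gbs(channels, megatransfers, bytes_per_transfer=8):
    """Theoretical peak DDR bandwidth in GB/s: channels x MT/s x 8 bytes."""
    return channels * megatransfers * bytes_per_transfer / 1000

print(peak_bandwidth_gbs(6, 1333))  # 2010 Mac Pro: ~64 GB/s
print(peak_bandwidth_gbs(4, 1866))  # 2013 Mac Pro: ~60 GB/s
```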

dual fast graphics cards

Yes. This is the one thing that has improved a ton in the last six years. The dual FirePro D600s are a lot faster than my Radeon HD 5870, and they have a ton more VRAM. As a consolation prize though, I can upgrade my GPU but the new Mac Pros can't.

pretty much anything outside of the CPU that can make a system process tasks faster.

Nah, man. It just ain't so. Six years later you get a newly styled case and a better GPU. You also get Thunderbolt, but my old Mac Pro has a pile of PCI Express slots and four drive bays, which I would argue is more useful. Also I got a DVD drive, heh.
posted by ryanrs at 12:09 AM on February 12, 2016


[having written all that out, it now occurs to me that maybe the new trash can Mac was hurried out the door and then not updated for the last few years, making this not actually a valid comparison of six years of computer innovation. oh well, it's still a good story]
posted by ryanrs at 12:13 AM on February 12, 2016


Do people actually believe that this is a "law" in the sense that they believe there are "laws of nature" such as the conservation of energy?

I think everyone realizes it's more of a theory or general guideline like Murphy's Law or highway speed limits rather than an actual, practical law like highway speed limits when you're being followed by a cop.
posted by dances with hamsters at 8:50 AM on February 12, 2016 [3 favorites]




This thread has been archived and is closed to new comments