Visualizing Moore’s Law
December 10, 2019 1:47 AM

In 1965, [Gordon] Moore wrote that the number of components in a dense integrated circuit (i.e., transistors, resistors, diodes, or capacitors) had been doubling every year, and he predicted that this would continue for another decade. In 1975, he revised his prediction to a doubling every two years. Today's animation comes to us from DataGrapha, and it compares the predictions of Moore's Law with data from actual computer chip innovations between 1971 and 2019. Visualizing Moore's Law in Action (1971-2019)
posted by chavenet (37 comments total) 21 users marked this as a favorite
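
For readers who want the prediction as arithmetic: the 1975 version says the count doubles every two years. Here's a minimal sketch, using the Intel 4004's oft-cited ~2,300 transistors (1971) as a baseline; the baseline and the comparison are illustrative assumptions, not something the animation states:

```python
def moore_prediction(year, base_year=1971, base_count=2300, doubling_years=2):
    """Transistor count predicted by Moore's revised (1975) law."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# By 2019 the law predicts roughly 2300 * 2^24, i.e. ~39 billion transistors,
# in the same ballpark as the largest chips discussed in this thread.
for year in (1971, 1981, 1991, 2001, 2011, 2019):
    print(year, f"{moore_prediction(year):,.0f}")
```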
 
Nice but...
Where’s the chips from 1965-1971? Moore’s Law started with the 4004?
And the chips from before 1965, that he used to generate the law?
posted by MtDewd at 3:26 AM on December 10, 2019 [3 favorites]


Nice but...

Do what thou wilt shall be the whole of the Law.
posted by thelonius at 4:20 AM on December 10, 2019 [8 favorites]


Is the sudden increase in transistor count around 1999-2001 due to some technology breakthrough, or is it just what data are picked for the visualization?
posted by They sucked his brains out! at 4:56 AM on December 10, 2019 [2 favorites]


I noticed that big millennial jump as well; I know about Moore's law but never really looked at how processors were evolving in that much detail. The 1990s were not such a good decade in terms of keeping up with the prediction, but the early 2000s more than made up for it. I wonder if something similar will happen around 2025-2030.
posted by TedW at 6:31 AM on December 10, 2019 [1 favorite]


As I understand it, Moore’s Law is limited by how small the circuits can be printed and the size of the chip. Given these parameters, what is the practical limit for the number of transistors we will see on a chip?
posted by Big Al 8000 at 6:48 AM on December 10, 2019 [1 favorite]
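
A back-of-envelope answer, before the thread gets to it (the figures are assumptions of mine, not from the thread): the largest conventional die is bounded by the reticle limit, the roughly 26 × 33 mm field a stepper can expose in one shot, and leading-edge (5nm-class) logic density is commonly quoted around 130-140 million transistors per mm²:

```python
# Rough ceiling for a single conventional (non-wafer-scale) die:
reticle_mm2 = 26 * 33        # ~858 mm^2, the single-exposure reticle field limit
density_per_mm2 = 135e6      # assumed ~135 MTr/mm^2 for a 5nm-class process

print(f"~{reticle_mm2 * density_per_mm2 / 1e9:.0f} billion transistors")
# ~116 billion. Wafer-scale integration and 3D stacking, both of which
# come up later in the thread, are the main ways past this ceiling.
```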


Moore's Law describes technology that has not been invented yet. Given the context, discussions of practical limits tend to be more pessimistic than reality.
posted by ryanrs at 6:56 AM on December 10, 2019 [1 favorite]


The big jump around 2000 appears to be a couple things:
  • The rise of GPUs, which are parallel-processing integrated circuits that benefit significantly from increased numbers of transistors
  • The jump to 64-bit computing
  • Multi-core processors
posted by AzraelBrown at 7:02 AM on December 10, 2019 [1 favorite]


Throughout the 90s we didn't see drastic increases in transistor counts, but clock speeds increased about 20x over 10 years (~50 MHz to 1 GHz). Clock speeds have been pretty flat since about 2005.
posted by Foosnark at 7:12 AM on December 10, 2019 [6 favorites]
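
A quick check on that figure (my arithmetic, not Foosnark's): 20x over ten years works out to about 35% per year, a bit under the ~41% per year implied by a doubling every two years:

```python
clock_growth = (1000 / 50) ** (1 / 10)   # 50 MHz -> 1 GHz over 10 years
moore_growth = 2 ** (1 / 2)              # one doubling every 2 years

print(f"clocks: {clock_growth - 1:.0%}/yr, Moore: {moore_growth - 1:.0%}/yr")
# clocks: 35%/yr, Moore: 41%/yr
```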


The precise nature of what 'Moore's law' means seems to be a bit nebulous, too. Once upon a time I understood it to mean transistors per fixed unit of area, but the linked chart just uses total transistor count. But even then it gets tricky - does it mean total transistors per reticle? If not, it seems like the odd wafer-scale integration project should blow all the others out of the water. And if not that, 3D packaging techniques also seem to distort the original meaning of the term.

That all said, it's fun to see progress in the industry, even if what counts is a little arbitrary.
posted by Kikujiro's Summer at 7:16 AM on December 10, 2019 [2 favorites]


There were a ton of innovations in integrated circuit manufacture between about 1997 and 2001 - the height of Dot Com boosterism, along with heavy government investment in tech, pushed a ton of multi-processor manufacturing techniques.

CMOS manufacture is probably the big one, but it was incremental; there were just a ton of breakthroughs in scaling between 1996 and 1999.
posted by aspersioncast at 7:16 AM on December 10, 2019 [2 favorites]


Obviously they all waited until I bought a computer in 1998, and then made the big jumps in performance after that.
posted by Huffy Puffy at 7:43 AM on December 10, 2019 [10 favorites]


The current roadmap assumes we're going to hit hard limits on transistor size and cooling efficiency very soon. We're going to have to reach deep into that secret stash of Venusian technology.
posted by RobotVoodooPower at 8:53 AM on December 10, 2019 [2 favorites]


Or just write better software.
posted by howfar at 9:03 AM on December 10, 2019 [6 favorites]


Useful companion reading: List of semiconductor scale examples. Moore's law is a simplification of the general art of optimizing semiconductor manufacture. You hear about this a lot in terms of "process", which refers to how small the transistors and spacing are on the chip.

For instance, most Intel CPUs in use now are based on a 14nm process. They were supposed to switch to a 10nm process back in 2013, which gives you roughly twice as many transistors in the same area (14·14 ≈ 10·10·2). Famously, that engineering went poorly and Intel has been stuck at 14nm. Meanwhile AMD caught up and is selling their own 14nm processors (the Ryzen). The current cutting edge of consumer parts is 7nm. Mostly that's RAM, but also Apple's A12 system-on-a-chip is 7nm. These terms are somewhat marketing fluff, and the 7nm parts being sold now may have densities more like "real" 10nm parts.

Scaling stuff smaller doesn't just save space. Smaller stuff also runs faster and cooler, which is why Moore's law is often framed in terms of computer speed. There's a physical limit on just how small things can get before you literally don't have enough room to contain the electrons, but every time I read about that limit, it's about how it's being broken by some new innovation in how transistors are built. There's definitely a limit, but there's some disagreement on what it is, particularly in practice: some folks argue that going below 3nm won't be commercially viable.
posted by Nelson at 9:08 AM on December 10, 2019 [5 favorites]
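
A minimal sketch of the scaling arithmetic in that comment, assuming node names behave like literal feature sizes (which, as the comment notes, they often don't):

```python
def density_gain(old_nm, new_nm):
    """Area shrinks with the square of the linear feature size,
    so transistor density scales as (old/new)^2."""
    return (old_nm / new_nm) ** 2

print(density_gain(14, 10))  # ~1.96 -- roughly the 2x the comment describes
print(density_gain(14, 7))   # ~4x across two full node steps
```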


It sounds like everyone's worried about Moore's Law only being good through 2025, but, based on the animation, the last chip to actually meet its prediction came in 2017, and by the end of 2019 it looks like even instantly doubling the Graphcore GC2 would only barely meet it, so it kind of seems like we're already in the space where it no longer applies.
posted by Copronymus at 9:23 AM on December 10, 2019 [2 favorites]


On the topic of better software, input lag is a fun little attempt to quantify how shitty user interfaces are. Specifically, he's measuring the latency between when a key is pressed and when the character shows up on the screen (in a terminal or console). In general, the best latencies, 30-60ms, are from 1980s computers like an Apple //e. A 2014 MacBook Pro is 100ms. Basically, computer speed has no effect on this measure of latency.

It's a somewhat facile comparison; console windows aren't typical user experiences. And certainly the latency of important operations like "make a backup of my 100 megabyte movie" has improved enormously over the years. But it's still a telling comparison of how poorly user interfaces are optimized. There are so many sources of latency in a modern computer: wireless keyboards, task switching, screen lag...

The second half of the article measures mobile device screen scrolling, a slightly more realistic UI feature. In this case iOS is just way better than Android, presumably because of a good software choice on Apple's part.
posted by Nelson at 9:36 AM on December 10, 2019 [2 favorites]


the last chip to actually meet its prediction came in 2017, and by the end of 2019 it looks like even instantly doubling the Graphcore GC2 would only barely meet it, so it kind of seems like we're already in the space where it no longer applies.

The theoretical Moore's Law chip outpaced all development throughout the 1990s, too. Betting against science is a good way to lose money.
posted by Etrigan at 9:43 AM on December 10, 2019 [1 favorite]


I was surprised that the Pentium processors seemed so far behind the predicted density; I guess it was a triumph of marketing?
posted by TwoWordReview at 10:00 AM on December 10, 2019 [1 favorite]


I may be wrong, but it appears they completely ignored the PowerPC. Seems a pretty glaring omission considering how important that line remains even today.
posted by los pantalones del muerte at 10:13 AM on December 10, 2019 [2 favorites]


The next generation of photolithography is Extreme Ultraviolet Lithography, which is a hell of a thing.

This is where we are getting ~10nm features and pushing Moore's Law, at least as far as density/size.

A quick recap - the process of chip making involves using light and masks to expose and pattern photoresist chemicals on the silicon wafer for etching and doping processes, to turn that raw silicon wafer into patterns of "doped" PNP or NPN semiconductor junctions - AKA, transistors and other useful features.

And light is very complicated. The wavelength of the light determines the smallest features you can reliably image due to inherent - and quantum mechanical - properties of light. One of these challenges and limitations is diffraction.

There's complicated math involved, but there's a limit to how fine a detail or feature a given wavelength of light can image or stencil: if the wavelength is too large, it will either not pass through the smaller features of the stencil, or the focus/crispness of the image will blow out and degrade due to diffraction. See also: wave-particle duality and the double slit experiment, among other challenges.

So, what's next? Previously it was lasers and other high-quality light sources, most recently in the more conventional UV regime. UV has a very short wavelength and packs more energy per photon than, say, infrared. But it's still too long a wavelength for features in the 10-nanometer-and-under range, which is what we're aiming for now.

Well, it turns out that even generating consistent, repeatable EUV light is in itself not easy. Worse, you can't really pass EUV through regular glass or transmissive optical lenses and components - or even air. Basically all matter absorbs or blocks EUV light, including optically clear lenses.

So how are they doing it? They're vaporizing droplets of liquid tin - in mid-air, no less - with lasers, giving off a huge burst of EUV light. Which is then handled, focused and directed entirely with reflective optics (mirrors), not transmissive lenses. Which is then focused on the silicon wafer in a film or pool of water right in contact with the wafer to aid in focusing clarity and resolution.

And they have to do this for each step in the die. On every wafer, each active region or "die" is exposed individually, so the number of steps per die is multiplied by however many dies are on the wafer. A modern die usually has dozens of imaging and etching steps, so each wafer sees hundreds or thousands of exposure steps.

And they need to be able to repeat this with nanometer positioning and registration tolerances to be able to reliably image and etch ~10nm features.

The machine they're using to do this is the size of a large bus, is fantastically expensive, and has so many moving parts that the effort is up there with... oh, developing the first nuclear bombs. It involves an incredible amount of the state of the art in machine control, motion control, optics, lasers and EUV light generation.

Right now there are only a handful of EUV photolithography tools online and in production, because it's really that expensive, that difficult and really just that bonkers.

The first time I dived into this and tried to wrap my poor non-engineer brain around it I think I sprained something. It's totally mad.
posted by loquacious at 10:24 AM on December 10, 2019 [17 favorites]
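
The "complicated math" has a well-known first-order form, the Rayleigh criterion: the smallest printable feature is roughly CD = k1 · λ / NA, where λ is the wavelength, NA is the numerical aperture of the optics, and k1 is a process-dependent factor with a practical floor around 0.25. A sketch with typical published values (my numbers, not loquacious's):

```python
def min_feature_nm(wavelength_nm, numerical_aperture, k1):
    """Rayleigh criterion: smallest printable feature, CD = k1 * lambda / NA."""
    return k1 * wavelength_nm / numerical_aperture

# ArF immersion: 193 nm light, water immersion raises NA to ~1.35
print(f"ArF immersion: ~{min_feature_nm(193, 1.35, 0.3):.0f} nm")   # ~43 nm
# EUV: 13.5 nm light, all-mirror optics with NA ~0.33
print(f"EUV:           ~{min_feature_nm(13.5, 0.33, 0.4):.0f} nm")  # ~16 nm
# Features much finer than the ArF figure need multiple patterning passes,
# which is exactly the pressure that pushed the industry toward EUV.
```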


Wow, there are more EUV tools online this year than I thought. They're up to a whole 50 or so, but apparently not all of them are capable of HVM - high-volume manufacturing, which is where you see commodity chips.

Here's a fairly recent article, but on the scale of Moore's law right now it's probably already outdated.
posted by loquacious at 10:38 AM on December 10, 2019 [3 favorites]


Err, small correction: the current tools for EUV/EUVL don't seem to have water in contact with the wafer, but seem to hold or float it on a frame of some kind over the wafer.

The Immersion Lithography / ArF process is what I was remembering, which does use a water lens in contact with the wafer/photoresist layer.

EUV/EUVL is considered to be the next step beyond Immersion Lithography and ArF laser exposure.
posted by loquacious at 10:56 AM on December 10, 2019 [2 favorites]


It's interesting to look at the overall Moore's Law chart (with a log scale on the Y axis) and see which products fall above or below the diagonal Moore's Law trend line, i.e. which products outperform what we'd expect in a particular year.

Some of them are not particularly surprising because they were notable and considered significant advancements when they were introduced. E.g. the Motorola 68000, the Lisp Machine chip, the Itanium series generally.

But what's surprising is the commercial success of some significantly underperforming examples. ARM architecture products look like real dogs on the chart, sitting well under the line—the ARM 6 and ARM 9 are basically islands unto themselves in the years they were introduced—yet we know now that ARM is one of the most successful architectures ever developed, and some people predict it will eventually overtake x86 in the desktop market, where the latter has so far held out. (Apple is frequently predicted to be switching to ARM, which is believable since they use it in their iPhone/iPad silicon, and it would allow them to eliminate their dependence on Intel or AMD.)

It looks a bit like a hare/tortoise situation. ARM chips lagged in terms of transistor count for decades, presumably because they were targeted at less-expensive, less-complex devices, but in the last 10 years or so they have really caught up.
posted by Kadin2048 at 11:51 AM on December 10, 2019 [1 favorite]


I would argue that because that chart is just transistor count vs. year, it poorly captures a whole host of considerations. Transistor count per core might be more interesting, but that would be hard to distinguish effectively (how would you count shared caches? Or Cell-style/big.LITTLE asymmetric cores?). Otherwise, this chart just turns into process evolution * 'how many cores is it economical to stick in a single package at different points in time'.

I sort of feel like the most interesting metric would be single-threaded performance per watt across time, but I don't really have the data to put such a thing together.
posted by Kikujiro's Summer at 12:08 PM on December 10, 2019 [2 favorites]


Yeah the bit (ha!) that isn’t captured here is the die size. Most of the giant transistor counts are on huge server and special purpose supercomputer parts, not the chips you want for a low power/long battery life/portable application like a laptop or tablet or phone.

Transistor density is in many ways a better metric, but you are also confounded by the relationship between transistor layout/size and the interconnect density (the wiring that brings power and signal, without which this is all pointless). That's where the fluffy numbers from various manufacturers come in.

Scaling the manufacturing is hard work, kids.
posted by janell at 7:29 PM on December 10, 2019 [2 favorites]


Moore's Law is basically over and has been since 2015. The industry's roadmap body, the ITRS, which everyone treated as the oracle, literally fell apart because it wasn't (economically) interesting anymore. The famous log graph is effectively a flat line for the foreseeable future.

Also, the real harbinger was the end of Dennard scaling, the observation that as transistor density improved, voltage scaled down along with it. As feature size shrank, voltage hit a nonlinear wall due to fundamental limits of physics, not technological limits: any further transistor scaling would result in physically absurd power densities, and if you tried to scale voltage down further, you would introduce computational errors. Thus was ushered in the era of Dark Silicon (this is actually the term you'll see in today's papers), which refers to vast tracts of actual chips having to be dynamically shut down because they would otherwise run too hot.

The newest edition of the textbook that everyone uses in computer architecture goes over this in the first chapter; it talks about some of the newer approaches and trends (machine learning, clouds, open-source RISC-V) in order to prepare engineers for a new and less certain/lucky future. I don't work in the field, and it wasn't an issue back when I did my stuff in it, but it is a time of fascinating change and crisis.
posted by polymodus at 8:21 PM on December 10, 2019 [3 favorites]
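
A minimal sketch of why the end of Dennard scaling bites (my rendering of the standard argument, not polymodus's exact formulation): dynamic power per transistor goes roughly as C·V²·f. Under classic Dennard scaling, shrinking every dimension and the voltage by a factor k kept power density constant; once voltage stopped scaling, power density grew as k², hence dark silicon:

```python
def power_density_ratio(k, voltage_scales=True):
    """Relative power density after shrinking linear dimensions by a factor k.
    Per-transistor dynamic power ~ C * V^2 * f, with C ~ 1/k and f ~ k;
    transistor density grows as k^2."""
    c, f, density = 1 / k, k, k ** 2
    v = 1 / k if voltage_scales else 1.0
    return c * v ** 2 * f * density

print(power_density_ratio(2, voltage_scales=True))   # 1.0 -- the classic Dennard era
print(power_density_ratio(2, voltage_scales=False))  # 4.0 -- post-Dennard: 4x the heat
```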


Lol CPUs. Have TSMC print you an ARM core and move on with your life.

I deal with power devices more than CPUs, and the number of new devices in this sector is booming. Not just silicon, but SiC and GaN power transistors, too. From mobile devices at the low end to electric vehicles at the high end, there's a ton of research being done on improving power density and efficiency. It's a very exciting time to be designing power electronics.

Hobbyist RF is also experiencing a resurgence in interest with the development of SDR. This makes many state-of-the-art digital modulation schemes available to radio amateurs. Hopefully we'll see people upgrading from the 1200 baud AFSK that dominates VHF ham comms today (yeah, we're that far behind, it's fucking embarrassing).

We're also about to get some really cool commodity sensors from the car makers. Not just LIDAR from the autonomous car researchers, but new radars from normal modern car features like adaptive cruise control and emergency braking. Remember the cool millimeter wave radar tech from Snow Crash? 77 GHz car safety radar is that.

I'm not saying we're necessarily getting Snow Crash Smartwheel(tm) skateboards and electromagnetic harpoons, but there do seem to be an awful lot of people zipping around on crazy electric skateboards these days.
posted by ryanrs at 9:39 PM on December 10, 2019 [3 favorites]


The part of the ITRS that got spun off was new devices and systems, the IRDS. While transistors are used in devices, the typical meaning of Moore's law has been transistors as a metric for computation, and so from a computer science perspective there are big open questions about the future of realizable computation. The fact that TSMC is one of only four companies left in the market when there used to be dozens shows how pathological the situation has gotten.
posted by polymodus at 11:10 PM on December 10, 2019


One factor that isn't mentioned is yield, because chip makers never talk about that. But I was doing a bit of research on the Cerebras CS-1 recently - it's a computer built around a single 8" square wafer-scale CPU (the biggest you can get out of a 12" fab). The thing's insane - 1.2 trillion transistors, 404,000 CPU cores, 9 PB/s internal memory bandwidth (to 16 GB of fast on-chip RAM).

But the thing that makes it possible, unlike all previous wafer-scale devices, is that they're getting 99.95 percent processor yield out of the 16nm TSMC process that it's built on. That's 150-200 defects on average per wafer. In the mid-90s, CMOS CPU industry average line-end yield was 70 percent (one of the few firm figures I could find).

I had no idea that yield had got so good. I'm sure that's a product of that process being relatively mature by now, and it's going to be very small production runs for that product, but even so.
posted by Devonian at 4:10 AM on December 11, 2019 [2 favorites]


Just from the numbers you listed, it sounds like Cerebras is paving their wafer with tiny 3M transistor CPU cores. Any core that has defects can be disabled and routed around, so you don't need to trash the whole wafer.

Every modern big CPU does something similar with rows of cache memory. If a cache row is defective, just disable it and map another one in its place. Coincidentally, cache memory is where most CPU transistors are used these days, so this powerful technique works quite well to improve modern CPU yields.

Mid-90s CPUs didn't have these huge on-die caches, so they couldn't use this trick.

This yield improvement is more about CPU architecture trends over the decades.
posted by ryanrs at 5:04 PM on December 11, 2019 [2 favorites]
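
A minimal sketch of why redundancy changes the yield math so dramatically, using the figures quoted upthread (the exponential defect model is a textbook simplification, not anything Cerebras has published):

```python
import math

wafer_defects = 175     # midpoint of the 150-200 defects/wafer quoted above
cores = 404_000         # the Cerebras core count quoted above

# Monolithic logic: one defect anywhere kills the part. Even a large
# conventional die at a modest assumed defect density struggles:
defect_density = 0.1    # defects per cm^2, an illustrative assumption
die_cm2 = 8.0           # ~800 mm^2, around the size of the biggest GPUs
print(f"monolithic die yield: {math.exp(-defect_density * die_cm2):.0%}")  # ~45%

# Redundant cores: a defect merely disables one tiny core, which gets
# routed around, so nearly all of the wafer survives.
print(f"surviving cores:      {1 - wafer_defects / cores:.2%}")            # ~99.96%
```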


Intel's Manufacturing Roadmap, showing an orderly progression of their tick-tock model to release chips on a new process every two years. Make one simple obvious assumption and that implies Intel is planning on having 1.4nm chips in 2029. Of course plans aren't reality, as their troubles with 10nm have shown.
posted by Nelson at 7:49 AM on December 12, 2019
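
Taking the node names at face value (which, per the discussion above, is generous to the marketing), going from a 10nm process circa 2019 to 1.4nm in 2029 implies a density gain that would actually outrun Moore's Law; the arithmetic is mine, not Nelson's:

```python
# Density scales as the square of the (nominal) linear feature size.
implied_gain = (10 / 1.4) ** 2   # ~51x over the decade, if names were literal
moore_gain = 2 ** (10 / 2)       # 32x: five doublings in ten years

print(f"roadmap implies ~{implied_gain:.0f}x, Moore's Law predicts {moore_gain:.0f}x")
```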


A Bright Future for Moore's Law - "While FinFETs still have plenty of life, at some point in the near future the industry will transition to a new type of transistor architecture: Gate-All-Around (GAA) FETs, in which the gate wraps around the channel on all sides."
posted by kliuless at 4:48 AM on December 13, 2019


IIUC physics dictates a minimum amount of energy needed for any computation. At some point increases in density will be undesirable, because even if the whole thing doesn't melt, waste heat will drown out the processor's results.
posted by Joe in Australia at 8:11 PM on December 14, 2019
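
That minimum is presumably the Landauer limit: irreversibly erasing one bit must dissipate at least k_B·T·ln 2 of heat. The arithmetic (mine, not Joe's) shows how far away that floor still is:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300              # room temperature, K

landauer_j = k_B * T * math.log(2)   # ~2.87e-21 J per bit erased
print(f"{landauer_j:.2e} J per bit")

# A hypothetical chip erasing 10^18 bits/s at the limit would dissipate
# only ~3 mW; real chips burn tens of watts, so today's hardware sits
# many orders of magnitude above the floor described here.
print(f"{landauer_j * 1e18 * 1e3:.1f} mW")
```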


Yeahbut, the human brain using around 20 watts says that limit is pretty low.
posted by GCU Sweet and Full of Grace at 8:37 PM on December 14, 2019


Well, you would say that.
posted by Joe in Australia at 2:46 AM on December 15, 2019



