generative chip design
May 3, 2020 7:56 AM   Subscribe

AI Designs Computer Chips for More Powerful AI - "The designs are as good as, or even better than, what humans can manage... a machine learning algorithm that can turn a massive, complex netlist into an optimized physical chip design in about six hours. By comparison, conventional chip design, which is already highly automated but requires a human in the loop, takes several weeks."

Chip Design with Deep Reinforcement Learning - "While we show that we can generate optimized placements for Google accelerator chips (TPUs), our methods are applicable to any kind of chip (ASIC)... After all that hardware has done for machine learning, we believe that it is time for machine learning to return the favor."

Google Teaches AI To Play The Game Of Chip Design - "So we have a machine learning-based placement system, and what you can see is that it sort of spreads out the logic a bit more rather than having it in quite such a rectangular area, and that has enabled it to get improvements in both congestion and wire length. And we have got comparable or superhuman results on all the different IP blocks that we have tried so far."
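The linked pieces describe placement as an optimization over wirelength and congestion. As a rough, entirely hypothetical illustration of the classical baseline these learned placers are compared against (this is not Google's method; the netlist, grid size, and schedule here are all invented), a simulated-annealing placer minimizing half-perimeter wirelength might look like:

```python
import math
import random

def hpwl(placement, nets):
    """Half-perimeter wirelength: a standard proxy cost that both
    human-guided and learned placement flows try to minimize."""
    total = 0
    for net in nets:
        xs = [placement[cell][0] for cell in net]
        ys = [placement[cell][1] for cell in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal(cells, nets, grid=16, steps=5000, t0=4.0, seed=0):
    """Classical simulated annealing: jiggle one cell at a time,
    keep improvements, sometimes accept regressions early on."""
    rng = random.Random(seed)
    place = {c: (rng.randrange(grid), rng.randrange(grid)) for c in cells}
    cost = hpwl(place, nets)
    best, best_cost = dict(place), cost
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9
        cell = rng.choice(cells)
        old = place[cell]
        place[cell] = (rng.randrange(grid), rng.randrange(grid))
        new_cost = hpwl(place, nets)
        # Accept improvements outright; accept regressions with a
        # probability that shrinks as the temperature cools.
        if new_cost < cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = dict(place), cost
        else:
            place[cell] = old
    return best, best_cost

cells = [f"c{i}" for i in range(12)]
nets = [[f"c{i}", f"c{(i + 1) % 12}"] for i in range(12)]  # toy ring netlist
placed, wirelength = anneal(cells, nets)
print(wirelength)
```

The RL approach in the articles replaces this kind of hand-tuned search: a policy network places components sequentially and is rewarded for low wirelength and congestion, amortizing weeks of iteration into hours of inference.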

also btw...
Jim Keller: Moore's Law, Microprocessors, Abstractions, and First Principles - "Jim Keller is a legendary microprocessor engineer, having worked at AMD, Apple, Tesla, and now Intel. He's known for his work on the AMD K7, K8, K12 and Zen microarchitectures, Apple A4, A5 processors, and co-author of the specifications for the x86-64 instruction set and HyperTransport interconnect."
posted by kliuless (29 comments total) 17 users marked this as a favorite
 
The designs are as good as, or even better than, what humans can manage...

Do you want Skynet? Because this is how you get Skynet.

Seriously, it's like we make these movies for nothing.
posted by The Bellman at 8:10 AM on May 3, 2020 [22 favorites]


The designs are as good as, or even better than, what humans can manage...
Do you want Skynet? Because this is how you get Skynet.


It certainly is a strong step down the path to self-awareness. How do I improve upon myself?
posted by Thorzdad at 8:13 AM on May 3, 2020 [3 favorites]


Really, it's a long, long way off before we need to worry about AGI (the current acronym for 'generalized' AI) or some kind of computer that can think for itself. The Machine Learning and Big Data buzzwords are human-optimized linear algebra. Nothing would happen without an engineer tuning and tweaking the parameters for a very specific problem.

Incredibly powerful tools, amazing results, but just that: results for a very carefully defined problem. Ask the most advanced chess or Go champion computer to hand you a pencil and you won't just get a mistake, you'll get no response at all. Computers simply cannot do the understanding trick we super apes work out at about three years old.
posted by sammyo at 8:18 AM on May 3, 2020 [12 favorites]


sammyo, if AI wants to finish us off, it's pretty clear it doesn't need to understand the physical world much at all... just understand and tinker with our supply lines
posted by kokaku at 8:21 AM on May 3, 2020 [5 favorites]


Really cool, but also "just" another optimization problem. Before you start the procedure, there is nothing, after you stop it, there is still nothing. The machine knows no frustration, no desire to stop, no will to keep going.
posted by dmh at 8:38 AM on May 3, 2020 [4 favorites]


Totally with you there, kokaku. Why would an AI want to throw up a bunch of corrosive material into the atmosphere? Dust messes with robot arms as much as with humans. And they've been trained on games so much they'll likely keep tribes around just for the fun of the hunt. We are training them for 'fun', right?
posted by sammyo at 8:41 AM on May 3, 2020 [1 favorite]


The Machine Learning, Big Data, buzzwords are human optimized linear algebra.

The joke for years has been, it’s AI when you’re pitching it, ML when you’re hiring for it and linear algebra when you’re implementing it.

(I’ve also heard it referred to as “the LAMT stack” - Linear Algebra and Mechanical Turk.)
posted by mhoye at 8:51 AM on May 3, 2020 [40 favorites]


The right way to think about it, imo, is that any time there’s room to use a heuristic to guide your optimization solver somehow, there’s a good chance you can drop in a neural network to learn the heuristic instead of defining it yourself.
posted by vogon_poet at 8:57 AM on May 3, 2020 [4 favorites]
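A minimal, entirely hypothetical sketch of vogon_poet's point (every name here is invented): write the optimizer against a pluggable scoring function, and a trained network exposing the same signature can be dropped in without touching the search loop.

```python
def hand_heuristic(state, move):
    """Human-written guess at how promising a move is:
    prefer moves that bring the state toward zero."""
    return -abs(state + move)

def greedy_search(start, moves, score, steps=10):
    """Generic greedy search: at each step, take whichever
    move the scoring function rates highest."""
    state = start
    for _ in range(steps):
        state += max(moves, key=lambda m: score(state, m))
    return state

# Same search, different guide: a learned scorer could be
# swapped in here, e.g.
#   def learned_score(state, move): return model.predict(state, move)
print(greedy_search(7, [-2, -1, 0, 1, 2], hand_heuristic))
```

The search code never knows whether its guide was written by a human or learned from data, which is exactly what makes the neural drop-in cheap to try.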


You know how this all started, right? Paul Simon's "You Can Call Me Al"
posted by NoMich at 9:27 AM on May 3, 2020 [4 favorites]


It begins.
posted by ricochet biscuit at 9:43 AM on May 3, 2020


Came for the "do you want Skynet?" joke. Left satisfied.
posted by notoriety public at 9:49 AM on May 3, 2020 [3 favorites]


I'm guessing this kind of thing will gain importance when chip makers start going 3D and the existing placement heuristics go out the window.
posted by RobotVoodooPower at 10:24 AM on May 3, 2020


"We believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI with each fueling advances in the other," say Mirhoseini, Goldie and colleagues.

LOL, It's like they said this to get a rise out of Metafilter...
posted by Increase at 11:08 AM on May 3, 2020 [2 favorites]


Surprised no one's written a sci-fi story on the thesis that "Russian bot nets" are actually an AI/ML that is already in the wild and has decided the best way to save the planet is to be a multi-axis accelerationist.
posted by 99_ at 12:51 PM on May 3, 2020 [7 favorites]


I kinda wish not every thread about AI had to go right into the feasibility-of-AGI thing, but I get it.

Specific to this post - this is really neat. This is the sort of problem that neural networks can be really good at! And a crazy level of complexity even compared to something that is super complex like Go.

They have successfully taken a process that takes a human expert 6-8 weeks and changed the time taken to 24-40 hours with a comparable or better than human output at the end. That's rad. The nextplatform article linked in the OP is more technical and has more interesting charts imo. Cool stuff.
posted by lazaruslong at 1:13 PM on May 3, 2020 [8 favorites]


Yudkowsky must be vibrating so hard that he's entered a quantum superposition right now.
posted by clawsoon at 1:24 PM on May 3, 2020 [2 favorites]


> Before you start the procedure, there is nothing, after you stop it, there is still nothing. The machine knows no frustration, no desire to stop, no will to keep going.

Listen. Understand. That algorithm is out there. It can't be reasoned with, it can't be bargained with. It doesn't feel pity, or remorse, or fear, and it absolutely will not stop. Ever. Until you are dead.
posted by glonous keming at 2:08 PM on May 3, 2020 [3 favorites]


They have successfully taken a process that takes a human expert 6-8 weeks and changed the time taken to 24-40 hours with a comparable or better than human output at the end.

Tools built upon tools, built upon... Every steam engine article has a picture of an Aeolipile but doesn't explain why the ancient Greeks didn't build steam trains. Boilers explode if you don't have steel. And how do you get steel? Metallurgy, then small engines to roll out larger sheets of steel, then bigger engines.

The infamous Moore's Law held for a couple of generations, and some suggest we're near the end, but GPU tech seems to be accelerating. Cell phones have GPUs. What's mind-boggling becomes commonplace within a couple of tweets about a new app. Any idea how computationally intensive it is to add a silly doggo nose to a moving face?

lazaruslong's point about how cool this tool is, and the point about how it'll be crucial for 3D chips, are well made. Can a next-gen chip be designed and manufactured by automation? Not yet, but probably sooner than we think.
posted by sammyo at 2:38 PM on May 3, 2020 [2 favorites]


sammyo: Every steam engine article has a picture of an Aeolipile but doesn't explain why the ancient greeks didn't build steam trains. They explode if you don't have steel, how do you get steel, metallurgy, then small engines to roll out larger sheets of steel, then bigger engines.

Not to mention boring smooth holes in that steel to allow the tight seal needed for steam engines, which I believe was a by-product of a thousand years of development in gunpowder weapons.
posted by clawsoon at 2:58 PM on May 3, 2020 [2 favorites]


Came here for the "Colossus: The Forbin Project" joke, was disappointed.
posted by grimjeer at 3:10 PM on May 3, 2020 [1 favorite]


Surprised no one's written a sci-fi story on the thesis that "Russian bot nets" are actually an AI/ML that is already in the wild and has decided the best way to save the planet is to be a multi-axis accelerationist.

Not that I recall, but stories with more-or-less benign wild AIs are not uncommon. Two of my favourites are:
Maneki Neko by Bruce Sterling; and
Cat Pictures Please by Naomi Kritzer.
posted by Joe in Australia at 3:29 PM on May 3, 2020 [2 favorites]


See also Linda Nagata's series starting with (IIRC) The Red wherein --rot13spoilers-- Jung gurl guvax vf n znexrgvat (VVEP) NV vf pbagebyyvat n fznyy havg bs fbyqvref gb ryvzvangr jung vg guvaxf ner rkvfgragvny guerngf gb vgf shgher phfgbzref..
posted by GCU Sweet and Full of Grace at 3:38 PM on May 3, 2020


LOL, It's like they said this to get a rise out of Metafilter...

Yeah, articles along these lines are purpose-built to freak out people, or to separate organizations from their money for funding/investment, or both.

Auto-generated code has been around forever. ASICs have been around forever. "Try all these things and tell me which one worked best" and "try your own stuff and tell me which worked best" (what AI/ML is in this case) hasn't been around quite forever, but it's been around for a while now. Mashing up all three might be somewhat novel, but I guess "reducing ASIC design time for the emission control system in the Camry your grandma is going to buy" isn't as sexy as "OMG, Skynet anyone??????".

Also, L to O to the L to all the embedded systems engineers who will have to spend like 10x the time figuring out why and how the million ICs they fabbed from an auto-generated design are all fucked up, than it would have taken to just have those same peeps design the ASIC in the first place.
posted by sideshow at 3:53 PM on May 3, 2020 [5 favorites]
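As a hypothetical sketch of sideshow's two modes (the objective and candidate values here are made up for illustration): "try all these things" is a sweep over a fixed human-chosen menu, while "try your own stuff" lets the search propose its own candidates from what it has already seen.

```python
import random

def cost(x):
    """Stand-in design objective: lower is better, optimum at 3.7."""
    return (x - 3.7) ** 2

# Mode 1: evaluate a fixed menu of human-chosen designs.
menu = [0, 1, 2, 3, 4, 5]
best_fixed = min(menu, key=cost)

# Mode 2: the search proposes new candidates near its current best.
rng = random.Random(1)
best = 0.0
for _ in range(200):
    candidate = best + rng.uniform(-1, 1)  # self-proposed candidate
    if cost(candidate) < cost(best):
        best = candidate

print(best_fixed, round(best, 2))
```

The second mode can land between the menu entries, which is the whole appeal: the optimizer isn't limited to designs anyone thought to enumerate.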


This post reminded me that Westworld is on tonight!
posted by misterpatrick at 6:42 PM on May 3, 2020


When the only tool you have is a general AI, every problem looks like a deficiency of paperclips.
posted by thatwhichfalls at 7:08 PM on May 3, 2020 [2 favorites]


"Never trust the autorouter" just took on a whole 'nother level.
posted by pompomtom at 10:28 PM on May 3, 2020 [2 favorites]


This post reminded me that Westworld is on tonight!

Although I was perhaps one of the last to get that WW was running on different timelines all mixed up, with no clues that a flashback wasn't a flash-forward, the sci-fi addict in me just loves the show (team Clementine Pennyfeather). But the hard-'SF' part of me also gets that so much 'science fiction' is essentially fantasy: basically fairy stories in sciencey dress-up.

And the Google article above was partly about chip design but more specifically about a really interesting, growing technique of extending pre-trained models. ML is amazingly resource-intensive, as in a single training run using more computation than existed worldwide in the '70s. Libraries of pre-trained models may open up possible uses to folks who can't really afford weeks of supercomputer time.
posted by sammyo at 6:32 AM on May 4, 2020
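A toy illustration of the pre-trained-model idea sammyo describes (pure Python, with invented numbers; real transfer learning fine-tunes deep networks, but the shape of the workflow is the same): start from weights fitted elsewhere, then take a few cheap gradient steps on the new task instead of training from scratch.

```python
# Imagine these weights came out of someone else's expensive run.
pretrained_w, pretrained_b = 1.8, 0.3

def finetune(w, b, data, lr=0.05, epochs=50):
    """A brief burst of SGD on the new task only: compute the
    prediction error per example and nudge both parameters."""
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

# The new task is tiny: three points from y = 2x + 1.
new_task = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
w, b = finetune(pretrained_w, pretrained_b, new_task)
print(round(w, 2), round(b, 2))
```

Because the starting point is already close, a handful of epochs on three data points suffices, which is the economics sammyo is pointing at: the supercomputer weeks happen once, upstream.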


Yeah, I wish these discussions didn't tend to split into either the "AGI is coming to get you" or the "it's just linear algebra" camp. We're in the middle somewhere, and there are lots of legitimately amazing and fascinating (and terrifying) things coming out of the work. If there's anything humans do better than overhyping new technology, it's absorbing new advancements so quickly that something that seemed impossible (say, "solving" chess or Go, style transfer/StyleGAN, GPT-2-powered AI Dungeon, Boston Dynamics quadrupeds, etc.) becomes commonplace.
posted by gwint at 11:25 AM on May 4, 2020 [2 favorites]


I also tend to agree with Y.N. Harari that the sci-fi fears of a "conscious" AI are misplaced (he does a funny riff on this in his latest book). AI as we know it is structurally not really oriented toward recreating subjective emotions.

But hey, there's still lots to fear! 'Weapons of Math Destruction' (basic AI that is carelessly or cynically implemented as social tools on a wide scale) can do a lot to ruin or mess with our quality of life in the near future.
posted by ovvl at 4:14 PM on May 4, 2020 [1 favorite]




This thread has been archived and is closed to new comments