The Unbelievable Zombie Comeback of Analog Computing
April 4, 2023 2:00 PM   Subscribe

"Bringing back analog computers in much more advanced forms than their historic ancestors will change the world of computing drastically and forever."... I consulted Lyle Bickley, a founding member of the Computer History Museum in Mountain View, California. ... “A lot of Silicon Valley companies have secret projects doing analog chips,” he told me. Really? But why? “Because they take so little power.”

Bickley explained that when, say, brute-force natural-language AI systems distill millions of words from the internet, the process is insanely power hungry. The human brain runs on a small amount of electricity, he said, about 20 watts. (That’s the same as a light bulb.) “Yet if we try to do the same thing with digital computers, it takes megawatts.” For that kind of application, digital is “not going to work. It’s not a smart way to do it.” ...

When I asked [Ning] Guo about possible applications, he had to think for a bit. Instead of mentioning AI, he suggested tasks such as simulating a lot of moving mechanical joints that would be rigidly connected to each other in robotics. Then, unlike many engineers, he allowed himself to speculate.

There are diminishing returns on the digital model, he said, yet it still dominates the industry. “If we applied as many people and as much money to the analog domain, I think we could have some kind of analog coprocessing happening to accelerate the existing algorithms. Digital computers are very good at scalability. Analog is very good at complex interactions between variables. In the future, we may combine these advantages.”
posted by Artifice_Eternity (43 comments total) 22 users marked this as a favorite
 
This is a subscriber-locked article.
posted by seanmpuckett at 2:12 PM on April 4, 2023


Archive link
posted by zsazsa at 2:14 PM on April 4, 2023


20 watts. (That’s the same as a light bulb.)

A dim bulb. So like me.
posted by Splunge at 2:27 PM on April 4, 2023 [12 favorites]


I can remember building simple analogue computers in the 80s at school. We used them to simulate various types of motion, not having any access to digital computers (they were in another part of the school).
posted by pipeski at 2:51 PM on April 4, 2023 [6 favorites]


Splunge beat me to the joke. Great dimbulbs think alike!
posted by Greg_Ace at 2:56 PM on April 4, 2023 [6 favorites]


I can remember building simple analogue computers in the 80s at school. We used them to simulate various types of motion, not having any access to digital computers (they were in another part of the school).

there is a short story, if not an epic 3-part saga, buried in this comment
posted by elkevelvet at 3:13 PM on April 4, 2023 [4 favorites]


I'm actually not sure what exact distinction is being made and what benefit there is to be gained. As I understand things, all integrated circuits (ICs) are analog (i.e. they are physical) and 'digital' is just an interpretation of the voltages that are being passed (or not) through all the various elements (mostly transistors) on the die. If you look at the waveform being emitted by a specific pin of an IC you can clearly see this: it's just voltage switching on and off, going up and down. Also: no matter what, the more complex a circuit becomes, the more elements etched on the die, the more power it draws and the more heat it's going to emit. If one made a so-called analog IC with enough transistors to do something at the scale of a modern 'digital' IC, I think you'd still encounter the same physical limitations.

Or am I off in the weeds here?
posted by Insert Clever Name Here at 3:13 PM on April 4, 2023 [5 favorites]


It’s interesting that math fluency is such a barrier to implementing analog computing. I’m back in school as a mid-career adult, currently in my third semester of calculus. Many people in engineering and computing say some version of “differential equations, ugh. Just survive it.” It’s a surprisingly anti-intellectual attitude from people who usually are quite proud of their quantitative skills.
posted by Headfullofair at 3:25 PM on April 4, 2023 [6 favorites]


I remember having circuit boards with woven memory hanging on my walls as art. Wish I knew what happened to them.
posted by Splunge at 3:26 PM on April 4, 2023 [3 favorites]


We'll just ask Chat-GPT500 or Deep Mind 1000 or whatever passes for it in about 10 years to design this analogue computer for us, it will perform the equivalent of 50 million years of human research and tinkering within a few seconds and pick the best design and show it to us so we can improve its hardware like the obedient minions we are...
posted by xdvesper at 3:27 PM on April 4, 2023 [3 favorites]


Exciting. Wonder if any efforts to do neural networks as a massive parallel analog chip could work.
posted by BrotherCaine at 3:28 PM on April 4, 2023


A 32-bit digital computer can represent about 4 billion different numbers; typically interpreted as either between zero and four billion, or between negative two billion and two billion. If I have 64 wires, and their voltage represents two 32-bit numbers, I can build a simple circuit with a couple of hundred transistors that will produce a new 33-bit output that represents the result of adding those two numbers together.

If I could instead just have two wires, each with an analog voltage, it's easy to imagine a much simpler circuit, consisting of only a couple of transistors, producing output equal to the sum of those two inputs.

Keeping the voltage fluctuations small enough to reliably distinguish eight billion possible results is left as an exercise for the reader.
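For a sense of scale, here's a back-of-envelope sketch in Python (the noise figure is invented, but not unusual for ordinary electronics):

```python
import math

# An imagined analog adder whose output spans 0-5 V, with 1 mV RMS noise.
v_range = 5.0        # full-scale output swing, volts
v_noise = 0.001      # RMS noise, volts (made-up but plausible)

# Treat ~6 sigma of noise as the smallest reliably separable step,
# then count how many distinct output levels survive.
levels = v_range / (6 * v_noise)
print(f"distinguishable levels: ~{levels:.0f}")       # ~833
print(f"equivalent bits: ~{math.log2(levels):.1f}")   # ~9.7

# Eight billion results (33 bits) would need noise below:
print(f"noise needed for 33 bits: {v_range / (6 * 2**33):.2e} V")  # ~1e-10 V
```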
posted by Hatashran at 3:33 PM on April 4, 2023 [16 favorites]


I'm actually not sure what exact distinction is being made and what benefit there is to be gained.

It's about the meaning associated with voltages (or whatever, you can use current too in different settings, for example).

Digital circuits mean that you have a fixed, integer number of interpretations of voltages. We're mostly used to circuits where there are only two interpretations: on and off, corresponding to voltages over or under certain thresholds. There are other settings (like some kinds of flash memory) where the voltage (or whatever) can sit in one of several tiers, taking on one of several values (say 0,1,2,3,4,5,6,7 instead of just 0,1). But that's still digital: there's only a fixed, integer number of meanings.

Most analogue circuits have voltages (or currents, or whatever) that can lie in a continuous range. There's an infinite number of values they can take: anything between 0.0V and 5.0V, for example. In principle any difference is meaningful (so 2.00000000 is a different value from 2.00000001, for example), though in practice it may not matter for your application, and your analogue circuit might have enough noise that it can't reliably distinguish differences smaller than a certain amount.
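A toy Python sketch of those readings of the same wire (thresholds and tier counts made up for illustration):

```python
def read_binary(v, threshold=2.5):
    """Two meanings: 0 or 1."""
    return 1 if v >= threshold else 0

def read_multilevel(v, levels=8, v_max=5.0):
    """Eight meanings (like some flash memory): still digital."""
    step = v_max / levels
    return min(int(v / step), levels - 1)

def read_analog(v):
    """The voltage IS the value: infinitely many possible meanings."""
    return v

v = 2.00000001
print(read_binary(v), read_multilevel(v), read_analog(v))
# -> 0 3 2.00000001
```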

The two different approaches have various relative advantages and disadvantages, as one would expect.
posted by Chef Flamboyardee at 3:34 PM on April 4, 2023 [8 favorites]


I'm actually not sure what exact distinction is being made and what benefit there is to be gained.

To do a simple integration digitally, one approximates the inputs, executes an algorithm, then transforms the digital output into a (very close) approximation of the desired result. Many, many steps; and although digital computers are insanely fast, the analog version has essentially one step: set the input, read the output.

Analog computers were generally single-purpose; each was built for one job, whereas a digital chip can be reprogrammed for anything.
posted by sammyo at 3:42 PM on April 4, 2023 [3 favorites]


Thanks to everyone who responded to my question. I get how this works.

The challenge, of course, would be making the distinctions between 'steps' fine-grained enough (and at some kind of reasonable speed) for this to be taken advantage of.
posted by Insert Clever Name Here at 3:58 PM on April 4, 2023


There's been a certain amount of semantic drift in the everyday meaning of the word "analog" over the last century or so. Nowadays people often use "analog" to mean "not digital," but that's not really how the terms were originally defined, and specialists still use the terms with their original meaning.

A digital computer is one in which certain physical states are assigned symbolic meaning, and the computations are performed using those symbols by following specified algorithms. The exact implementation can vary and is not important for the computation. A common misconception is that digital computation means binary computation, but this isn't true. For example, there were computers designed in the Soviet Union during the 60s/70s that used three-state circuits (negative voltage, ground voltage, and positive voltage) to represent ternary numbers; these were digital computers just as much as the binary computers we're more familiar with today. The important thing is that the computation is performed on symbols, which are implemented with some stable state in the computer. So for example, if you want to compute the integral of a function with a digital computer, you might write an algorithm that takes values of the function at each step of its parameter, and adds each value, scaled by the step size, to an accumulator.

An analog computer is one in which the physical states represent the computed values by analogy, hence the name. For electronics, this is usually voltage or current, but really it can be anything. The important thing is that the computation is performed directly on the physical state by constructing a system that behaves the same way as the desired computation. So for example, if you want to take the integral of a function with an analog computer, you might wire a circuit that uses a capacitor to store charge along with an amplifier to buffer the input and keep the charge from leaking, then represent the function to be integrated as a time-varying voltage input to that circuit, and the result of the computation as the voltage of its output.
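A minimal Python sketch of those two styles, with the 'analog' half merely simulating the capacitor physics (component values idealized, and mine):

```python
import math

f = math.cos          # integrate cos(t) from 0 to pi/2; exact answer is 1
dt = 0.001            # parameter step for the digital algorithm

# Digital: symbols plus an algorithm -- step the parameter, accumulate.
acc, t = 0.0, 0.0
while t < math.pi / 2:
    acc += f(t) * dt
    t += dt
print(f"digital accumulator: {acc:.4f}")

# Analog (simulated): a capacitor's voltage integrates its input current
# directly, dV/dt = I(t)/C.  A real op-amp integrator does this in one
# continuous physical step; here we can only mimic the physics.
C, v_out, t = 1.0, 0.0, 0.0
while t < math.pi / 2:
    v_out += (f(t) / C) * dt   # charge accumulating on the capacitor
    t += dt
print(f"simulated capacitor: {v_out:.4f}")
```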

As a personal opinion, I would argue that thinking of digital vs. analog as a dichotomy has limited our ability to think creatively about computation. Arguably, and this is definitely an idiosyncratic opinion, things like neural networks and quantum computers are neither digital nor analog computers, though both are more similar to analog computers than to digital ones.
posted by biogeo at 4:03 PM on April 4, 2023 [33 favorites]


We'll just ask Chat-GPT500 or Deep Mind 1000 or whatever passes for it in about 10 years

multivac
posted by Clowder of bats at 4:30 PM on April 4, 2023 [2 favorites]


Ironically, the reason we stopped using analog computers in the 60s and 70s was that, for the problems being solved at the time, electronic analog computers were heavier and more power hungry than their digital equivalents. For decades after their invention, transistors were mainly used in analog circuits, replacing vacuum tubes or mechanical computers that required high-precision machining of incredibly complex cams and gear trains.

One of the other reasons we switched to digital computing is that variation in the manufacturing process meant you had to characterize the behavior of individual transistors and manually tune a circuit to deal with the part-to-part variations. When the circuit doesn't depend on the exact voltage/current curve, and instead just needs the transistor to be hard on above a certain voltage and hard off below a certain voltage (or vice versa), and those voltages are reasonably far apart, the part-to-part differences don't matter much. It makes manufacturing the end product a hell of a lot easier and means that many fewer parts have to be thrown away, making them a hell of a lot cheaper to buy, since you're not throwing away 90% of your production.
posted by wierdo at 5:11 PM on April 4, 2023 [13 favorites]


I think analog computers are neat, so I have several Comdyna GP-6's, and my favorite, the Heathkit EC-1, which uses vacuum tubes. This weekend I had GPT-4 walk me through programming an analog computer to simulate a guided missile doing line of sight targeting against a moving target with linear, circular and sinusoidal behavior.

If you do this in Python, you get a bunch of numbers, which you can plot, to show the progression over time. But it turns out, the equations for the targeting are the same as the equations for the electrical behavior of a high-gain amplifier, capacitor and feedback resistor, and it's the same equations if you'd built it out of pipes, water and pumps. (There are, actually, hydraulic analog computers.) The equations are the same for the movement of the missile as the flow of liquid as the fluctuations in voltage.

I'd always thought of digital as on/off, and analog as smooth. But the analog computer solves problems by analogy. I love that.
posted by bigbigdog at 5:15 PM on April 4, 2023 [14 favorites]


Ahem. I got my degree in this. 25 years ago, to be fair, but people were making the same analog noises back then as they do now. Here is a fairly breezy but reasonable explanation of why it's a very hard trick indeed to pull off:

A big part of "why clocked circuits" is temperature. At different temperatures, gates (the things that decide how much signal makes it through) operate at slightly different speeds. Meanwhile, the speed at which the signal propagates down the traces (the little wire paths that interconnect gates) also changes, but at a different rate. So the amount of time a particular signal takes to get from one place to another is a function of how many gates it goes through, how long the traces it traverses between gates are, and temperature. And anything interesting that happens, happens as a confluence of multiple different signals, each with its own temperature-dependent propagation delay, each gradually shifting the gate levels that other signals are controlled by. Oh, and the delay of a gate switch also changes depending on whether the incoming levels are both high, both low, or exactly how high and low each of them is. It's fiendishly difficult to deal with timing even at a constant temperature, and it gets harder if the temperature is variable... and since operating the circuit dissipates heat, the temperature is always variable.

A clocked circuit attempts to manage this problem (and others) by staging everything. A given cycle begins with all of the various signals in a known-high or known-low state, and then the clock strikes and the signals start to propagate. The delays are all variable, but the crucial thing is that the clock period is never shorter than the time needed for the worst-case combination of inputs with the slowest total propagation delay to settle, so all of the output levels have reached a sufficiently steady state before the clock ticks again and latches the values for the next cycle. Then the next cycle starts, again with a known and reliable set of inputs, with the expectation of another round of known and reliable outputs.
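The budgeting arithmetic behind that, as a toy Python sketch (every number here is invented):

```python
# Worst-case timing closure for an imaginary clocked circuit.
gate_delay_ns = 0.05    # per gate, nominal temperature
trace_delay_ns = 0.02   # per trace segment
temp_derating = 1.3     # slowdown factor at maximum rated temperature

# Slowest path: 40 gates and 40 trace segments in series.
worst_path_ns = (40 * gate_delay_ns + 40 * trace_delay_ns) * temp_derating

setup_margin_ns = 0.2   # time for the latches to capture reliably
period_ns = worst_path_ns + setup_margin_ns
print(f"min period {period_ns:.2f} ns -> max clock {1000 / period_ns:.0f} MHz")
# Every faster-than-worst-case path simply sits idle until the next tick.
```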

This is slow, and also significantly more profligate in its expenditure of energy (for reasons that are a bit beyond this level of discussion), but it works. It works the same way, every time. Unless the chip gets too hot, or damage occurs which changes the propagation delays too much, or an overambitious user speeds up (overclocks) the clock too much, and the occasional worst-case event winds up a little too shaky on one of its output levels, and an output gets latched the wrong way. Hilarity ensues.

People have been thinking about, tinkering with, and failing at analog circuits for a long time. This is not to say that analog stuff might not someday start to pay off, but it's a hard road littered with the failures of those who have tried before. I admit that I haven't kept up with the state of the art, but making analog circuits work reliably is a challenge on the order of inventing an entirely new computing technology, which it kind of is. Up until now, it was always easier to just say "shrink the feature size, that'll make it faster and lower power." We've pretty much run out of road there, so people are looking for other angles. Maybe analog will be such a thing? It'll be hard, but we have run out of easy.
posted by notoriety public at 5:22 PM on April 4, 2023 [34 favorites]


Modular synthesizers are basically analog computers.
Aside from the different connectors, tell me that these are not basically the same thing:
- Doepfer A-100
- Heathkit Analog Computer
Modular synthesizers have oscillators, amplifiers, splitters, multipliers, etc, etc, just like analog computers.
If you squint really hard, those nests of guitar pedals on stage are also analog computers.
posted by technodelic at 5:48 PM on April 4, 2023 [6 favorites]


And Kraftwerk’s early shows were frequently delayed while Ralf and Florian waited for the temperature to stabilize so they could tune their gear.
posted by Headfullofair at 6:02 PM on April 4, 2023 [9 favorites]


Feels more and more like we got bumped into the Hitchhiker's Guide to the Galaxy timeline.
posted by brachiopod at 7:17 PM on April 4, 2023 [2 favorites]


….or maybe Terry Gilliam's Brazil.
posted by brachiopod at 7:23 PM on April 4, 2023 [3 favorites]


Many people in engineering and computing say some version of “differential equations, ugh. Just survive it.” It’s a surprisingly anti-intellectual attitude from people who usually are quite proud of their quantitative skills.

I was one of those people in college, I barely got a C in diff eq despite As in every calculus course - a lot of us just have a hard time with higher-level math. I don't think that makes you anti-intellectual.
posted by photo guy at 10:03 PM on April 4, 2023 [6 favorites]


> 32-bit
Neural nets are often run in "half-float" FP16, and that level of precision is certainly within the realm of home-audio-quality equipment (not requiring medical & lab quality). 16-bit CDs are about as good as 30-inch-per-second wide-track tape (not 4-track 1/4", but the stuff they used for mastering), maybe with Dolby S or SR.*

There's a great paper by computer music genius Adrian Freed where he talks about doing analog synthesis on chips made with modern (for 1994) lithography. (only abstract available at that address)

Process and temperature variations affect all the components on an integrated circuit similarly, so if you structure your circuits right, you can make variations cancel out.

*long story longer: I interviewed w/ Dolby in the 90s who were looking to make hard disk recorders that would sync to projectors for reviewing sound+film edit/sync. They needed to at least match the fidelity of fullcoat 35mm magnetic tape. With Dolby SR, it was considerably better than CD and the good-enough-for-rock-n-roll audio equipment I'd been engineering previously, so it took some expectation re-setting for me to get it (disk bandwidth was so precious then).
posted by ASCII Costanza head at 10:39 PM on April 4, 2023


The "analog synth" world is not as analog as it used to be... Many of Eurorack modules look like their ancestors, but inside there are microcontrollers with ADCs on all of the inputs and DACs on all of the outputs. The actual module
effects are implemented in code inside the CPU. Filters, delays, sequencing, etc are all just functions and the modules are just a convenient physical interface to adjust the parameters to those functions.

Riffing on Hatashran's example, if you wanted to implement a low-pass filter in one of these modules, it is going to require lots of gates in the ALU to perform multiplication, division (or scaling), and addition, plus more gates for storing the state of the filter, plus tons of logic for the rest of the CPU. In comparison, a classic RC low-pass filter module might be nothing more than a resistor in series and a capacitor to ground.
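For the curious, a minimal Python sketch of the one-pole low-pass such a module might compute, with the smoothing factor derived from the RC time constant (sample rate and cutoff arbitrary):

```python
import math

def rc_lowpass(samples, cutoff_hz, sample_rate=48000):
    """One-pole IIR low-pass: the digital cousin of the RC filter."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)   # the R*C product for this cutoff
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)                 # smoothing factor
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x - y)               # same difference equation each sample
        out.append(y)
    return out

# Re-tuning is just a different argument -- no new hardware required:
tone = [math.sin(2 * math.pi * 100 * n / 48000) for n in range(480)]
smoothed = rc_lowpass(tone, cutoff_hz=200)
```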

However, all of those gates in the modern digital "analog" synth module can easily be re-tuned for different frequencies or fall-off rates, as well as turned into band-pass or high-pass or any other effect. And, as Headfullofair pointed out, they'll do it consistently over different temperature ranges where the RC circuit will vary due to temperature coefficients.
posted by autopilot at 3:18 AM on April 5, 2023 [1 favorite]


I don't remember where I got this from, but I always found this the clearest explanation of the difference between analog and digital computing:
An abacus is a digital computer, a slide rule is an analog computer.
posted by kmt at 6:04 AM on April 5, 2023 [6 favorites]


Many people in engineering and computing say some version of “differential equations, ugh. Just survive it.” It’s a surprisingly anti-intellectual attitude from people who usually are quite proud of their quantitative skills.

Lots of differential equation courses are basically weeks of "if this type of equation, use this solution". I was an applied math major, and although we generally looked down upon engineers who threw up their hands and complained whenever they had to take a math course taught by the mathematics department, differential equations was the one course we let them complain about. It gets tedious and boring really quick, and I was fortunate to have had a professor who shook up the standard curriculum a bit to give it some narrative and context.
posted by RonButNotStupid at 6:52 AM on April 5, 2023 [2 favorites]


To me, the distinction between digital and analog computing follows naturally from the distinction between integers and reals.

Integers are generalized counting numbers. They're about how many of a thing there are, and the job of identifying what ought to be included as a thing is left up to the user.

Reals are generalized measuring numbers. Their properties are such that any single measurement that can possibly be made can be manipulated as a real number without interference from inconvenient numerical artifacts like divisibility or being forced to choose between a number and its next highest neighbour. Reals don't have neighbours; between any two real values there is literally an infinity of others.

Any digital representation of a number (as opposed to a digital name for a number, like "e" or "π" or "√3") necessarily represents something from the integer - i.e. counting - family, because digital representations are constructed from digits, and in order for something to qualify as a digit it needs to be unambiguously recognizable as a member of a finite (and usually very small) set of possibilities.

Computing with real numbers digitally, then, can't be done directly. It has to be done by manipulating integer (strictly speaking, rational) approximations to those numbers. The IEEE-754 floating point standard describes one widely used method for representing those approximations, combining an integer count with an integer scaling factor. Floating point numbers usually make a reasonably good fist of allowing for a large number of other values to exist between any arbitrarily chosen pair of them and in many applications that's enough; that there isn't an infinity of those other values is a fact that with careful design can usually be swept under the rug.

This same distinction (counts vs measurements) is also at the root of why choosing to represent financial quantities as IEEE-754 floating point numbers is asking for trouble. Those numbers are designed to approximate the characteristics of reals, and using them to represent precise integer counts instead makes for some gnarly failure modes when counts exceed the precision of the approximations.
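Those failure modes are easy to demonstrate in Python, whose floats are IEEE-754 doubles:

```python
# Reals approximated: a tenth has no exact binary representation.
print(0.1 + 0.2 == 0.3)        # False
print(0.1 + 0.2)               # 0.30000000000000004

# Above 2**53, the integer "counts" themselves develop gaps.
big = float(2**53)
print(big + 1 == big)          # True: the count silently stalls

# Hence money wants integer cents or decimals, not binary floats.
from decimal import Decimal
print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True
```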

Digital computing's main advantage is its repeatability. If you have two (working!) digital computers of the same design, and you feed them identical inputs, and have them perform the same computations, you'll get identical outputs.

The same can't be said for analog computers. You generally won't get identical outputs from two putatively identical analog computers told to perform the same computations on identical inputs, because what analog computers are manipulating internally are in fact exactly measurements, and measurement is an inherently inaccurate operation.

The most commonly used physical analogs that an analog computer will use to represent a measurement are voltages and currents, but there's a new class of arguably-analog computers - spiking neural networks - that uses the time between and/or the frequency of discretely recognizable events instead.

Traditional digital computers also use time-based signalling for information representation and transfer (every serial protocol, from RS-232 to PCI-E, is an example of this) but again, all the information they represent and transfer this way is digital i.e. some kind of count. Feed a string of digits into one end of a cable and there's a shitload of good engineering devoted to making sure the same string of digits is retrievable from the destination end.

In a spiking neural network, the time between firing events is not constrained by quantization and, just as for the voltages and currents in a classical analog computer, there will be differences not only between the input measurements and their computational representations but between the computational representations themselves when the same computation is re-run - either on different hardware, or on the same hardware under slightly different physical conditions.
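An illustrative Python sketch of that re-run variability (the jitter figure is entirely made up):

```python
import random

def spike_encode_decode(value_ms, jitter_ms=0.005):
    """Represent a value as an inter-spike interval; timing jitter
    means the physical representation differs on every run."""
    return value_ms + random.gauss(0, jitter_ms)

random.seed(1)
print([spike_encode_decode(0.42) for _ in range(3)])
# Three slightly different answers to the same question; a digital
# re-run would reproduce the identical bits every time.
```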

Although analog hardware for a lot of computations is many orders of magnitude simpler than digital hardware capable of approximating the same computations, there is at least as much engineering skill required to ensure that any given analog computation is actually useful as there is for any given digital computation.

Stopping digital computations from wandering off into the weeds generally involves paying attention to precision of representation. Doing the same for analog computations will involve paying at least as much attention to accuracy of representation and devising computation methods designed to help inaccuracies cancel each other out rather than compounding. This is hard and gets rapidly harder the more intricate the required computation becomes.

Bottom line: you can compute anything with anything, but doing so efficiently requires engineering elegance, and if there's the possibility of a tidy match between the problem domain and the way the associated compute hardware is structured, it may well be worth exploring that.
posted by flabdablet at 8:06 AM on April 5, 2023 [3 favorites]


If one made a so-called analog IC with enough transistors to do something at the scale of a modern 'digital' IC, I think you'd still encounter the same physical limitations.

The point is that it almost always takes multiple orders of magnitude more transistors to process any measurement digitally than it does to do the same thing with analog electronics.

As a kid, I had a radio that proudly proclaimed on its casing that it contained seven transistors. It didn't do FM, because when I was a kid there was no FM broadcast radio in Australia, but it did medium wave and short wave and it took months of use before the four AA zinc-carbon cells it ran on went flat.

A digital processor that could do what that little radio did in the analog domain would require at least tens of thousands of transistors. In fact it would be very unlikely that any such design would ever actually get built with so few, given that chips with billions of transistors can now be had for less money than those seven would have cost at the time.

The tradeoff you get for embracing all that extra complexity is flexibility. A billion-transistor processor running a software-defined radio is capable of emulating pretty much any analog design just by tweaking a few lines in a configuration file. On the other hand, upgrading my old seven-transistor set to be able to receive FM broadcasts as well would involve adding almost all of an entirely new FM radio receiver inside it, overlapping the existing AM circuitry not at all. But there is no way a SDR is ever going to match something with only seven transistors inside it for battery life.
posted by flabdablet at 8:37 AM on April 5, 2023 [2 favorites]


When I was a physics major in the early 1980s, our department had an analog computer which was old at the time. It was extremely heavy and essentially unmovable. It was in a prominent location, but I don't recall ever seeing anyone use it. My recollection is vague, but I think you built circuits by plugging in patch cords, somewhat like an old-timey telephone switchboard. I don't recall how output was observed--oscilloscope? At the time I thought it was pretty neat and a viable way of solving complex systems of differential equations, which was a nontrivial problem on the 8-bit computers of the era.
posted by neuron at 8:50 AM on April 5, 2023


Analog circuitry with 32 bits of voltage resolution is difficult and expensive to implement. It might not even be possible in practice.

But in the digital realm much lower precision is common for ML and Neural Networks. A model may be trained at high precision (16/32/64 bits) but the real value is in running the network on much weaker hardware than required for training. This is done with precisions as low as 4 bits!

So an analog computer may be useful even if it is only precise and accurate enough to reliably discriminate between dozens or hundreds of voltage levels.
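A rough Python sketch of 4-bit linear quantization, the digital counterpart of asking an analog circuit to hold only sixteen distinguishable levels (weights invented):

```python
def quantize(weights, bits=4):
    """Map floats onto 2**bits evenly spaced integer codes."""
    levels = 2 ** bits - 1                  # 15 steps between min and max
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / levels
    codes = [round((w - lo) / scale) for w in weights]   # ints 0..15
    return codes, lo, scale

def dequantize(codes, lo, scale):
    return [lo + c * scale for c in codes]

w = [-0.81, -0.33, 0.02, 0.47, 0.90]
codes, lo, scale = quantize(w)
print(codes)                          # [0, 4, 7, 11, 15]
print(dequantize(codes, lo, scale))   # close to w: often good enough
```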
posted by jclarkin at 9:09 AM on April 5, 2023 [2 favorites]


A verification engineer takes another hit from their bong every time you say "let's try analog"
posted by credulous at 9:25 AM on April 5, 2023 [2 favorites]


"It's analog enough already," they say, wondering when their Liberty timing model will finish running. They grab their lighter and put their finger on the carb.
posted by credulous at 9:48 AM on April 5, 2023


While we are on this topic, here's my favourite analog computer: Michelson's harmonic analyzer. Michelson - of the Michelson-Morley experiment - came up with a mechanical computer (!) which can do Fourier analysis (!!).
posted by kmt at 11:11 AM on April 5, 2023 [5 favorites]


Ooo, pretty.

I've always rather admired the mad enthusiasm behind the creation of the MONIAC, even though the reliance of policymakers on the distribution-agnostic kind of macroeconomic model it exemplifies is quite plausibly a large contributor to much of the current era's apparently unavoidable misery.
posted by flabdablet at 12:00 PM on April 5, 2023 [1 favorite]


I like the analog/digital combination that is the 555 IC.

That author seemed pretty intense about the accuracy of digital over analog. I wonder how he feels about the digital accuracy of 1/3? (assuming binary)
posted by MtDewd at 2:05 PM on April 5, 2023 [1 favorite]


That author seemed pretty intense about the accuracy of digital over analog.

I suspect that in his worldview the distinction between accuracy and precision is a little muzzy.
posted by flabdablet at 1:57 AM on April 6, 2023 [1 favorite]


a little muzzy

Muzzy LIKES op amps...
posted by tigrrrlily at 7:58 AM on April 6, 2023


Yes, that's a Fourier analysis those kids are doing, but it's not a DSP with a sliding-window FFT, it's analogue RC filters and op-amps...
posted by seanmpuckett at 10:09 AM on April 6, 2023


You know, I learned more from reading the discussion here than from reading the overly long article. Metafilter at its best!
posted by Termite at 4:30 AM on April 7, 2023 [2 favorites]


here's veritasium on mythic earlier.

also btw:
  • MIT's Protonic Resistors Enable Deep Learning to Soar, In Analog - "According to the researchers, their resistors are a million times faster (again, an actual figure) than previous-generation designs, due to them being built with phosphosilicate glass (PSG), an inorganic material that is (surprise) compatible with silicon manufacturing techniques, because it's mainly silicon dioxide. You've seen it yourself already: PSG is the powdery desiccant material found in those tiny bags that come in the box with new hardware pieces to remove moisture."*
  • Hybrid Memristor AI Chips Could Scale - "2D materials plus CMOS compatibility unlock energy-smart neural networks."
posted by kliuless at 10:21 PM on April 10, 2023




This thread has been archived and is closed to new comments