

The Machine
June 16, 2014 2:46 PM   Subscribe

HP scaling memristor and photonic computing: "the device is essentially remembering 1s or 0s depending on which state it is in, multiplying its storage capacity. HP can build these chips with traditional semiconductor equipment and expects to be able to pack unprecedented amounts of memory—enough to store huge databases of pictures, files, and data—into a computer. In theory, that would remove the need for a conventional slow disk/fast memory system. With the Machine's main chips sitting on motherboards right next to the memristors, they can access any needed information almost instantly..."
New memory and networking technology requires a new operating system. Most applications written in the past 50 years have been taught to wait for data, assuming that the memory systems feeding the main computer's chips are slow. Fink has assigned one team to develop the open-source Machine OS, which will assume the availability of a high-speed, constant memory store.
also btw...
-HP to Take On IBM's Watson In Supercomputers
-Mesosphere: A Platform for the Next Generation Datacenter
-D-Wave's quantum computer shows direct evidence of quantum goodness [0,1]
posted by kliuless (66 comments total) 21 users marked this as a favorite

 
Why are memristors expected to be so much more dense and inexpensive per bit than DRAM? There's a reason our computers have orders of magnitude less RAM than disk (a 128x ratio in this system).
posted by jepler at 3:15 PM on June 16


the device is essentially remembering 1s or 0s depending on which state it is in, multiplying its storage capacity.

This sentence has been bugging me since I read it last week. What is being said here that differentiates a memristor from any other type of memory? The bridge between "remembering 1s and 0s" and "multiplying its storage capacity" appears to be "depending on which state it is in"...and I don't get it.

Also, is a memristor a type of non-volatile RAM, like flash or FRAM, or a whole other type of thing?
posted by mullacc at 3:18 PM on June 16 [5 favorites]


I envision a future where near-lightspeed fractal/quantum computers exist, but they are massively unprofitable (and therefore largely ignored) because no one can port Candy Crush Saga to its OS.
posted by Avenger at 3:22 PM on June 16 [5 favorites]


Oh is it that time in the memristor hype cycle again? Remember how we were supposed to get them in Early 2013™ last time?

Also, is a memristor a type of non-volatile RAM, like flash or FRAM, or a whole other type of thing?

It's a resistor whose resistance is determined by the last amount of current flowing through it. It's not very well grounded in physics and is the closest thing you can get to electrical engineering's version of a perpetual motion machine.

Remember when HP Labs used to develop cool shit and not just chase around silicon unicorns?
posted by Talez at 3:26 PM on June 16 [5 favorites]


I remember my first floppy disk. It had 104K of storage, and I wondered how I would ever be able to fill it up.
I remember my first hard drive. It had 20MB of storage, and I wondered how I would ever be able to fill it up.
I remember the first time I bought a gigabyte of storage all at once.
I remember the first terabyte.
I expect that one day I will buy a petabyte.

One thing has remained constant: the computer these were attached to had directly-addressable memory that was a fraction of the size of the floppies or hard drives that were attached. It isn't just that disk storage is cheap; it's also very readily scalable. I expect this pattern to continue even if we have cheaper persistent RAM.
posted by Joe in Australia at 3:26 PM on June 16 [1 favorite]


Memristors are nonvolatile, like flash, but can be individually written very quickly, are more durable, and are smaller and simpler and thus cheaper to fabricate than flash cells. At least according to HP's press releases.

The game changer is that, unlike any other form of nonvolatile storage, memristors can be used as a direct replacement for DRAM. All other forms of nonvolatile storage need to be copied to RAM for processing and updated in bulk. Thus, you have separate RAM for processing and bulk flash or disk for larger cached data.

With the bulk storage usable directly as RAM, you don't need to copy it for use; it's always ready for direct immediate use, which is an utterly new situation for operating systems. You could have a hundred applications all open and ready for immediate use at the same time. Applications wouldn't need to wrap their data in neat little bundles called "files" to preserve and later retrieve it. Intuitively it seems that such a system would have massive advantages due to the streamlining opportunities, but as we've never had such a system to work with it's a little hard to see exactly what those advantages would look like.

Anyway, that's what the big deal is more than price; it's likely memristor RAM will be more expensive than disk, just as Flash is, but presumably the streamlining advantages (much greater than those we get with flash disks) would make it worthwhile.
posted by localroger at 3:27 PM on June 16 [7 favorites]
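localroger's point about skipping the copy step can be sketched with today's closest analogue, a memory-mapped file: the program reads and updates a value in place in its own address space, and the value survives the process. (A minimal sketch only; the `counter.bin` filename is made up, and the file stands in for a persistent memory bus.)

```python
import mmap
import os
import struct

PATH = "counter.bin"  # hypothetical backing file standing in for persistent RAM

# Create a small region if it doesn't exist yet.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(b"\x00" * 8)

with open(PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 8)
    # Read the value in place, update it in place -- no read()/write()
    # calls, no "file format", just memory that happens to persist.
    (count,) = struct.unpack_from("<Q", mem, 0)
    struct.pack_into("<Q", mem, 0, count + 1)
    mem.flush()  # on true persistent RAM even this step would vanish
    mem.close()
```

Run it twice and the counter is 2: the "file" is really just memory with a lifetime longer than the process.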


Yeah this is weird because the first paragraph of the wikipedia article says that they cannot exist.
posted by Aizkolari at 3:28 PM on June 16 [1 favorite]


I'm torn. Wikipedia disagrees with a corporate press release and I don't know which to ridicule.
posted by localroger at 3:30 PM on June 16 [22 favorites]


Why are memristors expected to be so much more dense and inexpensive per bit than DRAM? There's a reason our computers have orders of magnitude less RAM than disk (a 128x ratio in this system).

DRAM only holds two states per transistor. In theory a memristor wire could hold as many states as could be measured accurately.
posted by Talez at 3:35 PM on June 16 [2 favorites]
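Talez is describing the multi-level-cell idea: a cell's information capacity grows with the log of the number of distinguishable states it can hold. A quick illustration (plain arithmetic, not memristor-specific):

```python
from math import log2

def bits_per_cell(levels: int) -> float:
    """Information capacity of one cell that can be reliably
    set to and read back in `levels` distinguishable states."""
    return log2(levels)

# A binary DRAM cell stores 1 bit; a cell with 16 distinguishable
# resistance levels stores 4 bits in the same footprint.
for levels in (2, 4, 16):
    print(levels, bits_per_cell(levels))
```

The catch, of course, is the "as could be measured accurately" part: each extra bit halves the margin between adjacent levels.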


Remember when HP Labs used to develop cool shit and not just chase around silicon unicorns?

Like DRM in inkjets and making the ink more valuable than Gold per ounce?

Or how they were abandoning Unix because Windows NT was going to bury Unix?
posted by rough ashlar at 3:36 PM on June 16 [3 favorites]


Remember when HP Labs used to develop cool shit and not just chase around silicon unicorns?

Silicorn Unicon™ is the name of my new company.
posted by George_Spiggott at 3:37 PM on June 16 [10 favorites]


This is William Gibson-level stuff. You kind of wonder what life is going to be like 10 years from now.
posted by KokuRyu at 3:38 PM on June 16 [1 favorite]


Yeah this is weird because the first paragraph of the wikipedia article says that they cannot exist.

"Martin Reynolds, an electrical engineering analyst with research outfit Gartner, commented that while HP was being sloppy in calling their device a memristor, critics were being pedantic in saying it was not a memristor."

So, uh, I still have no idea if they can exist or not.
posted by Pruitt-Igoe at 3:40 PM on June 16 [8 favorites]


Or how they were abandoning Unix because Windows NT was going to bury Unix?

Remember when people used to say "Unix on a phone!" as a joke?
posted by Talez at 3:40 PM on June 16 [10 favorites]


Most applications written in the past 50 years have been taught to wait for data, assuming that the memory systems feeding the main computer's chips are slow.
wut
posted by George_Spiggott at 3:42 PM on June 16 [1 favorite]


This is William Gibson-level stuff. You kind of wonder what life is going to be like 10 years from now.

More of Marines needing lasers to shoot down drones. Less personal freedom.
posted by rough ashlar at 3:45 PM on June 16


In theory, fiber could also replace Ethernet cables and link entire racks of servers together.
Um, fiber optics have been used to link computers together since at least the 1980s. They really need to get a more technical writer to do these articles, one who can understand and convey what is actually distinctive about what is being proposed, because this is another example of a statement that does not, as phrased, represent any sort of innovation.
posted by George_Spiggott at 3:52 PM on June 16 [8 favorites]


Yeah this is weird because the first paragraph of the wikipedia article says that they cannot exist.

And later describes how to implement them.

The papers they provide as proof that it cannot exist both discuss an "ideal memristor" which I guess would violate Landauer's principle (irreversible processing of information increases entropy), but since HP has built something, chances are they've built something that isn't quite ideal, and of course not everyone considers Landauer's principle impossible to get around either.

wut

Wut what? The previous paragraph talks about eliminating the gap between disks and internal memory. If your hard disks are as fast as your CPU, you have a very unusual computer.
posted by effbot at 3:54 PM on June 16 [3 favorites]


Most applications written in the past 50 years have been taught to wait for data, assuming that the memory systems feeding the main computer's chips are slow.
wut


What are you confused about? It's precisely for this reason that a user-space process has I/O buffers in user space, in kernel space, and on the disk controller itself. It's effectively why evented I/O has become so popular.
posted by invitapriore at 3:55 PM on June 16 [5 favorites]
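A minimal example of the evented style invitapriore mentions, using Python's standard `selectors` module over a local socket pair: the process asks the OS "what's ready?" instead of blocking on one slow device. (A toy sketch of the pattern, not any particular server.)

```python
import selectors
import socket

# Evented I/O in miniature: register interest in a descriptor and
# service it when the OS says it's ready, instead of blocking on it.
sel = selectors.DefaultSelector()
a, b = socket.socketpair()
b.setblocking(False)
sel.register(b, selectors.EVENT_READ)

a.send(b"hello")                  # data shows up "whenever"

data = None
for key, _ in sel.select(timeout=1.0):
    data = key.fileobj.recv(1024)  # only touched once it's ready

sel.close()
a.close()
b.close()
print(data)
```

All of this machinery exists because waiting for storage is assumed to be expensive; a flat persistent memory would make much of it unnecessary.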


It's a resistor whose resistance is determined by the last amount of current flowing through it.

Not an EE, but this sounds like the type of thing that would create fucktons of heat.

Am I totally off base? Because unless we're talking about really really minuscule amounts of current here, that sounds like a space heater.
posted by emptythought at 4:07 PM on June 16


In the Register article (second link) they mention a couple of other computing technologies on the horizon: advanced interconnect research at Fujitsu, and neuromorphic chips. Combine that with D-Wave's quantum computers from the last link, and it sounds like exciting times are ahead.
posted by Kevin Street at 4:07 PM on June 16


Excuse me, just cringing at the dopey phrasing, which is bad even for talking down to a nontechnical audience.

So basically what they've got is a form of nonvolatile DRAM that's dense and cheap enough to eliminate external storage. With 64bit addresses (just for example) one is theoretically able to address 16 exabytes of data (though not using current chips for a variety of architectural reasons), and yes, this would encourage you to write applications differently depending on a whole bunch of other changes relating to how that storage is managed.
posted by George_Spiggott at 4:10 PM on June 16 [1 favorite]
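The 16-exabyte figure above is just the size of a flat 64-bit byte-addressable space, which is easy to verify:

```python
# Size of a flat 64-bit byte-addressable address space.
address_bits = 64
total_bytes = 2 ** address_bits
print(total_bytes // 2 ** 60)  # 16 (exbibytes)
```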


The part about fibers is in a paragraph that talks about silicon photonics, which is optical circuitry at the chip level -- i.e. scaling down to communication between individual chips or even inside a chip, and up to server racks. That's a pretty new thing.
posted by effbot at 4:25 PM on June 16 [1 favorite]


So, uh, I still have no idea if they can exist or not.

It depends...largely on whether or not the cat in that box over there is dead or not.


NO PEEKING! 50% of the time curiosity is literally what kills Schroedinger's cat.
posted by yoink at 4:33 PM on June 16 [5 favorites]


Am I totally off base? Because unless we're talking about really really minuscule amounts of current here, that sounds like a space heater.

Yeah, if you ignore all the computatin' that's going on inside, any integrated circuit is basically a tiny space heater. All those heatsink fins aren't just for show.
posted by [expletive deleted] at 4:52 PM on June 16 [4 favorites]


enough to store huge databases of pictures, files, and data

Hold on, they're saying these things can store pictures, files, and data?!
posted by nanojath at 5:07 PM on June 16 [9 favorites]


But not mp3s. MPAA WINS!
posted by symbioid at 5:30 PM on June 16 [5 favorites]


Yeah, if you ignore all the computatin' that's going on inside, any integrated circuit is basically a tiny space heater. All those heatsink fins aren't just for show.

Well i meant more than normal DRAM or flash, both of which at this point don't really need external cooling solutions 9/10 times.
posted by emptythought at 5:47 PM on June 16


D-Wave's quantum computer shows direct evidence of quantum goodness

Eat crow skeptics.
posted by stbalbach at 6:11 PM on June 16 [1 favorite]


Remember when HP Labs used to develop cool shit and not just chase around silicon unicorns?

I really believe that a generation from now we will look back on the discovery of memristors as a watershed moment in the history of computer development. I am an EE. I do not work for HP.

HP is not your bitch.
posted by newdaddy at 6:16 PM on June 16 [6 favorites]


Back in the Apollo era, computers had core memory, which was non-volatile but destructive-read. You could turn off the power, and everything would still be there when you turned it back on. Every read destroyed its contents, but the memory controller would write the value back after each read, so you didn't really worry about it.

Memristors are almost identical in terms of functionality. Quick to read or write, persistent, and only a little bit of trouble to write. The nice thing is that they can be set up at DRAM densities, and right on the chip with compute elements. This means you can have a CPU with tons of fast, directly accessed memory, no cache, no need for I/O.

HP seems to be planning the perfect architecture for doing MapReduce jobs at insane speeds.
posted by MikeWarot at 6:21 PM on June 16 [1 favorite]
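The destructive-read-plus-write-back behavior MikeWarot describes can be sketched in a few lines (a toy model of the idea, not how any real controller is implemented):

```python
# Toy model of a destructive-read memory cell (core memory, and
# in a sense DRAM sense amplifiers, behave this way).
class Cell:
    def __init__(self, bit: int = 0):
        self.bit = bit

    def destructive_read(self) -> int:
        bit, self.bit = self.bit, 0   # reading clears the cell
        return bit

def controller_read(cell: Cell) -> int:
    """The memory controller hides the destruction by restoring
    the value immediately after every read."""
    bit = cell.destructive_read()
    cell.bit = bit                    # write-back
    return bit

c = Cell(1)
print(controller_read(c))  # 1
print(controller_read(c))  # still 1: the write-back preserved it
```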


This technology is a game changer for parallelism, which is awesome, because it's cheaper to add more cores than it is to push clock speeds appreciably higher and this sort of technology is exactly what we need to take advantage of all those cores. Data locality is a key issue in computing and anything that puts more data closer to your processors is a win. A lot of data really close is a big win. The scale of the undertaking is also significant and it will obviously take some time for it to be worked out in practice. It's not quantum computing level complicated, but it's maybe a third or half of the way on the road from here to there.
posted by feloniousmonk at 6:43 PM on June 16


I'm sorry for the prior reply; I apologize. The thing that I want to underline, though, is that HP has very deliberately supported a program of fundamental research, where most of its peers have long abandoned theirs. The memristor and the optical interconnect are both long-term efforts. Real research does not always hold to the schedule set for it.
posted by newdaddy at 6:49 PM on June 16 [8 favorites]


DRAM has about 1/100 the areal density of hard drive platters, and costs (did I slip a decimal place?) 3000x as much. How is memristor memory going to be 100x as dense as DRAM in the same process, and how will each area of memristor chip cost 1/30 as much as that DRAM? (or if it's merely 10x as dense, how will it cost only 1/300 as much per area?)

That's where I remain stuck with respect to these claims that memristor technology is going to obsolete DRAM and hard drives.
posted by jepler at 6:54 PM on June 16


Talez: Remember when people used to say "Unix on a phone!" as a joke?

No. And actually I would have loved to be the person who made that joke!
posted by wenestvedt at 6:56 PM on June 16


The density numbers I found so far are 20 Gigabytes/cm^2 as of last year for memristors. If this is in the right ballpark, you should be able to have a 64 bit processor with 16 gigabytes of directly accessed fast and wide RAM that survives a power cycle... for a few bucks. No cache lines, disk I/O, or anything to get in the way of computing.

Imagine a cluster of these things running MapReduce. You could have a machine with a terabyte of data that gives any answer about all of it in less than a millisecond, consuming a few watts in the process.
posted by MikeWarot at 7:26 PM on June 16 [2 favorites]
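As a toy illustration of the MapReduce pattern MikeWarot describes (word counting over records assumed to already sit in directly addressable memory, so there is no disk I/O step at all; this is not HP's actual software):

```python
from collections import defaultdict

# Classic map -> shuffle -> reduce over in-memory records.
def map_phase(records):
    for rec in records:
        for word in rec.split():
            yield word, 1

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

data = ["the machine", "the memristor machine"]
counts = reduce_phase(shuffle(map_phase(data)))
print(counts)  # {'the': 2, 'machine': 2, 'memristor': 1}
```

On a conventional cluster the shuffle step is dominated by moving data between disks and networks; with all data resident in fast persistent memory, that bottleneck largely disappears, which is presumably MikeWarot's point.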


Memristors are basically "fucking in the streets" level of exciting for computer engineers. If it becomes financially viable, we'll be able to do bigger and better "big data" kinds of stuff.
posted by boo_radley at 7:27 PM on June 16 [5 favorites]


O wow memristors on Metafilter! I'm using my brother's account, but I do research on resistive switching (the phenomenon behind memristors.) The debate over HP and what a memristor is has been going on for a while. This guy does a great job of explaining it:

http://tikalon.com/blog/blog.php?article=2012/memristor
posted by SouthCNorthNY at 7:36 PM on June 16 [13 favorites]


I don't know anything about memristors. I did sit in on an Intel presentation a few weeks ago about very very fast, processor-attached SSDs that can do much of the same thing. So there's going to be some variant of this in the very near future. Like with most computer engineering innovations, software is going to be the sticking point -- it would take an enormous amount of work to get all the common IO intensive applications updated.
posted by miyabo at 7:40 PM on June 16


Meanwhile, numbers I found for 2009 DRAM are under .3 gigabyte / cm2 ("Hynix's 1Gbit DDR2 SDRAM achieved an impressive chip size of 45.1mm², only 2.7 percent larger than Samsung's 1Gbit DDR2 SDRAM"). Even if they doubled it twice in 4 years, that's more than 10x as dense as DRAM, but less than 20x. 80GB of nonvolatile storage for the price of 8GB DRAM is nice, but not nice enough to let me get rid of hard drives.
posted by jepler at 7:41 PM on June 16
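jepler's arithmetic can be spelled out, using the 45.1 mm² / 1 Gbit Hynix figure here and the ~20 GB/cm² memristor figure quoted upthread:

```python
# Hynix 1 Gbit DDR2 in 45.1 mm^2 (2009 figure quoted above).
gbit = 1.0
area_cm2 = 45.1 / 100.0            # mm^2 -> cm^2
gbyte_per_cm2 = (gbit / 8) / area_cm2
print(round(gbyte_per_cm2, 3))     # ~0.277, i.e. "under .3 gigabyte/cm^2"

# Double that density twice (two process generations), then compare
# against the ~20 GB/cm^2 memristor figure quoted upthread.
ratio = 20.0 / (gbyte_per_cm2 * 4)
print(round(ratio, 1))             # ~18x: "more than 10x, but less than 20x"
```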


Hybrid Crossbar Architecture for a Memristor Based Memory

Ouch, the crossbar design of memristor RAM has ~2500x the write energy requirement of SRAM, including SRAM leakage current. The areal density advantage of the memristor device quickly decreases with their hybrid approach, though happily the write energy also decreases. Unfortunately, their 4x4 hybrid has lower areal density than the 2009 DRAM I mentioned above (1.98 Gbit/cm^2 vs ~2.2 Gbit/cm^2 for the Hynix DRAM's 1 Gbit in 0.451 cm^2); the full crossbar design is 5.6x the areal density.
posted by jepler at 7:54 PM on June 16 [2 favorites]


Metafilter: "fucking in the streets" level of exciting for computer engineers.
posted by localroger at 8:12 PM on June 16 [9 favorites]


The major problem with memristors is how quickly they wear out, and they never talk about that in the marketing material. Last time I spoke to a researcher in the field he said (when asked) that they burned out after less than a hundred write cycles. That's a few years ago now, but if they were approaching the durability of DRAM, or even flash, I think we'd hear about it big time.
posted by Herr Zebrurka at 9:13 PM on June 16 [1 favorite]


jepler: but it's not just DRAM, it's like a L1 or L2 cache, but without the cache. Imagine having 80GB of memory on the processor. For perspective, the latest Intel Haswell processors have 128MB of L4 cache. If true, it really would be a game (OS) changer.
posted by furtive at 9:50 PM on June 16 [4 favorites]


There are things very similar to memristors (MRAM and phase change memory) that you can buy today. They are non-volatile and as fast as SRAM, but less dense and more expensive than Flash. The companies that sell these are no doubt also working on scaling it up, but I guess HP is winning the press release battle.
posted by eye of newt at 11:58 PM on June 16


Via my correspondent on The Continent:

“Finding the Missing Memristor” [47:51]—R. Stanley Williams
R. Stanley Williams from HP Labs gives a keynote presentation on memristor technology at the UC San Diego Center for Networked Systems Winter Research Review 2010.
posted by ob1quixote at 12:24 AM on June 17 [1 favorite]


This guy does a great job of explaining it

Also a better description of the "ideal" vs. "HP" memristor I mentioned earlier: "Hewlett Packard's memristor device behaves mathematically as a memristor, but its action is via a chemical effect and not the faster, and more useful, electromagnetic effect of Chua's original proposal."

(however, I think the latest iteration is using a non-chemical effect -- maybe someone who actually knows this stuff can elaborate.)
posted by effbot at 2:49 AM on June 17


If they could bring a memristor product to market at a better price/performance (including power usage, not just speed) ratio than existing flash memory technology, they don't need The Machine to sell it. If it can compete with DRAM and stay compatible with legacy systems, you don't need The Machine, but it starts to make sense. If you build it, they will come.

HP seems to be betting that these memristors will get built at some point, at which time it will sure be nice to have some key patents and know-how.
posted by delegeferenda at 3:03 AM on June 17


That last link also comments on the space heating aspect, btw:

"Williams says the Mott memristor can be both fast and efficient, because the active volume is very small, with a typical dimension of 30 nanometers. Even though the temperature swing may be as much as 800 Kelvin, the switching time is nanoseconds, and the energy dissipated is femtojoules."

(fwiw, Wolfram Alpha helpfully tells me that 200 million femtojoules is roughly equivalent to the kinetic energy of a flying mosquito)
posted by effbot at 3:11 AM on June 17 [2 favorites]


Memristors are real and do work, and the physics is much better understood now than, say, three years ago - I had a very interesting afternoon in a lab in Exeter University with a prof who was working on phase-change memories, which are closely related but not controversial. It's good physics.

The problem for any new technology that aims to take over an established technology - or, in the memristor's case, a set of established technologies, as it has been touted as the mythical 'universal memory' that can replace everything from tape to in-CPU registers - is that it's not good enough just to be better. You either have to be SO much better than the current stuff that you create a huge practical advantage for an expensive niche which will fund ongoing development and drive prices down and performance up, or be cheaper in your first incarnation than the opposition - which has had many years of enormous investment in R&D and production, and is entrenched in the ecosystem. And which has huge resources to respond to any attack. Your best bet, which is what Intel's good at and may be doing with its own silicon photonics, is to engineer your game-changer so it fits as well into the current production infrastructure as possible, so you can leverage all that investment. Not many people can do that.

Look how long it took LCDs to kill CRTs; look how long it's taking OLED to make a dent in LCDs, if it ever really does. How many new memory technologies have there been, how many processor architectures, how many 'better' overall systems than the x86/IBM bodge job? Remember UWB? Gallium arsenide?

I'd love HP's Big Vision to come true and reinvent at least one level of computing. But you have to go back a long way to find any examples of new architectures evicting incumbents. ARM's sorta doing it, but it's taken 30 years and last time I looked, x86 ain't dead yet...

Yrs, a still-bitter ex-Amiga owner.
posted by Devonian at 4:01 AM on June 17 [2 favorites]


how many 'better' overall systems than the x86/IBM bodge job? Remember UWB? Gallium arsenide?

A lot of this is because the 'old' technology keeps getting better. GaAs isn't needed for many applications because the capabilities of CMOS have improved so much.

If memristors provide a higher-density, faster, reliable, lower-power, lower-cost replacement for DRAM and flash or other next-gen memory technology - with a negligible error rate - it will come to market very quickly, I would think.
posted by Golden Eternity at 6:59 AM on June 17


From the 2007 D-Wave thread:

"I suspect Dwave is deluded if not an outright hoax." -- Osmanthus

"First off, I agree with Osmanthus. I'm ridiculously highly suspicious of technology claims made to the mass public." -- muddgirl

"I've covered quantum computing for a number of years -- in fact, my latest book is on information theory and discusses quantum information and quantum computation. I'm quite skeptical." -- cgs06

"I'm highly skeptical as well." -- justkevin

"cgs06: It's comments like yours that make MetaFilter occasionally so damn awesome. Thanks for the insight." -- TBoneMcCool

"The smartest guy I know in the field is Dave Bacon. In short: He's skeptical." -- jeffamaphone

"I am cautiously hopeful. " -- stbalbach
posted by stbalbach at 7:30 AM on June 17 [2 favorites]


The 2012 D-Wave thread is a masterclass in unbalanced skepticism. It was clear by 2012 this was (most likely) going to be the real deal.
posted by stbalbach at 7:55 AM on June 17


There was an article I read about this years ago (no clue where it went...Bueller?) ...apparently, this memristor is one of the four 'basic' or 'ideal' electrical components (I remember that one of the other three was wire and one was something we already have...diode?) and all the components we currently use, transistors, capacitors, and whatnot, are merely crude workarounds. Of course, for all I know 'crude workarounds' might be overstating it a bit, like mathematicians telling us we need a 23-cent piece and an 18-cent piece in order to make 'ideal' change...true, but impractical. But folks have been buzzing about memristors for a while now, so I can't wait to see how this works out, especially if it means more energy-efficient data centers.
posted by sexyrobot at 1:27 PM on June 17


I haven't followed d-wave closely either, but as recently as this spring there are still duelling papers about whether d-wave is a "quantum computer": Wikipedia:
In January 2014, researchers at UC Berkeley and IBM published a classical model explaining the D-Wave machine's observed behavior, suggesting that it may not be a quantum computer.[44]

In March 2014, researchers at University College London and USC published a paper comparing data obtained from a D-Wave Two computer with three possible explanations from classical physics and one quantum model. They found that their quantum model was a better fit to the experimental data than the Shin-Smith-Smolin-Vazirani classical model, and a much better fit than any of the other classical models.
posted by jepler at 3:03 PM on June 17 [1 favorite]


The whole point is the May 2014 paper linked in the FPP which obviously postdates those earlier papers. From Wikipedia:
In May 2014, researchers at D-Wave, Google, USC, Simon Fraser University, and National Research Tomsk Polytechnic University published a paper containing experimental results that demonstrated the presence of entanglement among D-Wave qubits.
posted by stbalbach at 3:41 PM on June 17


whoops, yes, lost that when I tried for brevity.
posted by jepler at 4:50 PM on June 17


Promises, promises.
posted by Twang at 5:02 PM on June 17


Thanks to ceribus peribus for the favorite today on my old memristor front page post. I'm guessing this thread prompted it.

miyabo, the Intel tech is promising, but it's nowhere near as ambitious as this. That tech is about reducing latency (single bit round trip time) to otherwise conventional flash memory as found in today's SSDs. IBM's new X6 Intel servers are perhaps the first to ship with SSDs attached to the CPU's memory bus in place of some of the memory boards. The benefit is that compared to conventional SSDs (or, heaven forbid, hard drives) latency is very low and bandwidth is very high. That's a big performance bump, but it doesn't change things enough to make flash as fast as RAM, not even close. So the traditional memory hierarchy (registers, cache, RAM, SSD/HD) is still intact and no revolution in what computers do or how they do it is in order. Plus you can't boot off that storage, so it'd be a pain for home users.

Also, since modern CPUs talk directly to both RAM and PCIe without going through a platform controller hub/northbridge, I'm not as sure as I once was that the IBM approach will be much different from an SSD attached via PCIe with NVME. And those can be a boot medium. Which is nice.
posted by NortonDC at 7:16 PM on June 17 [2 favorites]


stbalbach: It was clear by 2012 this was (most likely) going to be the real deal.

I disagree with this. D-Wave had low scientific credibility until recently, and for good reasons. When they published a paper on quantum annealing in 2011, their results were very unimpressive compared to the claims they had already made in press releases several years earlier. Still, it was seen as a good step forward that they engaged with the scientific community and published in a peer-reviewed journal.

More recently though, they have reached out to some scientists outside the company and invited them to design tests for their machines. The most interesting D-Wave news to me is a paper from such a collaboration, which was published in Nature Physics earlier this year. One of the external authors was the opponent at my PhD defense, so I know him a little bit. He is among the very top researchers in the field of superconducting qubits and has a strong presence in that community. His name lends the work a lot of credibility in my eyes.

That paper did not show that the D-Wave was either better or faster at solving problems than the classical algorithms they compared it to. For problems of the size they investigated, this was not to be expected either. They also couldn't make any strong projections on whether the D-Wave machine would gain an advantage for larger problems, as is the hope and plan. What they did see was that the D-Wave's probabilities of correctly solving different problems resembled those you'd expect in a quantum process more than a classical one. It's not an extremely strong result, but nonetheless an interesting one.

I think it's still not clear if D-Wave will be the real deal, but these days it's worth paying attention to what they do. They are still seen with quite a bit of suspicion after years of being heavy on marketing and light on science, but they have clearly made some technological progress.
posted by Herr Zebrurka at 9:38 PM on June 17 [2 favorites]


The secrecy is understandable given the IP, and so is the marketing given the realities of the business world. Of course the outside experts are skeptical because they don't have access to test the claims, but inside experts did have access (with an NDA), and given the pedigree of the customers and investors it's unlikely D-Wave was fooling anyone so late in the game.
posted by stbalbach at 10:36 PM on June 17


this memristor is one of the four 'basic' or 'ideal' electrical components (I remember that one of the other three was wire and one was something we already have...diode?)

Resistor, capacitor, inductor, and now memristor.
posted by newdaddy at 5:04 AM on June 18


The only thing I hope for from the memristor is this: that "rebooting" becomes anathema, because Storage==Memory, and these new computers are both "always on" and "not running when not needed". I want to see the day when there is no distinction between active working memory and data storage.
posted by Xyanthilous P. Harrierstick at 7:01 AM on June 18 [1 favorite]


That paper did not show that the D-Wave was either better or faster at solving problems than the classical algorithms they compared it to.

Didn't someone beat their performance with a Python program running on a Raspberry, or something like that?

(but as you say, that doesn't have much to do with whether it's actually a quantum thing or not.)

That "rebooting" becomes an anathema

Of course, "rebooting" also means bringing the system back to a (hopefully) known state, so chances are that we'll be stuck with that for quite some time...
posted by effbot at 11:33 AM on June 18 [1 favorite]


-'a datacenter in a refrigerator'
-Datacenter of the Future
-Forging a Qubit to Rule Them All: "Construction is now under way on a new information-storing device that could become the building block of a robust, scalable quantum computer."
posted by kliuless at 8:42 PM on June 18 [1 favorite]


-'a datacenter in a refrigerator'

A refrigerator in a datacenter!
posted by miyabo at 9:51 PM on June 19



