"Yeah, this stops Spectre and Meltdown very easily"
June 1, 2021 8:34 AM   Subscribe

Morphing computer chip repels hundreds of professional DARPA hackers (New Atlas): Engineers have designed a computer processor that thwarts hackers by randomly changing its microarchitecture every few milliseconds. Known as Morpheus, the puzzling processor has now aced its first major tests, repelling hundreds of professional hackers in a DARPA security challenge. Shape-shifting computer chip thwarts an army of hackers (The Conversation): Hackers need to be intimately familiar with the details of the microarchitecture to graft their malicious code, or malware, onto vulnerable systems. To stop attacks, Morpheus randomizes these implementation details to turn the system into a puzzle that hackers must solve before conducting security exploits. From one Morpheus machine to another, details like the commands the processor executes or the format of program data change in random ways. Because this happens at the microarchitecture level, software running on the processor is unaffected.

Morpheus Turns a CPU Into a Rubik’s Cube to Defeat Hackers: University of Michigan’s Todd Austin explains how his team’s processor defeated every attack in DARPA's hardware hacking challenge (IEEE Spectrum):
Spectrum: Is there an immediate next step, research-wise or in the company?

Todd Austin: The thing we really discovered with Morpheus was this idea that always-encrypted pointers in code really throw a wrench in the attacker’s ability to understand what’s going on. And we use churn as a mechanism to try and make it even harder. So where we’re going from here is we’re going to continue to embrace the idea of always-encrypted information to protect it from attackers. In addition to pointers, we’re also going to start looking at other data types: strings, integers, floating point values. [And the bottom line is] building computation frameworks around those always-encrypted values and then trying to figure out: “How do we stop people from understanding what is inside that computation?”

That’s where we’re going. We’re expanding the range over which a machine like Morpheus can do encrypted computation on encrypted information. And we’ll then try to understand: What is the impact on programming? What kind of challenge does it create for attackers? What new forms of computing can we create there?
Morpheus: A Vulnerability-Tolerant Secure Architecture Based on Ensembles of Moving Target Defenses with Churn (abstract)
posted by not_the_water (30 comments total) 25 users marked this as a favorite
 
Can someone who's more up-to-speed on the state of malware explain how big of a deal this is?

I was under the impression that a fair amount of malware, viruses, etc. are effectively "legitimate" by way of the user installing it in some way. Either as a part of another shady program, clicking a link, downloading and opening a file, etc.

Can this chip do anything to combat your child or other un-savvy relative downloading a "computer optimizer" program or whatever they clicked?
posted by explosion at 8:48 AM on June 1 [1 favorite]


i for one am certain this is occurring exactly as they say and there is no sensationalistic exaggerations in this reporting
posted by glonous keming at 8:58 AM on June 1 [32 favorites]


The future is fucking wild.
posted by escape from the potato planet at 9:05 AM on June 1 [2 favorites]


Achilles: But when I last heard about your rivalry, it seemed to me you had at last come into possession of an invincible phonograph -- one with a built-in TV camera, minicomputer and so on, which could take itself apart and rebuild itself in such a way that it would not be destroyed.

Crab: Alack and alas! [...]
posted by clew at 9:15 AM on June 1 [32 favorites]


I'm guessing this is meant to deal with buffer overflows and memory exploits, which make up a large portion of the weaknesses that wind up getting patched in security updates.

This doesn't handle trojans, phishing, social engineering, XSS attacks, script injection, or any higher-level things, but those mostly involve attacking people, instead of attacking code.

The company I work at, Polyverse, is doing the same thing at the software level, and we're working on something to deal with XSS and script injection as well. We're accomplishing it using a custom compiler that scrambles code, instead of scrambling the architecture, but keeps that code compatible with binaries that our compiler hasn't touched.

(Mods, if the above goes against site rules, feel free to delete this comment)
posted by fnerg at 9:16 AM on June 1 [12 favorites]


It sounds like a newfangled sort of ASLR, except designed to not fail in the ways that ASLR seems to always fail.
posted by BungaDunga at 9:16 AM on June 1


>Can this chip do anything to combat your child or other un-savvy relative downloading a "computer optimizer" program or whatever they clicked?

As a programmer (not security specialist), and after reading the linked article and the paper's abstract, but not the full paper:

It depends. There are many types of malware and avenues of attack. One avenue is for a malicious program to start running as a program with "guest" permissions, and then use an exploit to give itself full admin permissions. Once it has full permissions, it can do bad stuff. Morpheus helps protect against those exploits. So if you are already practicing good habits and not letting your child use an admin account, Morpheus will help prevent some attacks. But if you let your child run programs with full access, then the malware already has all the access it needs to do bad stuff.
posted by mrgoldenbrown at 9:19 AM on June 1 [3 favorites]


Engineers have designed a computer processor that thwarts hackers by randomly changing its microarchitecture every few milliseconds.

I think the engineers were named Geordi and Data, and the hackers were called "The Borg".
posted by AlonzoMosleyFBI at 9:34 AM on June 1 [9 favorites]


I think the engineers were named Geordi and Data, and the hackers were called "The Borg".

Yeah, AlonzoMosleyFBI beat me to it: to my techspec-challenged brain, this screams "randomly modulate the frequency of the shields and phasers"
posted by Saxon Kane at 9:41 AM on June 1 [15 favorites]


Yeah, this doesn't do anything all that useful in the face of actual malicious code running on people's systems.

There are a few classes of security flaws I can think of off the top of my head:

1. Low-level hardware hacks. Stuff that abuses chip details like branch prediction or memory access patterns. I can see this helping here.
2. Code that 'escapes' from where it's supposed to be. Bad input overflows a buffer and convinces the system to execute code that wasn't intended. There are existing defenses here, but the underlying issue is that software routinely loads code from files and executes it. The vast majority of security issues fixed by software patches live in this category.
3. An app the user runs that is just evil, but from the point of view of the operating system it is just another app doing stuff. "Free wallpapers!", user downloads, runs, and the app installs just like any other legit app and does what legit apps can do (start on boot, access your files, spy on you, display windows with ads, etc).
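To make class 2 concrete, here's a toy Python sketch (my own invention, not from any of the linked articles): a "stack frame" modeled as a flat list of memory cells, where a copy routine that trusts the input length writes past the buffer and clobbers a simulated return address.

```python
# Toy model of a stack frame: cells 0-7 are a buffer, cell 8 holds the
# saved return address. An unchecked copy smashes right through.

def make_frame():
    return [0] * 8 + [0xDEAD]  # 0xDEAD stands in for the real return address

def unchecked_copy(frame, data):
    # Bug: no bounds check against the 8-cell buffer.
    for i, byte in enumerate(data):
        frame[i] = byte

def checked_copy(frame, data):
    # Fix: refuse input longer than the buffer.
    if len(data) > 8:
        raise ValueError("input too long for buffer")
    for i, byte in enumerate(data):
        frame[i] = byte

frame = make_frame()
evil_input = [0x41] * 8 + [0xBEEF]   # 9 cells: one past the buffer
unchecked_copy(frame, evil_input)
print(hex(frame[8]))  # prints 0xbeef -- the return address is attacker-controlled
```

In a real attack, the overwritten return address points at code the attacker chose, which is exactly the step that pointer encryption and churn are meant to frustrate.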

When you release software, what you end up with is a set of machine code (assembly) instructions. Very low level stuff like "load memory from here", "add number to that value", "if it's greater than 0, go to this location in the code, otherwise continue on", "call this other function". This is on the hard drive as if it were any other file, just instead of text or an image, it contains a program. An Intel desktop chip has a different way of expressing those commands than your iPhone, although they both have very similar sets of what they can do.

The microarchitecture of a CPU handles those machine code instructions and executes at an even lower level, managing stuff like pre-fetching memory it thinks you'll need soon, rewriting addresses so that several programs can all coexist without knowing about each other, and so on. But to be compatible with a normal Intel chip, you need to behave like a normal Intel chip, which includes executing the same way. And that model of execution is the root problem of most software vulns. It's not the CPU doing something surprising, it's the CPU doing exactly what you told it, and doing it correctly - the software it was running just got tricked into executing an attacker's code. You can't fix this without being a different CPU design (that doesn't run Windows or your existing software).

To be fair, there are a few modern attacks that abuse the CPU's internal design, and trick some caching mechanisms into leaking data that it shouldn't. Those are rarer, and trickier to exploit, and not what normal malware is doing. A fancy CPU design could help with this.
posted by cschneid at 9:45 AM on June 1 [9 favorites]


So, these are the kind of chips that SkyNet will be running on, then?
posted by Slothrup at 9:46 AM on June 1 [7 favorites]


Seconding/thirding what others have noted — I'm only really security adjacent via some work on a large-footprint open source project, but there are many different classes of security exploits. Some rely on programs just having permission to do things they shouldn't, like modifying other programs without user intervention. Others rely on convincing the user to give them permission to do things. And others — critically — rely on exploiting very subtle aspects of microprocessor behavior to do an end-run around security mechanisms that are supposed to be baked in at the operating system level. This doesn't solve all problems, but that last category gets trickier because it's constantly shuffling around those "very subtle aspects" such that an attacker can't depend on any specific chip executing the same instructions in the same way.

That is, naturally, a gross oversimplification, but I think it gets the gist of it?

(on edit: cschneid said it clearer and better upthread, thanks!)
posted by verb at 9:48 AM on June 1


Is this very different than the sandboxing and virtualization layers we already use to prevent data execution (buffer overflow, stack smashing) attacks?

And what's the performance penalty involved in constantly switching the architecture being used for execution? Presumably a lot of compile-time optimization goes out of the window?
posted by snuffleupagus at 9:51 AM on June 1 [1 favorite]


The IEEE interview says about a 10% performance loss.
posted by clew at 9:55 AM on June 1 [8 favorites]


I have only skimmed the paper linked from the abstract, so I could be talking nonsense, but the idea seems sound enough, if very tricky to retrofit to existing processor designs. It relies on tagging and keeping track of the areas of memory that store pointers and code, and having a separate piece of the CPU continually "churn" those areas with updated encryption. The linker is modified to generate executables with these tags, and the CPU is modified so that the churn can happen transparently.
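As a toy illustration of the idea (my own sketch, nowhere near the real hardware design): pointers are stored XOR-encrypted under a key, and a churn step periodically re-keys every tracked pointer, so any ciphertext an attacker manages to leak goes stale.

```python
# Toy sketch of "always-encrypted pointers" with churn. The XOR cipher
# and the PointerStore class are illustrative inventions, not Morpheus.
import secrets

class PointerStore:
    def __init__(self):
        self.key = secrets.randbits(64)
        self.slots = {}            # name -> encrypted pointer value

    def store(self, name, addr):
        self.slots[name] = addr ^ self.key

    def load(self, name):
        return self.slots[name] ^ self.key

    def churn(self):
        # Re-encrypt everything under a fresh key; plaintext addresses
        # are unchanged, but every stored ciphertext changes.
        new_key = secrets.randbits(64)
        for name, enc in self.slots.items():
            self.slots[name] = (enc ^ self.key) ^ new_key
        self.key = new_key

store = PointerStore()
store.store("ret_addr", 0x7FFF1234)
leaked = store.slots["ret_addr"]            # what an attacker might exfiltrate
store.churn()
assert store.load("ret_addr") == 0x7FFF1234 # the program still works
assert store.slots["ret_addr"] != leaked    # the leaked ciphertext is stale
```

The cost in the real design is that churn has to touch every tagged region, which is presumably where that claimed ~10% slowdown comes from.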

They claim a slowdown of only 10% but I bet that is heavily dependent on how large the code and pointer segments are. For benchmarking software, these are probably small so not much is being churned. For general purpose software these could be massive.

A few observations: This looks like a heavy hammer designed to fix Return Oriented Programming and stack-smashing games once and for all. Good luck to them but it seems like a lot of work for only a little gain. Stack smashing is still a thing but modern programming techniques (even in C++) go a long way to preventing them from happening. Maybe running the kernel in churned mode while keeping the rest of the system normal would make sense.

I was slightly amused to see that they have introduced another cache into the CPU. I wonder how exploitable that is for leaking information.
posted by AndrewStephens at 10:19 AM on June 1 [5 favorites]


Do you want gray goo? Because this is how we get gray goo.
posted by TheWhiteSkull at 10:19 AM on June 1 [3 favorites]


Is this very different than the sandboxing and virtualization layers we already use to prevent data execution (buffer overflow, stack smashing) attacks?

Yes. To oversimplify, virtualization just creates an environment to run software inside another container. At the basic level, a software exploit within a virtual environment affects only that particular virtual environment, and other environments within the same container shouldn't be affected. When an exploit is detected you can isolate or kill the affected environment and stop the exploit vector (at least until the next similar environment is exploited or you can patch them all). Sometimes it's possible for an exploit to compromise the container itself, in which case there's a bigger problem. This is usually still just the category of software vulnerabilities, though.

CPU-level exploits like Spectre or Meltdown are vulnerabilities in hardware, not software. It doesn't matter what OS or container you're running on affected hardware; the exploits take advantage of a hardware design that's supposed to speed up processing called speculative execution (to simplify that: CPUs are fast enough that in many cases they sit around waiting for programs to tell them what to do next. Speculative execution guesses at the likely next things and just goes ahead and does them all instead of twiddling its thumbs, and then when a program says "give me X" the result already exists and is handed back immediately). In an attack against speculative execution, a process will fill the CPU's execution queue with distinct values and then go looking for those values and their likely products in CPU memory registers, effectively creating a map of where the CPU stores data it's working on. With that map an exploit can read data that belongs to other processes, even when it's supposed to be secure.

A CPU that remaps itself and changes encryption on the fly should, by design, be impervious to the class of attacks that includes Spectre and Meltdown, because even if you know what distinct values you're seeding, you don't know how they're encrypted or where they're stored. In the time it takes to break the encryption the storage locations should have changed. And yes, this introduces a potential class of vulnerabilities if the encryption is flawed or the storage locations aren't really random, and randomness is hard so that possibility isn't entirely out of the question.
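Here's a crude simulation of that race (the numbers and the model are invented for illustration, not taken from the paper): an attacker scans memory slots for a secret location and then needs additional time to weaponize the find, while the defender periodically relocates the secret.

```python
# Back-of-envelope simulation: stale knowledge of a location is useless
# if the layout re-randomizes faster than an exploit can be finished.
import random

def attack(n_slots, churn_period, exploit_time, rng):
    """One attack attempt: scan for the secret slot, then spend
    exploit_time cycles weaponizing it. Churn moves the secret."""
    secret = rng.randrange(n_slots)
    t = 0
    found = None
    for slot in range(n_slots):            # phase 1: locate the secret
        t += 1
        if churn_period and t % churn_period == 0:
            secret = rng.randrange(n_slots)
        if slot == secret:
            found = slot
            break
    if found is None:
        return False
    for _ in range(exploit_time):          # phase 2: build the exploit
        t += 1
        if churn_period and t % churn_period == 0:
            secret = rng.randrange(n_slots)
    return found == secret                 # stale knowledge fails

rng = random.Random(0)                     # fixed seed: repeatable demo
trials = 2000
static_wins  = sum(attack(64, 0,  100, rng) for _ in range(trials))
churned_wins = sum(attack(64, 50, 100, rng) for _ in range(trials))
print(f"fixed layout:   {static_wins}/{trials} attacks succeed")
print(f"churned layout: {churned_wins}/{trials} attacks succeed")
```

Against the fixed layout every scan eventually wins; with churn faster than the exploit-building time, almost all attempts come up empty.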
posted by fedward at 10:26 AM on June 1 [12 favorites]


Shape-shifting computer chip thwarts an army of hackers

Buys "liquid metal terminator" on Wish; receives a package with this instead.
posted by otherchaz at 10:35 AM on June 1 [1 favorite]


Yeah, AlonzoMosleyFBI beat me to it: to my techspec-challenged brain, this screams "randomly modulate the frequency of the shields and phasers"

Every time they do this they should cut away to show someone down in engineering randomly slapping keyboard keys with one hand and swishing a thermocouple in an ice bath mixed with hot water to attempt to build a truly random data source.

For dramatic tension the bridge would call for even more random movements, which would be indicated by even more frenetic and haphazard keyboard slapping and thermocouple-swishing.
posted by loquacious at 12:04 PM on June 1 [6 favorites]


Every time they do this they should cut away to show someone down in engineering randomly slapping keyboard keys with one hand and swishing a thermocouple in an ice bath mixed with hot water to attempt to build a truly random data source.

Or just get yourself a big old wall of lava lamps.
posted by AlonzoMosleyFBI at 1:05 PM on June 1 [6 favorites]


Every time they do this they should cut away to show someone down in engineering randomly slapping keyboard keys with one hand and swishing a thermocouple in an ice bath mixed with hot water to attempt to build a truly random data source.

And every time, it's O'Brien.
posted by TheWhiteSkull at 1:42 PM on June 1 [3 favorites]


Actually, Todd Austin is a pretty famous academic researcher in computer engineering; his and his students' work has been highly cited for innovations in CPU design. It's neat to see a familiar paper name come up.

The high-level idea here is really elegant because pointers are a fundamental characteristic of computation, and have long posed difficulties in program analysis, which turns out to be useful for computer security: the more obfuscated the pointers are, the harder it is for an adversary to exploit them. An information-theoretically-encrypted CPU is pretty much one of the holy grails of computer security. Interesting to see some news on it.
posted by polymodus at 5:59 PM on June 1 [2 favorites]


I'm not sure the framing of this post matches the actual paper here. As noted upthread, defenses against microarchitectural attacks wouldn't really improve the security of end users. However, despite the press around this, this paper isn't proposing a microarchitectural defense at all. It's proposing an architectural tool to address control flow hijacking, which could improve the security of end users, if widely adopted. As far as I can tell from an admittedly quick read, this paper is proposing an ISA extension to RISC-V, meaning the microarchitecture and architecture are fixed for the lifetime of the chip -- what is being randomized and re-randomized is the address space and the pointers in use by the program.

You can almost see how it happened too:
Academic: *create a new thing*
Journalist: does this stop the last attack whose name I remember?
Academic: uhhhh..... yeah I guess? That's not really the point though...
posted by yeahwhatever at 6:47 PM on June 1


> an ice bath mixed with hot water

I believe the traditional Brownian Motion source is a Nice Hot Cup of Tea.
posted by nickzoic at 7:15 PM on June 1 [10 favorites]


Metafilter: frenetic and haphazard keyboard slapping and thermocouple-swishing.
posted by CynicalKnight at 7:42 PM on June 1 [3 favorites]


Like every other security measure this will work real well until someone on the inside hands over the full specs to some guy he met on-line in return for enough money to go on a three week bender, or get out from under a payday loan.

I'm betting the guy on line bought the full specs and schematics weeks ago.
posted by Jane the Brown at 8:06 PM on June 1 [1 favorite]


However despite the press around this, this paper isn't proposing a microarchitectural defense at all.

I'm betting the guy on line bought the full specs and schematics weeks ago.

Actually, from the author's essay, it is a meta-microarchitectural approach: the design implements an infinite number of possible microarchitectures satisfying the architectural specifications. It's an elegant idea if you view it that way, because the design could be thought of as self-abstracting, since it will morph to a different microarchitecture at any moment.

And it's also why even if you were given the design specs, you could not crack it through such hardware exploits. It's a different paradigm of design, in that it's no longer a one-to-one but a one-to-many (in theory, effectively infinite) microarchitectural implementations of a given CPU architecture existing over time.
posted by polymodus at 9:23 PM on June 1 [1 favorite]


I believe the traditional Brownian Motion source is a Nice Hot Cup of Tea.

Did you miss the part about the Heart of Gold being totally dysfunctional and not at all capable of making anything that was not entirely unlike a cup of tea?

And every time, it's O'Brien.

Oh no, not again!
posted by loquacious at 10:40 PM on June 1


Did you miss the part about the Heart of Gold being totally dysfunctional and not at all capable of making anything that was not entirely unlike a cup of tea?

"I'm going to need some help with this one."
posted by We had a deal, Kyle at 4:45 PM on June 2 [1 favorite]


it's no longer a one-to-one but a one-to-many (in theory, effectively infinite) microarchitectural implementations

posted by polymodus

Eponysterical
posted by snuffleupagus at 6:44 PM on June 2 [1 favorite]




This thread has been archived and is closed to new comments