Apple unveils M1, its first system-on-a-chip for Mac computers.
November 10, 2020 6:35 PM   Subscribe

 
It feels so weird for desktop computers to finally be interesting again, instead of "Intel announced today that they would be delaying their new series of CPUs, which offer a 1.2% performance increase, for the fifth time."

I did breathe a sigh of relief that they did not announce a replacement for the 16" MBP, because I did not need an immediate temptation to replace my seven-year-old computer.
posted by DoctorFedora at 6:38 PM on November 10, 2020 [7 favorites]


I'm sure this will make it easier to eliminate hackintoshes.
posted by signsofrain at 6:39 PM on November 10, 2020 [14 favorites]


Honestly, my impression is that the answer to "what does Apple think about hackintoshes?" is pretty much just "they don't."
posted by DoctorFedora at 6:41 PM on November 10, 2020 [43 favorites]


Worrisome signs, to me:

1. The M1 MacBook Pro and M1 MacBook Air are virtually identical on paper. Is the M1 MBP clocked faster? Apple is being very opaque here.

2. The M1 MacBooks appear to have lost eGPU support entirely. I had some vague hope that, without needing ATI for discrete GPUs, Apple might return to supporting nVidia cards and thus CUDA for machine learning. That seems vanishingly unlikely at this point, which kinda sucks.

3. Can they please admit that the Touch Bar was a mistake?

4. The 720p cameras and lack of differentiation between the Air and Pro makes it seem like this was a rush job. I wonder if there was a contract with Intel that they really didn't want to renew.

I won't be upgrading my 2015 MBP anytime soon. It's basically just as fast as the current Intel Macs, but with physical function keys and a more versatile array of ports.
posted by jedicus at 6:45 PM on November 10, 2020 [11 favorites]


I don’t know. The touchbar is often kind of handy, but 3rd party integration is all over the place. Some software really doesn’t make good use of it at all. I like the contextual buttons for a lot of functions.
posted by caution live frogs at 6:51 PM on November 10, 2020 [6 favorites]


With regard to the MBP vs the MBA... peak performance will be the same. However, due to the fan the MBP will be able to sustain high performance longer. This will be particularly noticeable when both the CPU and GPU are highly loaded (i.e., games). Exactly what the difference in sustained performance will be is unclear but I’d be surprised if it weren’t in the 30-50% range.
posted by doomsey at 6:56 PM on November 10, 2020 [5 favorites]


3. Can they please admit that the Touch Bar was a mistake?

You got your escape key back, try to at least meet Apple halfway here!
posted by pwnguin at 6:56 PM on November 10, 2020 [14 favorites]


Apple might return to supporting nVidia cards and thus CUDA for machine learning. That seems vanishingly unlikely at this point, which kinda sucks.

Apple's lack of CUDA support is a big deal. I'm guessing CUDA itself is going away at some point, but "some point" could easily be 2024-2025. I mean, the reality of my inner-loop development -- a Microsoft-y term for local development I've been using because hey, it works -- is that all my ML has been done on things like Lambda hardware, and really I think for a lot of ML tasks we'll eventually see heavy adoption of generalized CPU. There will always be things like training tasks that utilize specialized hardware, but Google is doing some really cool things with TensorFlow.js, which I just kind of ignored as a toy before. Some of what they're doing with their video conferencing stuff is using WASM I believe, but still I can easily see ML turning from being specialized to running mundane things very quickly. Note that Apple in this release said they support a 10x increase in ML tasks, but I don't know what that means. Apple's libraries suck and there weren't any details like you see for real performance hardware such as Nvidia's A100. If Apple isn't throwing up benchmarks against popular models like BERT, they're just saying ML because it is hot now.
posted by geoff. at 6:58 PM on November 10, 2020 [2 favorites]


I think they were in line with expectations on the lower end. The MacBook Pro replacement is for the low end 13 inch that maxed out at 16 gigs anyway. The only real difference between that and the new Air seems to be active cooling that allows it to run faster for longer without thermal throttling.

I ordered the higher end Air today as a replacement for my old one. The battery and speed increases should be nice. I still have a 16" Pro as my main laptop.

My hopes for a new 12" are dashed, but the 13 is fine.
posted by mikesch at 7:00 PM on November 10, 2020


Can they please admit that the Touch Bar was a mistake?

Hey! I like it for emojis! It is my emoji keyboard. Really the big mistake with the touchbar is that it is used as an extension of the keyboard. It doesn't make sense; I never look at my keyboard. It works really well for emojis because they're visible, whereas any shortcuts I've already got covered with keyboard shortcuts. If it were utilized more as a second, slimmed-down screen that'd be great. Like highlighting a piece of text and seeing what it'd look like in different fonts? That's cool and something that I'd use.
posted by geoff. at 7:01 PM on November 10, 2020 [3 favorites]


I’m a bit sad that I can’t get a 32GB new-Mini.
posted by doomsey at 7:01 PM on November 10, 2020 [5 favorites]


I assume that they'll stop making Intel based MacBooks eventually so I'll end up with an ARM based one but hopefully not for three or four years. I'll happily wait until the third party software providers iron out the worst of the bugs in the new builds.
posted by octothorpe at 7:03 PM on November 10, 2020


I have the touchbar set up to act exactly as the hard-key function row did and made sure to get rid of the Siri button.
posted by octothorpe at 7:04 PM on November 10, 2020 [2 favorites]


> The MacBook Pro replacement is for the low end 13 inch that maxed out at 16 gigs anyway.

I intentionally bought a macbook pro 13" in advance of this announcement, and it has 32 GB of ram. You can still buy them on the site. It sucks that the new ones all cap at 16 GB. I upgraded specifically because I was running out of memory on my 16GB macbook air :/
posted by and they trembled before her fury at 7:04 PM on November 10, 2020 [1 favorite]


all my ML has been done on things like Lambda hardware

Hilariously, Apple's logo features prominently in the logos of companies who apparently use Lambda hardware.
posted by jedicus at 7:04 PM on November 10, 2020


I've yet to use a computer with the Touch Bar, but personally don't seem to have any of the concerns that people have and I'm still curious. Mac OS doesn't seem to depend on function keys, and I mapped caps lock to escape long ago.

Aren't most of the cutting edge nerds using reduced size keyboards that don't have function or number keys anyway?
posted by meowzilla at 7:07 PM on November 10, 2020


I've yet to use a computer with the Touch Bar, but personally don't seem to have any of the concerns that people have and I'm still curious.

I have had the touch bar since it came out, at least 3 years? Since it is touch, if you're a fast typist you'll graze the touch bar; not often, but more than once is annoying. They put Siri in the upper right by default, right next to the delete key. Accidentally pressing mute isn't a big deal, but having your laptop seize up because Siri decides to activate is enough to make you want to throw it against the wall.

I adjusted the touch bar so that nothing really offends me now and I rarely use it unless I want to send an emoji. Again, if someone can come up with a good use case for it beyond emojis I'd be all for using it more often. But usually the option is between using a shortcut key (in which case I don't need to look down in the first place) or something I'd need to stop typing to use the mouse for. For the latter I'm so used to using a mouse I don't think about it.
posted by geoff. at 7:12 PM on November 10, 2020 [6 favorites]


For work, I mostly use an Apple bluetooth keyboard with a numberpad, left and right ctrl keys, home, end, pg up, and pg dn. It's glorious.
posted by octothorpe at 7:12 PM on November 10, 2020 [4 favorites]


The touch bar stopped being a problem for me the day they gave us the escape key back and went back to scissor switches. It’s fine.
posted by mikesch at 7:16 PM on November 10, 2020 [6 favorites]


Apple's lack of CUDA support is a big deal

Tensorflow for Swift
posted by They sucked his brains out! at 7:18 PM on November 10, 2020 [1 favorite]


I've used an iPad with various accessories as my daily driver since, uh, the iPad 3 I guess. So much of my time is spent in remote vim sessions and the iPad screen has been much nicer than the MacBook Air for ages and so much cheaper than the MacBook pros.

So yeah I was hoping for a sort of iPad Pro max but (man oh man) the new mini is tempting. If we're not going to get a more capable iPad just yet then I will def take a mini that's actually performant.

So yeah I've obviously skipped the touch bar fiasco and the keyboard fiasco and, weirdly enough, find myself interested in a desktop for the first time in ages.
posted by mce at 7:18 PM on November 10, 2020


> I intentionally bought a macbook pro 13" in advance of this announcement, and it has 32 GB of ram.

You bought the higher-end 4 port model. The low end 2 port model, which the M1 13" replaces, was limited to 16GB. So the MacBook Pro lineup is basically the same as it was before in terms of RAM capacity.
posted by primethyme at 7:19 PM on November 10, 2020 [3 favorites]


hooray more nonrepairable nonupgradable machines with vastly overpriced ram and storage options and inbuilt obsolescence

BARF
posted by lalochezia at 7:21 PM on November 10, 2020 [25 favorites]


The M1 MacBooks appear to have lost eGPU support entirely

Is this true? Apple seemed pretty adamant about continuing to support eGPU enclosures with AMD cards that have existing driver support. This project may end up being useful, as well.
posted by They sucked his brains out! at 7:21 PM on November 10, 2020 [1 favorite]


It feels so weird for desktop computers to finally be interesting again

Indeed. Great advances from AMD (Ryzen/Threadripper and maybe their latest GPUs) and Nvidia, plus higher bus, SSD, and RAM speeds, higher resolutions, etc. As for this, we'll have to see, but it is welcome, though limited to only one very popular vendor. Unfortunately this:

hooray more nonrepairable nonupgradable machines

Sadly isn't going to change and more and more desktop and of course mobile device manufacturers will adopt the same model.

Great tech. Shitty companies.
posted by juiceCake at 7:26 PM on November 10, 2020 [3 favorites]


Reading the Anandtech A14 deep dive and the M1 is going to be a god damned monster.

Apple's chip design guys must have made a deal with Satan himself because what they've put on a chip is black magic fuckery.
posted by Your Childhood Pet Rock at 7:28 PM on November 10, 2020 [15 favorites]


Tensorflow for Swift

That's just about using the Swift language with Tensorflow. It doesn't do anything to provide support for hardware acceleration on Apple machines. You can vaguely hack together something using PlaidML and Keras, but it's an extra hassle. Tensorflow and PyTorch support are right out since they are deeply tied to CUDA.
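
For anyone curious, the hack looks roughly like the sketch below. It's illustrative only; it assumes the plaidml-keras package is installed and that plaidml-setup has been run once to pick a Metal/OpenCL device.

    # Sketch of the PlaidML + Keras workaround for non-CUDA GPU acceleration.
    # Assumes: pip install plaidml-keras keras, then run plaidml-setup once.
    import plaidml.keras
    plaidml.keras.install_backend()   # must run before importing keras itself

    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential([
        Dense(128, activation="relu", input_shape=(784,)),
        Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    # model.fit(...) now runs on whatever device plaidml-setup selected
    # (e.g. an AMD or Apple GPU via Metal/OpenCL) instead of requiring CUDA.

It works, but it's a far cry from first-class Tensorflow or PyTorch support.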

Apple seemed pretty adamant about continuing to support eGPU enclosures with AMD cards that have existing driver support.

It's based on this observation that eGPUs are not listed as an accessory plus this vaguely sourced article ("Apple’s first Macs built around its self-developed SoC do not support eGPUs, TechCrunch has learned").
posted by jedicus at 7:30 PM on November 10, 2020


BARF

Your barf is my soup.
posted by brambleboy at 7:31 PM on November 10, 2020 [5 favorites]


From what I read on The Verge, the MBA uses 'binned' M1 chips with only 7 of 8 cores active to recover the ones that fail in the manufacturing process, and the MBP gets all 8. Add in a fan to the MBP to sustain intense computing and there's the difference.
posted by msbutah at 7:31 PM on November 10, 2020 [8 favorites]


Tensorflow and PyTorch support are right out since they are deeply tied to CUDA.

Both kits were mentioned as Metal-optimized (i.e., hardware-optimized) during the presentation, I think.
posted by They sucked his brains out! at 7:32 PM on November 10, 2020


That's a really good link, Your Childhood Pet Rock. Thanks for that.
posted by lazaruslong at 7:34 PM on November 10, 2020


It's based on this observation that eGPUs are not listed as an accessory

Oh, bummer. I thought otherwise based on previous statements. That's really a shame.
posted by They sucked his brains out! at 7:36 PM on November 10, 2020


It's based on this observation that eGPUs are not listed as an accessory plus this vaguely sourced article ("Apple’s first Macs built around its self-developed SoC do not support eGPUs, TechCrunch has learned").

AppleInsider says they have "sources" in addition to the aforementioned delisting screenshots to confirm eGPU is dead. I don't really know enough about hardware architecture to know why, or what kills it; I thought it just relied on the speed of the USB-C connection? I assumed that was a generic sort of connection where the limiting factor was bandwidth and latency, and that anything could be sent over that signal.

In any case it was kind of a niche market of gamers and ML developers. If you're going to need a box that sits on your desk somewhere, you might as well farm out your model training to accelerated hardware. And for gamers OSX is already the bastard stepchild.
posted by geoff. at 7:39 PM on November 10, 2020


Well, it goes over Thunderbolt 3, I think (at least mine does). I tried to use it to extend the life of an older Mac mini, but it really eats bandwidth. I couldn't do videoconferencing with it connected.
posted by They sucked his brains out! at 7:42 PM on November 10, 2020


Got a 13" MBP in May, my first new Mac since a 2008 MBP (in 2015 and 2017 I built Hackintoshes).

It's worth the $30/mo Barclays 0% minimum payment through the end of next year but does run a tad warm (~130℉) doing nothing, which is annoying AF. Shoulda waited even tho I knew then the ARM Macs were coming (the utter lack of new APIs in macOS for several years was a big tipoff).

Unified memory is cool, in some ways the new Macs are like Amigas, really powerful game computers now.
Wonder how the ARM Mini compares to a PS5 . . .

I never did like the x64 era, there's just so much cruft that comes with Intel. Crazy that when Apple threw in the towel on PPC it had $4.5B in Mac sales (vs. $4.5B in iPod sales in 2005, that was the year the iPod exploded).

Last year Apple had $140B in iPhone sales, $30B in Macs, $24B in iPads, $30B in wearables & other accessories, and a market cap 10X that of Intel.
posted by Heywood Mogroot III at 7:43 PM on November 10, 2020 [2 favorites]


AppleInsider says they have "sources" in addition to the aforementioned delisting screenshots to confirm eGPU is dead. I don't really know enough about hardware architecture to know why or what kills it as I thought it just relied on the speed of the USB-C connection? I assumed that was a generic sort of connection where the limiting factor was bandwidth and latency, that anything can be sent over that signal.

The largest hurdle will be that there aren't ARM drivers available for any card. Apple appear to be making a clean break on the driver side of things. The T series chips normally arbitrate and secure the PCIe bus so whether it can be reenabled if Nvidia were to write Apple Silicon drivers for the 3000 series is also unknown.
posted by Your Childhood Pet Rock at 8:01 PM on November 10, 2020


Me checking my 2018 Mini’s trade in value: Hmmm.
Me realizing these won't run an OS that supports 32-bit apps so my games library evaporates in a puff of forced obsolescence: Ah, well.
posted by rodlymight at 8:10 PM on November 10, 2020


How much is it going to be for this new Mac that has the wonderful feature of not even running Windows if you need it? Just double what an equivalent PC would be?

(Yeah, I'm pretty cranky. I mean, I like the idea of an ARM-based laptop, but it's going to be priced right out of my range as usual.)
posted by JHarris at 8:12 PM on November 10, 2020 [2 favorites]


I wasn't too far off
posted by They sucked his brains out! at 8:55 PM on November 10, 2020


They're already in the store, you can price them out now.
posted by aramaic at 9:07 PM on November 10, 2020 [1 favorite]


Personally, I'm waiting for the M5 computer, the "unit that must survive."
posted by jabah at 9:11 PM on November 10, 2020 [1 favorite]


Tensorflow and PyTorch support are right out since they are deeply tied to CUDA.

This feels… unlikely? If anything, I remember seeing someone note a year and a half ago that Tensorflow ran faster on iPads/iPhones than on any desktop computer they had access to, presumably because Apple's been deliberately architecting their chips to run JavaScript fast.

How much is it going to be for this new Mac that has the wonderful feature of not even running Windows if you need it? Just double what an equivalent PC would be?

They're putting in virtualization support, though that would require Microsoft to sell ARM Windows to the public, which they do not, to my understanding. Right now that operating system is… not super great. You could, in theory, run Windows on an x86 emulator like QEMU or something, just brute-forcing it in actual emulation instead of a compatibility layer, but my understanding is that the CPU architectures make that a fairly inefficient proposition. Still, though, for my use case (the few times a year I need Windows, it's almost always to convert a file from a proprietary format into a more readily intercompatible format), even if Windows runs with a ⅔ speed hit on the CPU or whatever, it'd still be fine for my needs.

On the other hand, well, kind of literally the whole point of this processor transition is that there isn't an equivalent PC. Like, you aren't going to get 50%+ longer battery life with performance significantly faster than the vast bulk of what Intel offers, with what Intel offers. (And the prices stayed the same as before anyway, or dropped $100 in the case of the Mac Mini.)
posted by DoctorFedora at 9:32 PM on November 10, 2020 [4 favorites]


Tensorflow ran faster on iPads/iPhones than on any desktop computer they had access to, presumably because Apple's been deliberately architecting their chips to run JavaScript fast.

... are JS developers doing a lot of matrix multiplication?
posted by pwnguin at 9:37 PM on November 10, 2020


I dunno, honestly! All I know is that they've been optimizing the A-series chips for the last few years to run JavaScript real fast, presumably because that brings real-world benefits, and tensorflow.js has apparently really benefited from this as well
posted by DoctorFedora at 9:42 PM on November 10, 2020


There are some pretty excellent refurb iMacs on deep discount and I've been considering getting one and turning it into a Linux box because the form factor and screen is just so nice. But on the other hand, more used laptops to buy when everyone turns them in...
posted by Ghostride The Whip at 10:19 PM on November 10, 2020 [2 favorites]


... are JS developers doing a lot of matrix multiplication?

Not really JS, with WASM JS is becoming like Python, see Background Features in Google Meet, Powered by Web ML ... specifically XNNPack. Heavily optimized for the ARM already.

The problem with lacking CUDA isn't that there aren't alternatives out there to CUDA; it's that the vast majority of ML training is done on CUDA. So much so that you have to kinda hack popular libraries like Detectron2 to do things on the CPU, and the performance is so optimized for Nvidia libraries that it's almost a case of why bother. It is ingrained as the default for ML; you'll very quickly run into issues even setting up and running most projects. There's a sea of difference between needing CUDA to do a specific task and being a developer looking at how other projects did things and constantly running into projects built with CUDA flags hardcoded everywhere.
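
Just to spell out what I mean by hardcoded flags: the well-behaved, device-agnostic pattern looks something like the sketch below (standard PyTorch, nothing Apple-specific, purely illustrative), and a depressing amount of research code skips it and calls .cuda() everywhere instead.

    # Minimal device-agnostic PyTorch sketch -- the pattern a lot of research
    # code skips in favor of hardcoding .cuda() calls everywhere.
    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = nn.Linear(784, 10).to(device)     # parameters follow the device
    x = torch.randn(32, 784, device=device)   # so do the input tensors

    with torch.no_grad():
        logits = model(x)                     # GPU if present, CPU otherwise
    print(logits.shape, "computed on", device)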

Google, and I assume Facebook/Apple, have huge incentives to get ML working in the browser, which is a large reason for the push to WASM. If FB could offload all their face detection, or Twitter could offload thumbnail selection, to the client side, they'd save enormous amounts of money. Right now Nvidia chips are crazy priced, and the data center versions of the same GPU are something like 2-3x the price just for the data center shrink-wrap agreement.

Expect all these performance gains by things like Apple's M1 to get destroyed when Netflix decides that the preview scrubbing and interesting-scene selection is a lot easier to do when the consumer is paying for the electricity and computational costs. Can't wait for the browser to get bogged down like it's 1999 when Facebook tries to push tagging friends onto me. ML-blocker might be the new ad-blocker. Right now it is esoteric and only a few large companies with large management overhead have the resources to do it. ML will be the new Flash when someone figures out how to make it easily accessible. Compiling C to WASM and then using TensorFlow.js to call it isn't exactly a low barrier to entry.
posted by geoff. at 10:44 PM on November 10, 2020 [12 favorites]


I do wonder what their marketing messaging will be during the transition to mainstream users who won’t know the difference between Intel and ARM. How will they explain that there are two seemingly identical but actually somewhat different lines of computers, one of which can’t yet do all the things the other one can? Will they explain it at all?
posted by Ian A.T. at 10:48 PM on November 10, 2020 [2 favorites]


I’m always happy for technology to push forward, and I never begrudge when something I bought has been superseded because it means the next one I buy will be even better. But having bought the 16” MacBook Pro exactly one year ago for thousands of dollars, it was a bit of a bummer to hear that specific laptop alluded to repeatedly in the presentation as an example of the technology these new chips will immediately outstrip. I half-expected Tim to look into the camera and call me an idiot by name.
posted by Ian A.T. at 10:53 PM on November 10, 2020 [3 favorites]


I do wonder what their marketing messaging will be during the transition to mainstream users who won’t know the difference between Intel and ARM. How will they explain that there are two seemingly identical but actually somewhat different lines of computers, one of which can’t yet do all the things the other one can? Will they explain it at all?

I think they're basically trying to get the stuff that doesn't have ARM binaries ready to run in Rosetta at a minimal performance impact, which probably covers like 90% of use cases. Word, Excel, etc. Probably the same for Photoshop, etc, but if you're using those professionally, you probably know a bit more about your hardware and most people probably aren't running those on the lower spec machines they announced today.

Outside of covering those major cases, they're letting the developers take the hit if there are compatibility problems. If you stay in the Apple development ecosystem you get fat binaries that work in both. If you don't and your users have issues, blame shifts to the developers for not keeping up.

Not saying it's right or wrong, but that's the assumption. 95% of everything should work from day 1, with fat binaries or Rosetta. They're not worried about the stuff that doesn't. By the time the rest of the line switches, the professional level apps will have been updated.
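
(As an aside, an app, or a curious user, can even ask whether it's currently being translated. Here's a rough Python sketch using the sysctl key Apple documents for Rosetta, just for illustration.)

    # Rough sketch: ask macOS whether this process is running natively or
    # under Rosetta 2 translation (sysctl.proc_translated: 1 = translated,
    # 0 = native; the key is absent on older systems).
    import platform
    import subprocess

    def rosetta_status():
        try:
            out = subprocess.run(
                ["sysctl", "-n", "sysctl.proc_translated"],
                capture_output=True, text=True, check=True,
            ).stdout.strip()
            return "translated by Rosetta 2" if out == "1" else "native"
        except subprocess.CalledProcessError:
            return "unknown (key not present, probably an older macOS)"

    print(platform.machine(), "-", rosetta_status())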

I wonder what the stats are about how many MacBook Airs run any 3rd party software at all?
posted by mikesch at 11:04 PM on November 10, 2020 [2 favorites]


I half-expected Tim to look into the camera and call me an idiot by name.

Oh, like Apple has been doing to all its users for decades?
posted by flabdablet at 11:13 PM on November 10, 2020 [3 favorites]


hooray more nonrepairable nonupgradable machines with vastly overpriced ram and storage options and inbuilt obsolescence

My daughter is still running a 2013-ish Macbook Air with a Magsafe 2 power connector to tell you how old it is (maybe it's not 2013 I forget). Macs have a really long shelf life these days. You don't need to expand a machine for it to keep working.

Processor improvements are almost all incremental these days so it's pretty cool to see some really core improvements here. 8 cores of a big.LITTLE architecture is basically an average smartphone these days, but the higher TDP will make these very impressive. Putting memory in the MCM package is a real blow to expandability, but it's amazing that basically the entire computer is a single package. Even a few years ago I don't think anyone would have imagined that for a desktop computer. The laptops are basically a mainboard that's probably the size of the iPhone mainboard and a ton of battery. It's true these machines lack a bunch of features (2 thunderbolt ports????) but it's a feature set that will be sufficient for a lot of people. It really is a huge advance in a world that had basically settled into a perpetual Intel monopoly. I fully expect that if you bought one of these M1 laptops today that you could reasonably expect it to still work pretty well in 2030. It'll probably be stuck on an OS that's a few years out of date at that point, but I expect it'll still work.

Running iOS and iPadOS apps on these things should be interesting - running Android apps on ChromeOS is janky at times, but a lifesaver other times when there's an app out there with no decent web version. I expect that in a couple years there will be someone out there with a Macbook that just runs Safari and iOS apps. I think certain apps like Slack or Discord would run way better as iOS apps vs the desktop versions based on Electron (unless the ios versions are also electron and I'm confused).
posted by GuyZero at 11:17 PM on November 10, 2020 [4 favorites]


2013-ish Macbook Air with a Magsafe 2 power connector to tell you how old it is

Those Magsafe connectors were a genius idea. I've lost count of the number of boards I've seen take damage from people tripping over cables and crowbarring them out sideways.

Such a shame Apple never really got the build quality right. Magnetically attached connectors should have taken over the world.
posted by flabdablet at 11:31 PM on November 10, 2020 [20 favorites]


I think certain apps like Slack or Discord would run way better as iOS apps vs the desktop versions based on Electron (unless the ios versions are also electron and I'm confused).
No, you're not wrong! I've heard of people running those in the Xcode iPad simulator rather than the native Mac version because they were actually less inefficient that way :0
posted by DoctorFedora at 12:30 AM on November 11, 2020 [2 favorites]


... are JS developers doing a lot of matrix multiplication?

I'm using Brain.js with excellent results (for my use case) in the browser. In addition to what geoff. mentioned above, I think with JS we'll see ML being used all over the place for tiny tasks that won't fry your smartphone (because those tasks will not rely on gigantic dirty data sets).
posted by romanb at 1:00 AM on November 11, 2020


I'm hoping that gen 2 of this equipment comes with a new form factor, selfishly because I miss my 12" MacBook. An iPad Pro + Smart Keyboard is a nice replacement but still missing some things.

I really wish Apple would let their iPhone camera team take over the facetime cameras, since it's just stupid they're still shipping a 720p lens on them.

As a tangent, a resurgence of the old iSight camera would be amazing, as a USB-C model, and compatible with the AppleTV. Combine that with a HomePod Mini and you've taken over the low end conference room system market.
posted by mrzarquon at 1:42 AM on November 11, 2020 [2 favorites]


For what it’s worth, the screen/lid of a MacBook is a lot thinner than any phone, which is maybe why the camera is 720.
posted by snofoam at 2:07 AM on November 11, 2020 [5 favorites]


(unless the ios versions are also electron and I'm confused).

Electron doesn't run on mobile, and Apple won't approve third-party web engines in any case. There was/is an iOS equivalent of Electron, which is stuffing a bunch of HTML/CSS/JS in an app resource folder and packaging it with a one-file Objective C app that just creates a (system) web view and runs it. Unsurprisingly, it runs like flaming garbage, and most developers avoid it.
posted by acb at 2:09 AM on November 11, 2020 [4 favorites]


>Putting memory in the MCM package is a real blow to expandability, but it's amazing that basically the entire computer is a single package. Even a few years ago I don't think anyone would have imagine that for a desktop computer.
ARM Ltd was founded by Apple, (Britain's) Acorn Computers and chip integrator VLSI Technology (the name stands for Very Large Scale Integration). Acorn put out computers, the A3010, A3020 and A4000, with system-on-a-chip designs in the mid nineties. The promise of Gordon Moore's transistor density law was always this kind of integration. Plus, plenty of people have tried cellphone+keyboard+screen as a desktop computing experience, but you don't know of their failures.

The M1 and the Mac Mini and MacBooks released today are the result of a very long game played by Apple. I wonder if, along the way, the iPhone revenues almost killed their interest in desktop computers.
posted by k3ninho at 2:23 AM on November 11, 2020 [1 favorite]


I'm sure this will make it easier to eliminate hackintoshes.

Both of those guys will be okay.
posted by Thorzdad at 2:36 AM on November 11, 2020 [30 favorites]


The largest hurdle will be that there aren't ARM drivers available for any card.

ARM systems already exist that have NVIDIA GPUs and NVIDIA is buying ARM. I don't think that's going to be as big a hurdle as one might think.
posted by edd at 2:45 AM on November 11, 2020


Looking at these devices, I'm thinking if I buy one, it'll be the Air. My current personal MacBook is a 15" Pro from 2016, which has been kept on Mojave because of 32-bit music plug-ins I don't want to lose, so I could do with upgrading. Though, with this being version 1.0 of the new architecture, I'm reluctant to spend lots of money on the top-end option; perhaps when the 2022-2023 M3/M4 comes out, I'll buy the 16" version of that.

The touch bar would be nice (I mostly use it for scrubbing through photos), but not $400 or whatever worth of nice.
posted by acb at 2:46 AM on November 11, 2020


This makes me confused as to what to buy. I've been using Windows machines for several years now, but am ready to switch back to Apple. I'm planning to buy an iMac next April (gotta wait that long if I want my work to pay for it), but I'm not terribly excited about the loss of having a Windows partition.

Last time I had an iMac, I would sometimes boot up Windows for games or other software I couldn't get on macOS. Now it looks like that capability will be gone. I guess the tradeoff is better performance (which I need for video footage rendering), but I'll be married to macOS. I find both operating systems have their pros and cons, but I just like having the freedom of a hackintosh.
posted by zardoz at 2:51 AM on November 11, 2020 [1 favorite]


Windows 10 exists on ARM, so this heralding of the end of Boot Camp may be premature, unless I just haven't read the article making it explicit.
posted by thedaniel at 3:49 AM on November 11, 2020


ARM systems already exist that have NVIDIA GPUs and NVIDIA is buying ARM. I don't think that's going to be as big a hurdle as one might think.

ARM versions of the macOS drivers, not generic ARM drivers. Apple didn't rewrite the drivers for Apple Silicon.
posted by Your Childhood Pet Rock at 4:16 AM on November 11, 2020


Windows 10 on ARM only runs store apps, so, useless.

> I'm sure this will make it easier to eliminate hackintoshes.
Both of those guys will be okay.


I know for a fact that more than two people run hackintoshes; someone has to be writing the ten thousand how-to-hackintosh tutorials that litter the web.
posted by JHarris at 4:17 AM on November 11, 2020 [1 favorite]


Also, is there yet a standard for ARM chipsets/boards, as there is for x86, that you can buy a board and install a boxed Windows (or Linux or *BSD or whatever) on? From what I understand, ARM is a constellation of proprietary SOCs and boards, with each having its own idiosyncratic bootloader, and “ARM Windows” just means “a custom version of Windows baked into this proprietary locked-down tablet”.
posted by acb at 4:21 AM on November 11, 2020 [2 favorites]


It is looking like the switch to ARM means a drop of $200 in price, which is not bad, although considering the machines that are usually in my price range, probably not enough, especially if it locks me out of the Windows ecosystem.
posted by JHarris at 4:23 AM on November 11, 2020


I wonder what the stats are about how many MacBook Airs run any 3rd party software at all?

I have a Pro, not an Air, but personally, I run 90% third party software for work: Microsoft, Google, Mozilla, Jetbrains, Docker, iTerm2, Beyond Compare, git and whatever our vpn and firewall are. The only Apple program I use with any regularity is Finder.
posted by octothorpe at 4:40 AM on November 11, 2020 [2 favorites]


I've been working almost exclusively on a Mac since 2013 (that 13" Pro is still going strong). Since I occasionally game and need Windows for ArcGIS, I upgraded last fall to a new 15" Pro with a dedicated graphics card and plenty of room for a Windows partition. I suppose I could have waited until this fall to get the last generation Intel Mac, but I suspect this one will last me for a while. Hopefully someone will have figured out the "run Windows on ARM" thing by the time I need to replace it, as it's convenient to have both OSs on the same machine. If they don't, we'll see if it's a deal breaker at that time.
posted by mollweide at 6:29 AM on November 11, 2020 [3 favorites]


Regarding Windows support: Parallels is working on supporting x64 emulation on the M1. In the linked post, they also imply that Microsoft's plans for improving x64 emulation for Windows on ARM will somehow be relevant.
posted by thedward at 7:03 AM on November 11, 2020 [3 favorites]


Can't wait for browser to get bogged down like it is 1999 when Facebook tries to push tagging friends onto me.

Just move to Illinois where friend tagging in photos is illegal! (I am waiting to get paid for facebook illegally doing it thanks to a settled class action).
posted by srboisvert at 7:27 AM on November 11, 2020 [3 favorites]


I'm really curious at which levels the emulation is going to happen with parallels and presumably vmware - I mean, is it just going to huck the x86_64 instructions at Rosetta to translate, or is it going to go back to the bad old days of PPC / 68k to x86 emulation, with every emulator running their own stack? And where do the VT-x / AMD-V instructions get handled?

And on the other other hand: how much of that is a real problem with modern CPU designs? I mean sure, the PPC made a pretty slow x86 emulator, but the PPC is also 20 year old technology now. The Rosetta demos at WWDC made it feel like they'd have no problem running complicated macOS x86 binaries, anyway. And how much of that is due to the _gigantic_ caches the M1 has?

I'm cautiously optimistic, although I'm still annoyed that Apple wants $400-ish for a 1tb SSD. $400 gets me nearly three sticks of WD Black 1tb NVMe! One of these days I'll figure out how to comfortably run my media libraries on external storage. My NAS has plenty of space.
posted by Kyol at 7:29 AM on November 11, 2020 [2 favorites]


I use rclone, why even use a NAS anymore? I don't remember the last time my Internet was down and my local network was working.
posted by geoff. at 7:42 AM on November 11, 2020 [1 favorite]


And I'm really curious how far up this design will scale - I know my iPhone's geekbench scores are already awfully competitive with my Ryzen 3400g's score (1293/3278 vs 918/3816 - the single core rating beats my iMac's i5-8600's 1026 even), and the phone is far more power and thermally constrained. Is this a case of once they turn up the tap and design them for 65w+ of power, they're going to be unstoppable, or is the design fundamentally limited in other ways?
posted by Kyol at 7:43 AM on November 11, 2020 [1 favorite]


I use rclone, why even use a NAS anymore? I don't remember the last time my Internet was down and my local network was working.

See, I've heard people say that before, but isn't storing mumpty-TB in the cloud prohibitively expensive over time? Wasabi claims $5.99 per tb per month, and S3 is.. Uh. Oof, I could buy a _lot_ of local terabytes per month for what S3 wants for my local storage.
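
Back-of-the-envelope, just to show why I balk (all the numbers below are ballpark assumptions, not quotes):

    # Ballpark comparison: Wasabi's advertised $5.99/TB/month vs. a one-time
    # local drive purchase. The drive price is an assumption, not a quote.
    terabytes = 10
    wasabi_per_tb_month = 5.99
    local_drive_cost = 250.0   # assumed rough price for a ~10 TB drive

    for years in (1, 3, 5):
        cloud = terabytes * wasabi_per_tb_month * 12 * years
        print(f"{years} yr: cloud ~${cloud:,.0f} vs. local ~${local_drive_cost:,.0f} one-time")
    # 10 TB is roughly $719/year on Wasabi alone, before S3-class pricing,
    # egress, or API charges even enter the picture.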
posted by Kyol at 7:49 AM on November 11, 2020 [5 favorites]


Anyone know if these machines and their OS support apps using OpenGL? I read somewhere awhile back that Apple’s backing away from OpenGL.
posted by disentir at 7:58 AM on November 11, 2020 [1 favorite]


I mean if raw storage is all you need, you're talking more about a temporary disk-set. But if you care in the least about backups, those high storage costs quickly reach parity with whatever you could do at home. When you start talking about the costs of actually ensuring the backups work, then Wasabi's economies of scale kick in quickly. Around ~10 years ago I was running a RAID5 on actual server hardware for my film collection and I found out the hard way that rebuilding actually has a non-zero chance of killing your storage server. Right now I cycle out, so if I haven't watched something in 6 months it'll save the name of the file, the hash, and where I got it in a simple immutable text log. My hope is that if I run into a situation where I lose something like "War and Peace in the Nuclear Age", a documentary I still can't find and which is apparently notoriously hard to obtain, at least I have the original file name and other things to go off of to help my search.
posted by geoff. at 7:58 AM on November 11, 2020


I'm super impressed they're jumping straight to 5nm process. They only started making iPhones with 5nm chips a month ago. Meanwhile NVidia is messing around with 8nm for the flagship 3xxx GPUs and can't get enough build capacity. AMD's still mostly shipping 12nm chips (with some 7nm now out) and Intel isn't even going to start shipping 7nm chips until 2022. And of course Intel's debacle trying to switch to 10nm is what opened the market to AMD and ARM in the first place.

It's astonishing really. Did Apple just aggressively buy up all the capacity at the most advanced fabs?
posted by Nelson at 8:09 AM on November 11, 2020 [2 favorites]


Anyone know if these machines and their OS support apps using OpenGL?

Despite being deprecated for what seems like forever, OpenGL amazingly is supported on ARM Macs.
posted by zsazsa at 8:10 AM on November 11, 2020 [2 favorites]


The marketing claim that 16GB of RAM with these chips is more like 32GB of RAM under Intel seems dubious to me.

It's not completely nonsensical, probably ARM machine code takes up less space than x86 machine code in practice. But surely it's not half as much? And also people want to use RAM for data, not just code?
posted by vogon_poet at 8:12 AM on November 11, 2020 [3 favorites]


Ah, yeah, and I mean if I came in to it before having purchased the disk sets, that would have helped defer the cost a bit, but for now it's a synology backed up both to a local gigantic USB drive nightly, and my zpool rsyncs everything else off every few hours as well. Recovery is pretty much repointing my mounts to the zpool while I restore to the synology. But everything that I'm interested in is backed up in triplicate.

Frankly I'm _mostly_ concerned that the synology is just mdraid, so when a disk fails it's both sudden and surprising. On the other hand, the vast majority of data on that drive could suffer quite a significant amount of bit flips and other things zfs deals with without being too noticeable.

But at the end of the day the big thing is having somewhere to put my 250 gig iTunes library and my 120 gig photos library without spending $400 for the Apple privilege. Technically I know I can put those libraries on external drives or NAS, but I don't have a lot of practice with that in the macOS ecosystem, only that it's kind of iffy in Windows.
posted by Kyol at 8:13 AM on November 11, 2020 [1 favorite]


Is this a case of once they turn up the tap and design them for 65w+ of power, they're going to be unstoppable, or is the design fundamentally limited in other ways?

They can probably clock the cores higher in a desktop or larger laptop, but I would expect more cores over clockspeed. MacOS and many workloads can definitely use them, hence the up to 16 and 28 cores you can get in the iMac Pro and Mac Pro respectively. I don't know about Apple's design specifically, but there are ARM CPUs with 128 cores. Admittedly those are intended for highly virtualized cloud computing, which is a very different kind of workload than, say, video editing, but it does suggest that there's no inherent reason why ARM designs can't do large numbers of cores similar to the 28-core Xeon or 64-core AMD Epyc.

I read somewhere awhile back that Apple’s backing away from OpenGL.

Apple deprecated OpenGL back in 2018. That said, Apple docs say "OpenGL is deprecated, but is available on Apple silicon," so they aren't making a hard break yet.
posted by jedicus at 8:14 AM on November 11, 2020 [3 favorites]


Despite being deprecated for what seems like forever, OpenGL amazingly is supported on ARM Macs.

OpenGL is used in standards like WebGL, so even if Apple could cut loose all pre-Metal macOS apps, they couldn't nix it without breaking a lot of things.
posted by acb at 8:16 AM on November 11, 2020 [1 favorite]


Touchbar: I like this Pock project to make the touchbar a replacement for the dock rather than keys, to get a bit more screen real estate.
posted by bendybendy at 8:33 AM on November 11, 2020 [4 favorites]


But at the end of the day the big thing is having somewhere to put my 250 gig iTunes library and my 120 gig photos library without spending $400 for the Apple privilege. Technically I know I can put those libraries on external drives or NAS, but I don't have a lot of practice with that in the macOS ecosystem, only that it's kind of iffy in Windows.

Right now my iTunes library lives on a 500 GB USB-C Samsung SSD that's stuck to the back of the iMac with 3M command strips. It's relatively painless under macOS -- just set the library location to the new drive/share in the application and it'll move everything over. Or if you don't want to trust the application, move ~/Music and ~/Pictures yourself and replace the original folders in your home directory with links.
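
If you'd rather do the move-and-link version yourself, it's basically the following (a sketch only; the paths are examples, quit the apps first, and keep a backup):

    # Sketch of the move-and-symlink approach. Paths are examples; quit
    # Music/Photos first and have a backup before touching your libraries.
    import os
    import shutil

    HOME = os.path.expanduser("~")
    EXTERNAL = "/Volumes/MediaSSD"    # example external volume name

    for folder in ("Music", "Pictures"):
        src = os.path.join(HOME, folder)
        dst = os.path.join(EXTERNAL, folder)
        if os.path.islink(src):
            continue                  # already relocated, nothing to do
        shutil.move(src, dst)         # move the library to the external drive
        os.symlink(dst, src)          # leave a link where macOS expects it
        print(f"{src} -> {dst}")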
posted by nathan_teske at 8:42 AM on November 11, 2020


Did Apple just aggressively buy up all the capacity at the most advanced fabs?

Probably. And they probably did it years ago. This is the leverage having nearly $200B cash on hand gets you. This is what making a few million A14s gets you.

They can probably clock the cores higher in a desktop or larger laptop, but I would expect more cores over clockspeed.

Yep. This is what phones are doing. This is what AMD is doing. This is what Moore's Law says is going to happen. Like jedicus says this is what Ampere is doing in the server space. Highly parallelizable tasks like audio or video processing scale well across multiple cores. Compilers can mostly scale well across cores (you can compile a bunch of stuff at once assuming you have multiple source files, a pretty safe assumption). Note how these are the tasks Apple highlights in their keynote. big.LITTLE is really ARM's secret sauce (or DynamIQ or whatever Apple's variant is) and is what has allowed mobile devices to get better while not having to have a 10 Ah battery attached. This is what's going to make these devices have the ultra-long battery life.
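
(The "scales across cores" point in miniature, as a toy Python sketch. Obviously not a real compiler or encoder, just the shape of the idea.)

    # Toy illustration of embarrassingly parallel work scaling across cores.
    import os
    import time
    from concurrent.futures import ProcessPoolExecutor

    def crunch(n):
        # stand-in for one independent unit of work (a source file, a frame...)
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [2_000_000] * 16

        start = time.perf_counter()
        [crunch(n) for n in jobs]                 # one core, one job at a time
        serial = time.perf_counter() - start

        start = time.perf_counter()
        with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
            list(pool.map(crunch, jobs))          # spread across all cores
        parallel = time.perf_counter() - start

        print(f"serial {serial:.2f}s vs parallel {parallel:.2f}s "
              f"on {os.cpu_count()} cores")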

Also for SoCs, yes there have been SoCs for ages but most aren't MCMs with memory on the module. Every phone in existence has a tightly integrated SoC but moving even more system components onto the module is pretty novel. It's not new in a theoretical sense, there are plenty of MCMs out there and they've incorporated memory before, but never at this kind of commercial volume. This is moving into SIP territory where a future generation of the M1 is literally a single chip that runs the whole system and the motherboard is nothing but a breakout board for IO and the battery connector.

I'm curious whether there's hardware h.265 support in this thing, for either encode or decode as this would be a big win for both speed and power consumption for video users.

Overall I have to say I'm starting to hate Apple for being so good.
posted by GuyZero at 9:40 AM on November 11, 2020 [4 favorites]


> Windows 10 on ARM only runs store apps, so, useless.

That was the case if you go back to Windows RT, but not now. I've got the ARM-powered Surface Pro X, and the Windows Store does have ARM64 apps. It also lets you install 32-bit x86 apps (which run under emulation, with 64-bit 'coming soon').

Outside of the store, you can definitely install whatever you feel. I've got the Windows 95 version of Freecell running, as well as the daily ARM64 builds of the Chromium browser, all installed from direct download, not the Windows Store.
posted by ewan at 9:59 AM on November 11, 2020 [2 favorites]


I'm mostly curious how this will impact the Mac Pro lineup which was updated last year with Intel x64 chips. How long will it take for them to convert the Mac Pro workstations to Apple Silicon? Or will they abandon that market segment entirely? If and when they make the switch, they'll need to rewrite drivers for the PCIe connected graphics cards there. That effort should translate to eGPU support as well but maybe it will take some time?

I'm not a developer or engineer. I work with audio and media using macOS so I keep tabs on their hardware developments. In response to a comment above, eGPU support is useful not just for gamers and deep learning enthusiasts but for anyone editing high res video or even people like me who work with audio but need support for 3 monitors while streaming/decoding 4k video.
posted by Evstar at 10:11 AM on November 11, 2020 [1 favorite]


Did Apple just aggressively buy up all the capacity at the most advanced fabs?

They literally booked the entire production capacity of TSMC's 5nm lines. All of it.

I think that just leaves Samsung with 5nm capacity, but hell if I know for sure these days. Anyway, one hell of a bold play; basically nobody else will be doing big production at 5nm now as they'll all be fighting for whatever scraps Samsung is willing to sell.
posted by aramaic at 10:11 AM on November 11, 2020 [7 favorites]


It feels so weird for desktop computers to finally be interesting again, instead of "Intel announced today that they would be delaying their new series of CPUs, which offer a 1.2% performance increase, for the fifth time."

AMD already did that, first with their very large core counts, and more recently with Zen 3's significant IPC improvement over anything Intel has managed of late.
posted by wierdo at 10:17 AM on November 11, 2020 [2 favorites]


Also, is there yet a standard for ARM chipsets/boards, as there is for x86, that you can buy a board and install a boxed Windows (or Linux or *BSD or whatever) on? From what I understand, ARM is a constellation of proprietary SOCs and boards, with each having its own idiosyncratic bootloader, and “ARM Windows” just means “a custom version of Windows baked into this proprietary locked-down tablet”.

There are arm64-based platforms that are UEFI and ACPI compliant, so in that sense, yes. Apple has explicitly indicated that they have no plans to support direct boot to any non-macOS operating system, and I wouldn't be surprised if the implication there is not only that the M1 SoC doesn't comply with those standards but also that the system won't boot to an unsigned OS. Even if they make the latter restriction configurable, you'd have to have an OS explicitly patched to understand the M1's device discoverability protocols and etc., so that probably means no direct boot to Windows ever and direct boot to a niche, janky, poorly-maintained Linux fork maybe.
posted by invitapriore at 10:25 AM on November 11, 2020 [2 favorites]


Intel is a garbage fire of a company, and this shit makes me giddy happy.

I won't buy any of these machines though, because my 2012 rMBP has 16G of ram and I'm like... I feel a machine I buy eight years later should probably have at least twice as much ram, you know, to deal with the next eight years?

I'm sure they'll be announcing a Mac Mini with like 12 cores and 32G of ram in the next six months, like running an M1s or something, and that might get me to contemplatively stroke my credit card.
posted by seanmpuckett at 10:40 AM on November 11, 2020 [4 favorites]


How long will it take for them to convert the Mac Pro workstations to Apple Silicon? Or will they abandon that market segment entirely? If and when they make the switch, they'll need to rewrite drivers for the PCIe connected graphics cards there. That effort should translate to eGPU support as well but maybe it will take some time?

I think far more likely is that Apple has a Big M1 variant coming with an absolutely massive GPU complex. It could then use something like a console with an SoC flanked by GDDR memory on all sides or they could have an interposer and HBM stacks to provide the memory bandwidth necessary to drive a larger GPU core variant.
posted by Your Childhood Pet Rock at 10:42 AM on November 11, 2020


My work MBP has the touchbar, which I don't mind, but also has the old terrible keyboard. (I also use Pock, mentioned upthread). With the M1 Macbook Pros only replacing the low-end 13-inch Intel models, I'm glad I pulled the trigger on a customized Intel model a few weeks ago for my personal laptop. My current one is a 2013 MBP with a 2.8 GHz dual-core i7 and 8GB RAM, and in a few weeks I'll be stepping up to a 2.3 GHz quad-core i7 with 32GB of much faster RAM. Townscaper is about as much gaming as I like to do, but I'm looking forward to all the extra horsepower for multitrack audio and video production.
posted by emelenjr at 10:43 AM on November 11, 2020


It occurs to me that macOS (like all 3 modern OSs) applies compression to RAM and uncompresses on the fly -- trading space for time in a way that can end up saving you time overall. So maybe the faster processor or differences in architecture lets them be more aggressive with that, and this is also driving their dubious claim of 16GB M1 equivalent to 32GB Intel.
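
(The trade-off in miniature, as a toy sketch with zlib standing in for the kernel's compressor, which it emphatically is not:)

    # Toy illustration of the space-for-time trade behind memory compression.
    # zlib is just a stand-in here, not how the kernel compressor works.
    import time
    import zlib

    # A deliberately compressible "page," the way idle app memory often is.
    page = (b"metafilter " * 400)[:4096]

    start = time.perf_counter()
    packed = zlib.compress(page, 1)    # fast, low-effort compression
    elapsed = time.perf_counter() - start

    print(f"{len(page)} bytes -> {len(packed)} bytes "
          f"({len(packed) / len(page):.0%}) in {elapsed * 1e6:.0f} microseconds")
    # Decompressing a squeezed page is far cheaper than a round trip to
    # swap on the SSD, which is the bet the OS is making.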
posted by vogon_poet at 10:48 AM on November 11, 2020 [2 favorites]


I’m disappointed they didn’t shrink the Mac mini to the size of the Apple TV. Or smaller!
posted by Monochrome at 11:29 AM on November 11, 2020


RAM sits in the memory hierarchy between cache and virtual memory. It seems unlikely but just possible that with fast SSD you can do better with a monster cache and less RAM in many cases.
posted by sjswitzer at 11:35 AM on November 11, 2020 [1 favorite]


Power consumption was a major reason why Apple laptops were limited to 16GB of RAM in the past, but I'm not sure to what extent those factors apply to the M1 laptops.
posted by jedicus at 11:55 AM on November 11, 2020 [2 favorites]


I'm open to the possibility that intelligent memory tiering may render DRAM quantity less vital than it was in the past; however, until I see some really comprehensive benchmarks posing the question and providing some very surprising answers, I remain dubious.
posted by seanmpuckett at 12:05 PM on November 11, 2020


Production of the iPhone brings in most of Apple's revenue. Now that all their platforms use many of the same underlying hardware pieces, more or less, I wonder if memory limits are driven less by power usage than by supply chain constraints, such that they would prefer to favor iPhone production.
posted by They sucked his brains out! at 12:06 PM on November 11, 2020 [1 favorite]


re: The 16GB RAM limit.

The RAM is two x64 (64-bit memory bus) chips on the SoC module itself compared to four x32 (32-bit memory bus) on the board with the Intel models. The LPDDR5 that M1 probably uses (Micron MT62) is only available in 8 and 12GB chip variants so far with the 128Gbit/16GB chips still sampling and probably weren't ready in time for this generation.
posted by Your Childhood Pet Rock at 12:17 PM on November 11, 2020 [5 favorites]


One complication to the memory issue is that it's shared between CPU and GPU; there's just the one pool. It will be interesting to see how the benchmarking works for high GPU utilization tests.
posted by jenkinsEar at 12:35 PM on November 11, 2020 [2 favorites]


> OpenGL is used in standards like WebGL, so even if Apple could cut loose all pre-Metal macOS apps, they couldn't nix it without breaking a lot of things.

Chromium (on all platforms) and Firefox (on Windows) already use ANGLE, which translates OpenGL ES (what WebGL is) into a variety of graphics APIs, including Direct3D 9/11 (which has better driver support on Windows), OpenGL (ES), Vulkan, and in-progress support for Metal.

I wouldn't be surprised to see Apple go full scorched earth with their graphics API, kill off everything on macOS except Metal, and switch to ANGLE's OpenGL->Metal themselves for Safari.

Actually, thinking about it, I'm almost surprised they didn't do exactly that for the ARM transition, building ANGLE or something similar into Rosetta 2, and saying native Apple Silicon apps have to use Metal (potentially with ANGLE or MoltenVK).
posted by zekesonxx at 12:50 PM on November 11, 2020 [2 favorites]


So maybe the faster processor or differences in architecture lets them be more aggressive with that, and this is also driving their dubious claim of 16GB M1 equivalent to 32GB Intel.

Or it could be exactly the kind of complete bullshit we've all come to expect from the marketing arms of tech companies.
posted by flabdablet at 12:58 PM on November 11, 2020 [5 favorites]


That's MetaFilter's own hodgman
posted by those are my balloons at 1:01 PM on November 11, 2020 [4 favorites]


I see that things continue to happen in the world of computers.
posted by turbid dahlia at 1:30 PM on November 11, 2020 [5 favorites]


Looking forward to the 16” Pro. I’m still running my 2012 Retina MBP. I run Adobe crap, Logic, CaptureOne, DaVinci Resolve and Cinema 4D all without issue. Given how long this computer has worked flawlessly (one battery replacement) I haven’t been in a rush to upgrade. But also I’m kinda broke so glad it isn’t out yet.

My biggest issue is my (otherwise great) Henge dock; it takes them forever to update to new models.
posted by misterpatrick at 1:34 PM on November 11, 2020 [1 favorite]


Rosetta 2, like the original Rosetta, will emulate a very fast machine by translating blocks of code on-the-fly. That will work for most applications. It won't work for code which touches hardware or contains assumptions about hardware. That's why you can't simply run VMWare/Parallels in Rosetta 2.

The Xcode tools for this are pretty seamless. Unlike the PowerPC->Intel transition, Apple developers have learned (mostly) to abstract out hardware details, and Apple has provided more APIs which are platform-agnostic.

Most large applications take less than a week to transition; it's mostly a recompile of existing code (with a lot of hand-waving for the subtle bugs that arise). For instance, Adobe Photoshop took months to port from PowerPC to Intel; it's already been demonstrated as a working port to M1. Adobe did a lot of code cleanup over the years, which also resulted in faster, more stable code on all platforms.

It's a lovely situation Apple finds itself in.
posted by blob at 1:42 PM on November 11, 2020 [3 favorites]


Apple have come on a bit over the years:
  • 1976: MOS 6502. 8 µm process. 3510 transistors. 1 MHz
  • 2020: Apple M1. 5 nm process. 16 billion transistors. 1.8 – 3.1 GHz
So in 44 years: 1600× smaller, > 4½ million × more complex, > 1800× faster.
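
(Working the ratios out, if you want to check my arithmetic:)

    # Checking the ratios above: MOS 6502 (1976) vs. Apple M1 (2020).
    process_1976, process_2020 = 8e-6, 5e-9          # 8 um vs. 5 nm
    transistors_1976, transistors_2020 = 3510, 16e9
    clock_1976, clock_2020 = 1e6, 3.1e9              # 1 MHz vs. 3.1 GHz peak

    print(f"feature size: {process_1976 / process_2020:,.0f}x smaller")
    print(f"transistors:  {transistors_2020 / transistors_1976:,.0f}x more")
    print(f"clock:        {clock_2020 / clock_1976:,.0f}x faster at peak")
    # -> 1,600x smaller, ~4,558,405x more, 3,100x at peak (1,800x at the
    #    1.8 GHz low end, hence the "> 1800x" above).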

Sophie's simple (30k transistor) processor for a 1980s educational computer (and let's not forget the ARM in Apple's Newton) has come a long way.
posted by scruss at 2:28 PM on November 11, 2020 [7 favorites]


@CDEspinosa (Apple Employee #8, still working at Apple) :
"For comparison, *two* iPhone 11 Pro phone processors contain more transistors than we shipped in all the 6502s in all the Apple IIs we ever made."
posted by JoeZydeco at 2:37 PM on November 11, 2020 [6 favorites]


And I realize now that tweet is dated a bit. The M1 has 16 billion transistors, which is about 76% of the combined transistor count of every 6502 in every Apple ][ ever made. The A13 Bionic (iPhone 11) has 8.5 billion.
posted by JoeZydeco at 3:27 PM on November 11, 2020 [1 favorite]


Will R [statistics software] work on Apple Silicon?

TL;DR Fortran compilers is a fuck.
posted by Monochrome at 3:42 PM on November 11, 2020 [4 favorites]


If someone had said in 1997 that Apple would be the most successful computer maker, have the most valuable operating system(s), and be producing the fastest chips, no one would have believed them. It is like they became what Microsoft, Dell and Intel were at the time. Pretty astounding.
posted by snofoam at 4:05 PM on November 11, 2020 [6 favorites]


Will R [statistics software] work on Apple Silicon?

I'm also wondering how this will affect my conda/python development, LaTeX stuff and general linux based command lining?

I just got the 16 inch macbook pro this year so guess I won't have to worry about it for a few years yet.
posted by piyushnz at 4:33 PM on November 11, 2020


Also, some have complained that Apple hasn’t had any big breakthrough products since the ipad, but it looks like they may have developed the best chips just 10 years after releasing the first chip they designed.
posted by snofoam at 4:43 PM on November 11, 2020


From what I read this chip has even more lookahead and speculative execution than Intel's ones. Have they got a way around the exploits that come with these techniques, or have we basically given up on preventing them?
posted by Joe in Australia at 5:20 PM on November 11, 2020 [3 favorites]


4½ million × more complex, > 1800× faster

Now I want to see how much performance can be had from a 5nm SoC made almost entirely out of 6502 CPU cores and 64K RAM islands.
posted by flabdablet at 6:00 PM on November 11, 2020 [3 favorites]


Did Apple just aggressively buy up all the capacity at the most advanced fabs?
They have form if so. IIRC it's what they did with 1.8" HDDs for the iPod, and 90mm capacitive touchscreens for the iPhone.
posted by rhamphorhynchus at 6:14 PM on November 11, 2020 [1 favorite]


Now I want to see how much performance can be had from a 5nm SoC made almost entirely out of 6502 CPU cores and 64K RAM islands.
posted by flabdablet
Well, there is this, at least
posted by DoctorFedora at 6:18 PM on November 11, 2020 [1 favorite]


Some early numbers.

If those pan out it’s like 50% faster than my early 2019 iMac. In a MacBook Air. Jinkies!
posted by Kyol at 8:38 PM on November 11, 2020


it looks like they may have developed the best chips just 10 years after releasing the first chip they designed.

The M1 is roughly the 35th CPU-like chip they've designed.
posted by GuyZero at 10:12 PM on November 11, 2020


there is this, at least

I wrote some of the code that runs that beast's inter-board network :-)
posted by flabdablet at 11:52 PM on November 11, 2020 [10 favorites]


It seems like Apple is abandoning the pro market. I like my Apple Products, but as a programmer and power user this will probably push me back to the PC.
posted by interogative mood at 7:01 AM on November 12, 2020


It seems like Apple is abandoning the pro market. I like my Apple Products, but as a programmer and power user this will probably push me back to the PC.

I think it probably depends on what one thinks of as the “pro market.” Moving away from Intel in order to take advantage of much more powerful chips that are getting better at a much faster pace seems like a move catered to the kind of professionals who often use macs (design, video production, etc.). The kind of professionals that are using very specific or custom software that never gets updated were not using macs in the first place. I guess for developers it might depend on what you develop. People developing for the Mac are about to get unleashed. I imagine things will be clearer by the time the whole line is switched over. In a couple years the performance difference could be vast.
posted by snofoam at 7:39 AM on November 12, 2020 [2 favorites]


TL;DR Fortran compilers is a fuck.

Hey, don't blame the 40+ year old tested code. If a vendor wants their silicon used, they rush a compiler out. More worrying is R's reliance on doing something special with NaNs that the ISO FP standard tells you not to rely on, but since it sorta-kinda worked on Intel chips, it was okay up to now.

Some folks closer to Arm Ltd in the UK have suggested that the Apple SoC might only be using the AARCH64 ABI (≅ "instruction set") but not contain any core IP licensed from the company itself. Since Apple have owned PA Semi since 2008, they've had the internal silicon design expertise for a while.
posted by scruss at 7:50 AM on November 12, 2020 [2 favorites]


Given that this is supposed to be a two year transition away from Intel, it isn't surprising that Apple's first products to be transitioned are the consumer friendly ones. Odds are pretty good that there is going to be a healthy market for Apple Silicon compatible applications and hardware in the not too distant future.

Just imagine if Apple had gone the other direction. There would be endless complaints about how Apple doesn't know anything about the pro market because it didn't give pros enough time to get all their ducks in a row... Instead, pros can stick to the stable, tried and true Intel solutions for the time being as the transition is made. Sure, the Intel chips are massively slower and more energy-draining than the consumer-level M1, but superior performance and all-day battery life aren't everything, right? (And given it was just officially announced yesterday, it's kind of mind-boggling how everybody is complaining that their favorite tool/language/application/etc. hasn't been fully transitioned already.)

As a programmer, it looks like it is going to be an exciting time... I don't know how you can look at Apple's push into high performance machine learning in the hardware as not opening new paths. As a bit of a CPU geek, it is glorious to see the Intel/AMD hegemony finally being challenged to innovate again. Maybe Intel will start making fast and interesting chips again, if they can get over their fab problems.

I'm quite curious to see if Apple starts making their own server hardware again, given the potential massive energy savings and performance increases for all of their cloud infrastructure. Given the licensing changes for colocation usage, there might be something in the works...
posted by rambling wanderlust at 8:20 AM on November 12, 2020 [4 favorites]


I ordered one of these 13" MBPs yesterday, fully specced. My (MagSafe era) 2015 MBP is still running like a champ after I upgraded its storage to 1TB in 2015, but I never upgraded the RAM from 8GB, and the way I work often involves having many many programs open and task switching for days on end, so it's been struggling a little lately. Plus I had $2k in professional development money to use by the end of the year that I was going to spend on conference travel, but obviously that ain't happening.

I'm excited for Hodgman. He's been jokily begging Apple to bring him back for years. Also, not only is he MeFi's own (albeit long absent here), a recent JJHO episode featured ColdChef.
posted by deludingmyself at 8:31 AM on November 12, 2020 [2 favorites]


R's reliance on doing something special with NaNs

Well that sounds fun, can you please explain or give a reference for those of us not familiar with R's internals?
posted by Nelson at 9:38 AM on November 12, 2020


Some folks closer to Arm Ltd in the UK have suggested that the Apple SoC might only be using the AARCH64 ABI (≅ "instruction set") but not contain any core IP licensed from the company itself.

I think that's common knowledge. Apple have a perpetual licence to the instruction set, but do their own chip design in-house, and have spent a lot on it, which is one of the reasons it outperforms other ARM vendors. (I remember an examination of one of their early CPUs—the A4, I think—in which the author noted that Apple appear to lay their chips out by hand rather than algorithmically, suggesting a higher than average amount of attention to detail.)
posted by acb at 9:46 AM on November 12, 2020 [3 favorites]


>> R's reliance on doing something special with NaNs

> Well that sounds fun, can you please explain or give a reference for those of us not familiar with R's internals?


Will R Work on Apple Silicon?
R’s NA for floating point numbers is represented using NaN with a special payload value. NaNs that originate from computations not involving NA have a different (e.g. zero) payload, so can be distinguished from NA. NaNs are often passed to computations inside R without explicit checks and the same happens inside package code and external numerical code, which have no idea about R’s NA concept nor representation.

The IEEE 754 standard for floating point arithmetics does not mandate how NaN payloads should be propagated through computations.
posted by ASCII Costanza head at 9:48 AM on November 12, 2020 [4 favorites]
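
To make the payload business concrete, here is a small Python sketch of the general idea (a sketch only: it uses a quiet NaN and the payload value 1954 that R is usually described as using for NA, though R's exact bit pattern differs in detail):

    import math, struct

    def nan_with_payload(payload):
        # build a quiet double NaN whose mantissa carries an arbitrary payload
        bits = 0x7FF8000000000000 | payload
        return struct.unpack("<d", struct.pack("<Q", bits))[0]

    def payload_of(x):
        # read back the low 51 mantissa bits
        return struct.unpack("<Q", struct.pack("<d", x))[0] & 0x0007FFFFFFFFFFFF

    na = nan_with_payload(1954)
    print(math.isnan(na), payload_of(na))              # True 1954
    print(math.isnan(na * 1.0), payload_of(na * 1.0))  # still a NaN, but IEEE 754
    # does not require the payload to survive the multiply, so whether the
    # second line still shows 1954 depends on the CPU/FPU.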


Discussion on the R + Fortran vs. AS issue at Hacker News
posted by ASCII Costanza head at 9:49 AM on November 12, 2020 [1 favorite]


Apple appear to lay their chips out by hand rather than algorithmically

To be clear, this means the layout was hand-optimized, but it's no longer possible to lay out a chip totally by hand like they did with the 6502. The A4 has 149 million transistors, including 512KB of L2 cache, which no one is going to lay out by hand. They probably tweaked some of the cores.
posted by GuyZero at 1:34 PM on November 12, 2020


imagining a guy with a really tiny set of tweezers and a damp, furrowed brow
posted by DoctorFedora at 2:05 PM on November 12, 2020 [4 favorites]


PA Semi was a hand layout house like AMD and Intel and I understand this tradition has been very much continued under Apple. Hand layout doesn't mean every transistor is touched. A lot of the transistors will be caches or cores with repeated blocks so the design of certain cells will be hand laid out and then repeated. The point is they don't just pull in IP blocks or use whatever is in the automated design tool's IP library for a particular circuit.
posted by Your Childhood Pet Rock at 2:36 PM on November 12, 2020 [1 favorite]


Yeah, that NA thing in R sounds like it's going to deliver nice surprises for some time to come.
A number of tests for R and recommended packages have failed for what looked like a platform-specific reason, but it turned out that they all failed for the same reason: surprising propagation of NaN payload, where e.g. NA * 1 is NaN.
posted by chortly at 4:50 PM on November 12, 2020


imagining a guy with a really tiny set of tweezers and a damp, furrowed brow

No, it's Apple, it would be child labourers working at really big semiconductor looms in a rainforest.
posted by Joe in Australia at 5:00 PM on November 12, 2020 [1 favorite]


No, it's Apple, it would be child labourers working at really big semiconductor looms in a rainforest.

Come on, fair go. Apple has done more and has been stricter and more transparent about rooting out child labor than any other big company in Silicon Valley.
posted by Your Childhood Pet Rock at 5:19 PM on November 12, 2020 [2 favorites]


Fair enough, I confess to not keeping up with this. But I do like the mental image, minus the child labourers.
posted by Joe in Australia at 5:27 PM on November 12, 2020


When you're dealing with 5nm transistors, the nimble fingers of a small child really do make a difference.
posted by acb at 1:55 AM on November 13, 2020 [1 favorite]


Only a child's hand can polish the inside of a 5nm transistor.
posted by Thorzdad at 5:33 AM on November 13, 2020 [1 favorite]


A claim that from Big Sur onwards, macOS has a program that phones home to Apple, with the hash of every program you run in an unencrypted payload, and that this bypasses system-level firewall/VPN APIs and is not blockable, allowing everyone from the spooks and cops to adtech companies that pay ISPs to put sniffers on their networks to track your activity patterns.
posted by acb at 6:10 AM on November 13, 2020 [2 favorites]


That is definitely not a good look. Hopefully they'll explain it, and secure it. On the other hand, it means Apple has a malware kill switch that can't be disabled. Something they've had on iOS since they opened the App store. Definitely more of the "walled garden" vs "wilderness" mindset at play. And Metafilter can do without that argument again. And also this weak-ass attempt at humour re child labour.
posted by seanmpuckett at 6:34 AM on November 13, 2020 [2 favorites]


IT at work has put out a "please don't upgrade until we know what this new OS version does" email.
posted by octothorpe at 6:52 AM on November 13, 2020


It will be interesting to monitor how much effort Apple puts into making this tracking facility hard to block at the LAN gateway. If they wanted to be complete pricks about it they could run it over the same addresses and ports as functionality like the App Store that most people would want to leave working.

If it's just some well-defined address and port combination, I would expect router manufacturers to be exposing blocking UI fairly swiftly.
posted by flabdablet at 7:08 AM on November 13, 2020


not sure but the apple subreddit supplied this fix:

Open /etc/hosts (sudo nano /etc/hosts)
Add in 0.0.0.0 ocsp.apple.com
Run sudo dscacheutil -flushcache
posted by lazaruslong at 7:17 AM on November 13, 2020 [5 favorites]


Come on, fair go. Apple has done more and has been stricter and more transparent about rooting out child labor than any other big company in Silicon Valley.

There's still the whole relying on ultracheap Chinese labor thing, which is ubiquitous among tech companies yes, but still definitely A Problem for Apple.
posted by JHarris at 7:21 AM on November 13, 2020


It's wild that Apple apparently went through a lot of trouble to make the OCSP check unstoppable by user VPN or firewall software but left it susceptible to modifying /etc/hosts. If that's true it makes me all the more leery of how (in)secure the rest of the OCSP pipeline is.
posted by jedicus at 7:25 AM on November 13, 2020 [3 favorites]


Beyond the privacy issues, I'm surprised that the malware check data are sent unencrypted.
posted by They sucked his brains out! at 7:28 AM on November 13, 2020


IT at work has put out a "please don't upgrade until we know what this new OS version does" email.


Of course, the OS has been available for months in public betas.
posted by pwnguin at 11:40 AM on November 13, 2020


Beyond the privacy issues, I'm surprised that the malware check data are sent unencrypted.

To be fair, this is the same company that considers epcip.jcdfx.lqkqb.nwqbx.xkuay too weak for an AppleID account password but still - still! In 2020! I just checked! - rates the strength of AppleID1 as "moderate".
posted by flabdablet at 11:42 AM on November 13, 2020 [5 favorites]


The ocsp.apple.com domain name worries me a bit. It would be good to know whether redirecting that also nobbles the OS's certificate revocation checks.

I'm also wondering whether there's a bit of a beat-up going on here. If the "application hash" traffic of concern actually is OCSP requests and they are indeed going unencrypted then yes, that's a privacy leak every bit as severe as Jeffrey Paul says it is, but its purpose will be to check whether or not the app you're about to run has had its certificate revoked. Given the curated design of the App Store ecosystem, that's a reasonable purpose and Apple isn't being sinister and intrusive just for shits and giggles.

Same with the business about backing up what are supposed to be end-to-end device-local private keys to iCloud. That reads far more like a careless cock-up than a deliberate back door to me, though yes, if it really is doing that then that absolutely is exploitable as a back door. I would have expected all the secret stuff to be held in the on-device Secure Enclave though, not just left lying around where iCloud can get at it.

Apple security is a really weird mix. You have stuff like the Secure Enclave that really does appear to be capable of enforcing privacy even against state-level actors, and then all these entry-level fuckups like broken password strength policies and key leaks and whatnot, plus all the jailbreaks people keep finding for their boot loaders. It's almost like they're too popular to need to care much.
posted by flabdablet at 12:56 PM on November 13, 2020 [1 favorite]


If you have iCloud backups, there's probably a copy of your personal data on some server in Utah, splayed, indexed and algorithmically catalogued, all the better in case you turn out to be connected to al-Qaeda or Antifa or otherwise of interest. Perhaps they also let the FBI go fishing for paedophiles in it, and/or the IRS for tax cheats, with the proviso that if they catch any, they parallel-construct a plausible alternative story for how they did it.

If you're in China, it is a given that the Chinese Communist Party's security organs have all your private data, and they won't even bother to parallel-construct a pretext if they catch you changing your password to WinnieThePoohSucks8964 or something.
posted by acb at 1:08 PM on November 13, 2020 [2 favorites]


If the "application hash" traffic of concern actually is OCSP requests and they are indeed going unencrypted then yes, that's a privacy leak every bit as severe as Jeffrey Paul says it is, but its purpose will be to check whether or not the app you're about to run has had its certificate revoked or not. Given the curated design of the App Store ecosystem, that's a reasonable purpose and Apple isn't being sinister and intrusive just for shits and giggles.

Apple could instead send the hashes of revoked apps to the client and not "leak" their app usage and location data (even if it's only to Apple) by the simple tactic of not collecting it in the first place. It's how virus scanning works.

Honestly, how many people do you think would opt in to a statement to the effect of "Do you grant Apple permission to collect what apps you open, when and where, for security purposes?"
posted by Mitheral at 3:03 PM on November 13, 2020 [4 favorites]
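
For what it's worth, the client-side scheme Mitheral describes is simple enough to sketch (hypothetical file and path names; this is the suggested alternative, not Apple's actual mechanism, which keys off developer certificates): the client pulls a revocation list on a schedule and every launch-time check happens locally.

    import hashlib

    # revoked.txt: one SHA-256 hash per line, refreshed periodically from the
    # vendor; the user's own launch history never leaves the machine.
    with open("revoked.txt") as f:
        REVOKED = {line.strip() for line in f if line.strip()}

    def is_revoked(app_path):
        h = hashlib.sha256()
        with open(app_path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest() in REVOKED

    print(is_revoked("/Applications/SomeApp.app/Contents/MacOS/SomeApp"))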


30 percent, same as in town.
posted by pwnguin at 3:49 PM on November 13, 2020


I'm reading this post on my 6-year-old Microsoft computer, and it appears that I won't be replacing my machine with any of these.
posted by CRESTA at 4:44 PM on November 13, 2020


I upgraded my personal laptop to Big Sur and :shrug:. The corners of things seem to be more rounded and there's some sort of new control panel thing that I'll never use next to the clock. It didn't break anything, so there's that.
posted by octothorpe at 4:51 PM on November 13, 2020


So Big Sur has exactly one new feature that means a lot to me, and I will be using it ten times a day.

In macOS, document title bars have a little “proxy icon” that, to the untrained eye, just sort of looks like a little decorative icon for that file type. However, if you click and hold for half a second on it, you can then drag it to wherever and manipulate the file itself directly without having to navigate to it in Finder.

I do this ALL THE TIME to drag and drop Word files into Slack in order to upload them. It’s amazing.

Anyway, Big Sur has a new feature related to this. Ordinarily, you have to briefly click and hold before dragging, to indicate you aren’t just trying to move the window around real quick. In Big Sur, if you hold the shift key, you can immediately start dragging the proxy icon without the wait, because the shift key indicates to the system that you’re doing that on purpose.

I will use this ten times a day, and I am so pumped to have it.
posted by DoctorFedora at 1:16 AM on November 14, 2020 [5 favorites]


Speaking of speed boosts... The new version of Pixelmator Pro sees a 15x performance increase for their ML Super Resolution function on M1 based machines.
posted by rambling wanderlust at 4:07 AM on November 14, 2020 [2 favorites]


sees a 15x performance increase for their ML Super Resolution function on M1 based machines.

Can someone explain why this is? I guess I fell asleep during "Computer Architecture" at school. I get broadly that CPUs can optimize for certain types of code execution but I'm unsure why and how I as a developer could help take advantage of this. Or more importantly how I know my code is being executed efficiently. I'm assuming that the hardware itself decides what instruction sets to send to what part of the processor? Or does the compilation process give the hardware hints?
posted by geoff. at 12:43 PM on November 14, 2020


Sure. The M1 (like the A chips it is based on) has an enormous vector co-processing engine tuned for evaluating neural network models. Intel chips don't.
posted by seanmpuckett at 12:59 PM on November 14, 2020 [2 favorites]


I get broadly that CPUs can optimize for certain types of code execution but I'm unsure why and how I as a developer could help take advantage of this. Or more importantly how I know my code is being executed efficiently. I'm assuming that the hardware itself decides what instruction sets to send to what part of the processor? Or does the compilation process give the hardware hints?

This is a massive oversimplification but should be enough to get you started.

So a lot of ML involves neural networks, which basically means taking an input, pushing it through hidden layers, and coming up with an output. The hidden layers are basically sets of numbers (weights) that get multiplied by the inputs, with a bias added, and the result gets passed on to the next layer.

If we want to make this happen quickly we need one type of machine: a machine that can multiply vectors and matrices really quickly with an add at the end. This is what you would commonly call a "tensor core". A tensor core can take a matrix of a certain size, multiply it by another matrix of a certain size, and then add a third matrix to get a result. Because matrices can be split up almost arbitrarily, you can break a huge matrix into lots of little submatrices and have this operation carried out on each piece.

What a program will normally do is load up a special register with a packed representation of a matrix. So if you wanted to work with 4x4 matrices of 8-bit integers, you would load one register with a 16-byte value representing the first matrix, a second register with a 16-byte value representing the second 4x4 matrix, then a third register with a 16-byte value representing a third 4x4 matrix to add in. Then you issue another instruction that tells the tensor core to do its thing, and it rapidly does the math internally and spits out a 32-byte value, which is basically a 4x4 matrix of 16-bit integers.

Now, in the case of machine learning APIs, this is normally abstracted away from the developer. The developer provides the inputs and the model, while the library finds the fastest execution method (tensor cores, GPU, CPU SIMD as a fallback), subdivides the matrices appropriately, then runs them through whatever execution path is chosen, one after another, until the full operation is complete.
posted by Your Childhood Pet Rock at 2:01 PM on November 14, 2020 [7 favorites]
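
A toy NumPy version of the 4x4 fused multiply-add described above, just to make the shape of the operation concrete (a sketch only; real units typically accumulate into a wider type, so int32 is used here to avoid overflow):

    import numpy as np

    rng = np.random.default_rng(0)
    # three 4x4 tiles of 8-bit integers, the block size from the comment above
    A = rng.integers(-128, 128, (4, 4), dtype=np.int8)
    B = rng.integers(-128, 128, (4, 4), dtype=np.int8)
    C = rng.integers(-128, 128, (4, 4), dtype=np.int8)

    # one "tensor op": D = A @ B + C, done as a single fused step in hardware
    D = A.astype(np.int32) @ B.astype(np.int32) + C.astype(np.int32)
    print(D.shape, D.dtype)   # (4, 4) int32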


A deeper examination of the macOS OCSP phoning-home behaviour; it appears that (a) it doesn't send a hash of the application's certificate but rather its developer's (so a MITM profiling you wouldn't know whether you're using, say, Microsoft Word, Teams or Visual Code), and (b) it doesn't phone home every time an application is launched, but only if that certificate hasn't been checked for a while. So it's not quite as bad as it sounds, though OTOH it is still unencrypted and eavesdroppable.
posted by acb at 4:04 PM on November 14, 2020 [6 favorites]


(tensor cores, GPU, CPU SIMD as a fallback)

Yeah, so I'm used to specifying CUDA vs CPU; I just didn't know if, with the M1 chip, we were closer to abstracting that out so that you don't have to specify. Right now, as far as I'm aware, it's pretty hardware-dependent, to the point that if I'm using tf I have to know the architecture as part of the method call for some operations. I think that's been cleaned up a bit, but I do remember passing in something along the lines of 4GPU when using CUDA. I guess we're not at the point where the processor sees a bunch of matrices coming in ready for multiplication and knows how to execute them. That may seem naive, but you're right: as a developer I'm abstracted away, looking at an API, and I just know that if I offload certain steps to a server with a bunch of A100s it trains faster. I can pretend I know more about the underpinnings, but in reality you could be telling me it runs on a magic spaceship; I just know it returns results fast.
posted by geoff. at 7:50 PM on November 14, 2020 [1 favorite]
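
For reference, the explicit device placement being described looks something like this in TensorFlow 2.x (standard tf.device / list_physical_devices calls; whether an M1's GPU or Neural Engine ever shows up in that list depends on Apple shipping the plumbing, which is an open question here):

    import tensorflow as tf

    # today the developer still picks the device by hand
    gpus = tf.config.list_physical_devices("GPU")
    device = "/GPU:0" if gpus else "/CPU:0"

    with tf.device(device):
        x = tf.random.normal((1024, 1024))
        y = tf.matmul(x, x)

    print(device, y.shape)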


I just didn't know if with the M1 chip we were closer to abstracting that out so that you didn't have to specify

Wouldn't that be mainly a library design issue rather than a hardware design one?

I'm also not super persuaded that abstracting these things out is always going to be a good thing; that's going to depend pretty sensitively on what your aims are, as a library user.

Libraries abstract certain details of the hardware away so that the application layer can be designed independently of them, but this process is itself a tradeoff in that it also hides the intent of the application layer away from the hardware.

If you want your code to drag the most performance possible out of any given chunk of hardware, there's almost always some benefit to be had from knowing how the underlying hardware is going to go about getting your application-level work done. Sometimes all it takes is a tiny change in expected scope at application level to make a huge difference in what that application is able to achieve when run on any given hardware platform. If the libraries in between are deliberately designed to encourage the application designer to treat the hardware like a magic spaceship, it can be easy to miss that kind of opportunity.

Abstraction is a really really good tool to help us understand how systems work, but going too far with it can pose almost insurmountable obstacles to understanding how they fail and, more importantly, how likely they are to fail. And to my way of thinking, hardware performing at well below the rate it's capable of should certainly count as a kind of failure.
posted by flabdablet at 12:57 AM on November 15, 2020 [1 favorite]


I wouldn't disagree that it should be a library issue, but as of right now, popular GitHub frameworks (or whatever you want to call them) that consume TensorFlow are littered with issues that stem from having to specify GPU or CPU. I think more up-to-date libraries may address that, and if I were designing software I'd make sure to run it on a CPU as well as a GPU so I didn't bake in assumptions, but the fact that some of these libraries come from pretty big names suggests it's a larger problem.

Here is one I randomly found from Facebook research, "Inference on CPU for models with Deformable conv layer."

I get what you're saying and I agree in theory, but when there are systemic problems with this in major libraries from well-funded companies (and it is not limited to Facebook), there's got to be a better way. If you do anything related to ML on a MacBook Pro without an eGPU this becomes apparent. I have workarounds, like simply running the code on a GPU-capable machine, but I've tried fixing these problems in code and it is never as simple as swapping a few configs around.

I'd hate to see further bifurcation with the M1. I guess we're already there with having to compile against ARM -- but I could easily see things like "Only works on M1, MacBook Pro Model 2021." That seems like a step backward.
posted by geoff. at 10:00 AM on November 15, 2020 [1 favorite]


If you're using Core ML it'll automatically use whatever is the best solution, whether it be the neural engine/tensor core, GPU, SIMD, or plain scalar FPU. The problem is that each fallback is roughly an order of magnitude slower than the one above it.
posted by Your Childhood Pet Rock at 10:10 AM on November 15, 2020 [2 favorites]


Not sure what to make of this, I haven’t had a chance to look at this closely: Apple Silicon M1 Emulating x86 is Still Faster Than Every Other Mac in Single Core Benchmark
posted by 1970s Antihero at 2:38 PM on November 15, 2020 [4 favorites]


If Parallels and VMware can put together stable x86 emulation it won't matter whether Microsoft releases the ARM version of Windows to the public... Seriously, what the hell have Intel and AMD been doing all these decades that the emulation is faster?
posted by rambling wanderlust at 3:34 PM on November 15, 2020 [1 favorite]


The big difference is that CISC and RISC have been converging in how their microarchitectures work for decades. Early ARM chips had only a couple dozen different instructions and executed them in order, which meant that every time you emulated CISC with RISC you sat there burning cycles waiting for multiple RISC instructions to finish in place of a single CISC instruction.

These days Apple Silicon, ARM, and x86 are all more CISC in the streets, RISC between the sheets. There's no single rigid pipeline anymore. A modern CPU will typically have dozens of instructions in flight at any one time. They get decoded into micro-operations, and there can be anywhere from just over 200 (Skylake/Zen) to 630(!) (M1) micro-operations in flight. These instructions are allocated to different execution ports which handle different kinds of micro-ops.

Instead of a pipeline, CPUs these days have execution ports that do various things: integer math, loads and stores to/from memory, address calculation from operands, SIMD floating point (which also handles scalar x87 code), and whatever other features the chip maker wants to add to the architecture. The CPU has special circuitry that looks for code that doesn't depend on results from previous operations and will dispatch that code in parallel, filling up as many execution ports as possible.

With the two microarchitectures looking so similar under the hood, ARM starts gaining because it doesn't spend its time twiddling its thumbs waiting for instructions to retire like it would with an ARM7TDMI. As long as the code has already been translated and cached (which it normally will be), the runtime performance doesn't even blink. The CPU is doing it all under the hood, much as an x86 would, except that Apple Silicon has many more execution ports and way more instructions in flight. That leading-edge lithography lets them stuff a lot more of everything onto the chip, so Apple Silicon can keep its execution ports more active and deliver more single-threaded performance than the chips that run the code natively.
posted by Your Childhood Pet Rock at 3:58 PM on November 15, 2020 [9 favorites]


Anyway, Big Sur has a new feature related to this. Ordinarily, you have to briefly click and hold before dragging, to indicate you aren’t just trying to move the window around real quick. In Big Sur, if you hold the shift key, you can immediately start dragging the proxy icon without the wait, because the shift key indicates to the system that you’re doing that on purpose.

I hate to break it to you, DoctorFedora, but I just tested this and found the same feature is present in my very much not Big Sur version of MacOS. It's a good feature! But it isn't new.
posted by vibratory manner of working at 12:51 AM on November 16, 2020 [4 favorites]




Jinkies. I was kind of worried due to the lack of specificity in the charts in the keynote that it would be some weaksauce like "The M1 is faster than the Iris Plus IGP in the old Air" which... I mean, for a first effort I guess, sure? But getting 1050ti performance out of a 10W package? Jinkies, I say. I mean, if I remember correctly the AMD 2400/3400g APUs were targeting roughly 1050ti performance, and they're 65W chips.
posted by Kyol at 8:37 AM on November 16, 2020 [1 favorite]


I think realistically, there's not going to be any good support for using Apple's chips for numerical/ML stuff unless they pay someone to add it to open source libraries themselves. Doesn't seem like the kind of thing people could do effectively without Apple cooperation.

But Apple can't even get it together to correctly document and maintain their existing Accelerate library for linear algebra -- so much so that numpy and scipy had to drop support for it. So I'm doubtful unless there's a change in priorities.
posted by vogon_poet at 9:13 AM on November 16, 2020 [3 favorites]


These security checks have never included the user’s Apple ID or the identity of their device. To further protect privacy, we have stopped logging IP addresses associated with Developer ID certificate checks, and we will ensure that any collected IP addresses are removed from logs.

In addition, over the next year we will introduce several changes to our security checks:

•  A new encrypted protocol for Developer ID certificate revocation checks
•  Strong protections against server failure
•  A new preference for users to opt out of these security protections

posted by They sucked his brains out! at 3:34 PM on November 16, 2020 [1 favorite]


I'd upgrade my Mac Mini for 1050-level performance, so, wow.
posted by GuyZero at 4:30 PM on November 16, 2020


Anyway, Big Sur has a new feature related to this. Ordinarily, you have to briefly click and hold before dragging, to indicate you aren’t just trying to move the window around real quick. In Big Sur, if you hold the shift key, you can immediately start dragging the proxy icon without the wait, because the shift key indicates to the system that you’re doing that on purpose.

I hate to break it to you, DoctorFedora, but I just tested this and found the same feature is present in my very much not Big Sur version of MacOS. It's a good feature! But it isn't new.
posted by vibratory manner of working
WHAAAAAAAAAAAA

well… great!
posted by DoctorFedora at 6:32 PM on November 16, 2020 [5 favorites]


Engadget has just reviewed a Macbook Air with the M1 and were highly impressed by it.
posted by any portmanteau in a storm at 10:03 AM on November 17, 2020 [1 favorite]


The Ars Technica M1 overview is reasonably interesting as well, and obviously AnandTech has one on the Mini too. Long story short, the reviews seem to be "holy shit".
posted by aramaic at 10:24 AM on November 17, 2020 [2 favorites]


Metafilter user #1 shows off his M1.
posted by Nelson at 4:04 PM on November 17, 2020 [3 favorites]


Reading those benchmark articles is cool, although it's depressing that these new chips will be stuck running "macOS." (Yeah I'm cranky, but these days, whenever I see some slick marketing name for new technology, like "Firestorm" or "Icestorm" or "Metal," I feel a very strong urge to throw scarequotes up around it.)
posted by JHarris at 4:43 PM on November 17, 2020 [1 favorite]


(Especially if it has funny capitalization. But then, over a decade ago I wanted to say X-box instead of Xbox. Marketers, you don't get to bend the rules of proper grammar to your benefit dammit.)
posted by JHarris at 4:44 PM on November 17, 2020


You can always reanalyze it as "Mac OS" if it makes you feel better - it's retro!
posted by vibratory manner of working at 12:49 AM on November 18, 2020 [2 favorites]


its depressing that these new chips will be stuck running "macOS."

I'm sure it won't be too long before you'll be able to run Debian on one, if you want.
posted by flabdablet at 12:53 AM on November 18, 2020 [2 favorites]


I'm not so sure Linux will run natively on this hardware; it depends on whether Apple signs any other boot code. So far they've said no. See this previous comment.

(As always: you don't own most modern hardware, you merely lease the right to run the code that some company has decided is OK to run on it.)
posted by Nelson at 8:38 AM on November 18, 2020 [4 favorites]


So it seems Apple is actually investing in developing versions of TensorFlow that run on the new M1 chip.
posted by vogon_poet at 1:16 PM on November 18, 2020 [4 favorites]


I'm sure it won't be too long before you'll be able to run Debian on one, if you want.

Can you provide us with the reason you're sure about this? Is the M1 open for development for other Operating Systems?
posted by juiceCake at 4:08 PM on November 20, 2020 [2 favorites]


They at least built virtualization support in. Apparently even the bootloader runs in a tiny instance of macOS, so I probably wouldn’t count on a dual-boot situation. On the other hand, Craig Federighi mentioned in an interview that the ball’s in Microsoft’s court regarding selling ARM Windows to the public, so
posted by DoctorFedora at 4:10 PM on November 20, 2020


Can you provide us with the reason you're sure about this? Is the M1 open for development for other Operating Systems?

Debian variants have been built on FreeBSD kernels. Even if Apple makes it impossible to boot any kernel but theirs on their silicon, as long as there remains some way to sideload arbitrary userland components I would expect running a complete Debian userland on top of a current Apple kernel to be feasible.

Given Apple's past failures to prevent boot loader jailbreaks it's probably mostly a matter of time before the kernel can get replaced as well.
posted by flabdablet at 7:10 AM on November 21, 2020


Also, didn't Apple promise a hypervisor that's usable with VMs and Docker? If so, you'd be able to have Linux guests (and presumably Windows guests once Microsoft release a suitable ARM Windows).
posted by acb at 9:59 AM on November 21, 2020 [1 favorite]


I've been holding at Mojave because a lot of games haven't been updated to run in the post-Catalina 64-bit only world, but apparently Battletech runs just fine on the M1 so I've got some choices to make.
posted by rodlymight at 10:35 AM on November 21, 2020 [1 favorite]


Is Battletech 64-bit Intel code? If not, does this mean that Rosetta 2 on Apple Silicon is the one way of running 32-bit OSX code on post-Mojave OSes?
posted by acb at 11:35 AM on November 21, 2020


I think at some point it must have been updated to 64-bit code without me noticing, though I can't find anything in the patch notes. There definitely used to be disclaimers on its Steam and Gog pages that it wouldn't run on Catalina that seem to be gone now.
posted by rodlymight at 12:30 PM on November 21, 2020


What’s potentially even weirder is that I have a couple games installed on Steam that have the big “this won’t run on your computer!” warning because of dropped 32-bit support, but also, they run perfectly fine anyway
posted by DoctorFedora at 8:54 PM on November 21, 2020 [1 favorite]


If this means my no-longer-updated 32-bit music plugins get automagically converted to ARM64 code, I'm all for it.
posted by acb at 3:10 AM on November 23, 2020


Also on audio plugins: it's possible to use iOS instrument apps as AudioUnit plugins in macOS DAWs on a M1 machine, at least with some fiddling.
posted by acb at 1:40 PM on November 23, 2020 [2 favorites]


Given Apple's past failures to prevent boot loader jailbreaks it's probably mostly a matter of time before the kernel can get replaced as well.

Officially or through hacks? From what I've read about native Linux on the iPad, it has happened in an easy and officially supported way from any of the variants but perhaps my information is out of date. Perhaps it won't be difficult for this processor but I remain entirely skeptical.
posted by juiceCake at 12:51 PM on November 24, 2020


That hasn't happened...
posted by juiceCake at 1:19 PM on November 24, 2020 [1 favorite]


Through hacks, of course. Expecting Apple to provide any official support for stuff outside its own ecosystem is generally not realistic. But I have a great deal of faith in the ongoing ability of Apple's software team to continue undermining the excellent work done by its hardware security people.
posted by flabdablet at 4:09 PM on November 24, 2020


Huh. Apparently if I want 16 gigs of ram in my M1 Mini, I'm not getting it this year, but if I only want 8, it'll be here in about a week. But I want it NOWWW!!! I mean… if I were to buy one.
posted by rodlymight at 6:33 PM on November 26, 2020


Developer runs ARM Windows virtualized on Apple M1 Mac, finds it ‘pretty snappy’

Here's how:

https://patchwork.kernel.org/project/qemu-devel/list/?series=391797
posted by issue #1 at 1:16 PM on November 28, 2020 [2 favorites]


Linus doubts it will get ported.
posted by juiceCake at 1:04 PM on November 29, 2020 [1 favorite]





