The Subprime AI Crisis
September 17, 2024 2:43 PM
Ed Zitron: "I hypothesize a kind of subprime AI crisis is brewing, where almost the entire tech industry has bought in on a technology sold at a vastly-discounted rate, heavily-centralized and subsidized by big tech. At some point, the incredible, toxic burn-rate of generative AI is going to catch up with them, which in turn will lead to price increases, or companies releasing new products and features with wildly onerous rates ... that will make even stalwart enterprise customers with budget to burn unable to justify the expense. What happens when the entire tech industry relies on the success of a kind of software that only loses money, and doesn’t create much value to begin with?"
I read the title and my mind immediately flew to Uber & Lyft, which had their low rates subsidized by VCs and then had to raise them a bit (I believe) but are… still here, and have still mostly replaced cabs.
posted by Going To Maine at 3:10 PM on September 17 [11 favorites]
Yep, added to the list of many comeuppances I hope to live to see.
posted by Sing Or Swim at 3:10 PM on September 17 [2 favorites]
so you're saying $20/mo is the teaser rate??
posted by torokunai at 3:13 PM on September 17 [1 favorite]
Well that explains why bankers all around the world are going all in on AI
posted by chavenet at 3:14 PM on September 17 [3 favorites]
I'm unable to summon any glee here. We've been griping about AI for some time, and that's fine, but as Ed is pointing out here, a lot of companies have made huge bets on AI. And those bets translate into jobs, and those jobs are going to be lost. That effect is going to ripple out, too. It's not going to be just AI guys getting fired; in some companies, other people are going to be laid off to continue trying to support the AI guys. So what's happening with this bubble is that we're teeing up an enormous amount of human suffering. We've moved way beyond "haha this thing doesn't work" straight to "people are going to lose their homes because this doesn't work," and it's not going to be, like, Sam Altman losing his home, it's going to be the low-level employees at the companies who have licensed the technology because they thought that AI was going to provide them a future. That is, the people who will pay for this aren't the sinners.
posted by mittens at 3:22 PM on September 17 [55 favorites]
It does look like banks are joining the vapor gold rush (hmmm, they really spotted that subprime fiasco a few years back), but so what. Zitron lays out a very persuasive piece: AI does nothing well, no one wants it, no one wants to pay for it, it's mind-numbingly expensive, it's built on the theft of other actual human beings' work, it's running out of that work (which will bring the 'training' to a halt), it will likely cause the biggest tech companies to decimate their actually 'valuable' operations to continue burning staggering amounts of resources (i.e., fire a shit ton of regular people and end any development they might do that could have value), and it is rapidly accelerating the scorching of the earth. For what purpose? Chat bots that might 'streamline operations' and 'power growth'?
It seems that the likely (possibly not, but likely) crash will be... significant. And as mittens put it much more eloquently, the people getting fucked when this uselessness comes crashing down ain't the tech lords. Late stage capitalism means they'll be just fine. And if a few million (or more) people need to suffer for that, so be it...
posted by WatTylerJr at 4:17 PM on September 17 [11 favorites]
No, it's fine. AI companies are living examples of fully automated luxury communism. Any talk of money or paying for things is just a kind of tacky joke with them.
I suppose I feel for the people who work at Microsoft or Apple or Google or whatever, who maybe thought they weren't at an AI company, but the people who work at OpenAI are contractors on the Death Star.
posted by surlyben at 4:19 PM on September 17 [11 favorites]
it's not going to be, like, Sam Altman losing his home
yeah well, i dunno about that, man...
posted by slater at 4:23 PM on September 17 [7 favorites]
This is quite a takedown! Pretty interesting for a layperson like me. I didn't make it to the end, but I'm wondering: won't "AI" just be paid for by advertising, like the rest of the industry? Maybe these numbers are higher than can be generated by planting ads on every surface known to humankind?
posted by latkes at 4:26 PM on September 17
I think Ed's falling into the trap that many people here do, as well. "AI" is not LLMs and generative models for images. That's like a tiny fraction of what deep learning brings to the table. It is absolutely revolutionary in robotics (my field), for example.
Similarly, companies (Intel, AMD, and NVIDIA) are using generative models to design chips and circuitry; deep learned techniques perform better than human experts at this point. Protein modeling, drug discovery, climate simulation and forecasting, and on and on. Any field where there is a large amount of numerical data that you need to extract meaning from is impacted by deep learning. I think we are in a bubble of sorts -- nobody's going to be getting funding for a "prompt engineering" startup in two years -- but the fundamental tech and science behind it is real and powerful.
posted by riotnrrd at 4:26 PM on September 17 [43 favorites]
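For anyone who mostly knows "AI" as chatbots, a toy illustration of what riotnrrd means by extracting meaning from numerical data: a one-hidden-layer network fit to a noisy curve in plain numpy. A generic sketch for illustration only, obviously nothing like a production robotics or chip-design model.

```python
# Minimal sketch: fit a tiny neural net to noisy numerical data with
# plain numpy. A toy stand-in for "deep learning on numerical data."
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(x) + 0.1 * rng.normal(size=x.shape)  # noisy target

# One hidden layer: 1 -> 32 -> 1, tanh activation
W1 = rng.normal(scale=0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(2000):
    h = np.tanh(x @ W1 + b1)            # forward pass
    pred = h @ W2 + b2
    err = pred - y                      # gradient of MSE/2 w.r.t. pred
    # backward pass, chain rule by hand
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final MSE:", float((err**2).mean()))  # the net has learned the curve
```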
If boosters/exploiters insist on calling LLMs and image generators "AI," then I'm fine with critics using the term "AI" in turn.
posted by audi alteram partem at 4:33 PM on September 17 [38 favorites]
Another good article by Zitron, and so much déjà vu
Something that surely exists in some form in the Y Combinator handbook for new recruits:
How To Get (More) Rich While Hyping The Latest Distraction
1. Bloviate about some new technology and use the easily excitable media to craft stories for you, practically for free, about how valuable it will be (to investors) and transformative for society (the solutions for whose problems are in many cases already identified but not easily solved by powerpoint decks and sexy hot developer tools, and are less fun or more frictionful or less profit-enabling to implement)
2. Rally countless millions of people behind said technology & exploit their dedicated labor to support and extend the con
3. Watch nonchalantly as huge swaths of them and their families suffer the proven effects of unemployment on their health, wellbeing and productivity, and enjoy the resulting increased leverage over the remaining minority who double their efforts, laboring in fear of you and for your increasing comfort and freedom from burden or consequence
4. Profit!
5. Goto 1
We have seen this so. Many. Times. But I'm afraid Zitron may be right. This could be a big one.
posted by armoir from antproof case at 4:41 PM on September 17 [6 favorites]
Any field where there is a large amount of numerical data that you need to extract meaning from is impacted by deep learning.
Sure, but that's been going on for years already, and is not what all this startup capital is getting invested in.
posted by OnceUponATime at 4:42 PM on September 17 [19 favorites]
What if—hear me out—we pay for the AIs using cryptocurrencies, and then use the AIs to make more cryptocurrencies? The good times will never end!
[Aside: my Mac's predictive text wanted to fill that in as "The good times will never come." This artificial intelligence may be more intelligent than I gave it credit for.]
posted by adamrice at 4:52 PM on September 17 [15 favorites]
>a lot of companies have made huge bets on AI. And those bets translate into jobs, and those jobs are going to be lost.
The very great likelihood is that they're going to be lost either way. Our Techbro Overlords live to lay people off to make line go up, and the whole point of AI is that it'll be able to do things barely acceptably that used to be done well by paid humans. What we're hoping for here is that the guys at the top ALSO lose their shirts; it's the only remaining satisfaction.
posted by Sing Or Swim at 4:52 PM on September 17 [3 favorites]
ChatGPT usage reaches 200 million weekly active users, double what it was in November of last year.
OpenAI says it has 11 million paying subscribers (that's $2B/yr)
I appreciate poking holes in the AI/LLM narrative, but plenty of people seem to be finding utility in these tools?
posted by gwint at 5:06 PM on September 17 [4 favorites]
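Back-of-the-envelope on gwint's figures, for what it's worth (my arithmetic on the numbers quoted above; the plan mix behind them is unknown):

```python
# gwint's figures: 11M paying subscribers, ~$2B/yr, 200M weekly actives.
subs = 11_000_000
revenue_per_year = 2_000_000_000
weekly_actives = 200_000_000

avg_per_month = revenue_per_year / subs / 12
print(f"average revenue per subscriber: ${avg_per_month:.2f}/mo")
# -> ~$15/mo, under the $20 headline price: consistent with a mix of
#    discounted, team, and annual plans (an inference, not a disclosure).

print(f"paying share of weekly actives: {subs / weekly_actives:.1%}")
# -> ~5.5%, i.e. roughly 1 in 18 users pays anything at all.
```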
I think Ed's falling into the trap that many people here do, as well. "AI" is not LLMs and generative models for images. That's like a tiny fraction of what deep learning brings to the table. It is absolutely revolutionary in robotics (my field), for example.
I think the fact that you are referring to what “deep learning” brings to the table is something of a tell about “AI” being the vaporware term. For most of us, the bubble will burst when we stop hearing about LLMs taking all the jobs.
posted by Going To Maine at 5:06 PM on September 17 [9 favorites]
Counterexample: Terence Tao, one of the best math people in the world:
https://mathstodon.xyz/@tao/113132502735585408
" The experience seemed roughly on par with trying to advise a mediocre, but not completely incompetent, graduate student."
[snip]
In https://chatgpt.com/share/94152e76-7511-4943-9d99-1118267f4b2b I gave the new model a challenging complex analysis problem (which I had previously asked GPT4 to assist in writing up a proof of in https://chatgpt.com/share/63c5774a-d58a-47c2-9149-362b05e268b4 ). Here the results were better than previous models, but still slightly disappointing: the new model could work its way to a correct (and well-written) solution *if* provided a lot of hints and prodding, but did not generate the key conceptual ideas on its own, and did make some non-trivial mistakes. The experience seemed roughly on par with trying to advise a mediocre, but not completely incompetent, graduate student. However, this was an improvement over previous models, whose capability was closer to an actually incompetent graduate student. It may only take one or two further iterations of improved capability (and integration with other tools, such as computer algebra packages and proof assistants) until the level of "competent graduate student" is reached, at which point I could see this tool being of significant use in research level tasks. (2/3)
posted by aleph at 5:11 PM on September 17 [11 favorites]
Dude just described "government". Meh.
posted by metametamind at 5:27 PM on September 17
My guess is that the Gartner hype cycle very much applies to the current AI landscape. I place us just a bit past the peak of inflated expectations.
posted by Pemdas at 5:27 PM on September 17 [4 favorites]
One of Zitron’s central points on GenAI is not that it’s useless but that it’s not useful enough, despite the hype, to generate enough revenue to offset the staggering expense. That the VC grift demands enormous returns, and the real returns minus operating costs are unlikely to repay them. That could induce a cascading collapse of the substantial infrastructure costs paid out by the major tech companies, especially with Google already facing some potentially devastating effects of the monopoly trials.
posted by GenjiandProust at 5:29 PM on September 17 [15 favorites]
My workplace is 'piloting' Google's Gemini. A bank, in fact. I'm not remotely involved in any banking activities, so maybe it's just the IT support/software management teams. But anyhow, we are really being encouraged to use it enthusiastically and I just don't understand why. Is Google giving us a discount on the G-Suite to try it out and get ourselves hooked? People seem to use it for a) composing emails and documents, and b) research. It's definitely useful and people seem to like it. But I wonder if my workplace is using us to test the tool that will make some of us...redundant.
posted by kitcat at 5:48 PM on September 17 [7 favorites]
yes. you're training your replacement.
posted by j_curiouser at 5:51 PM on September 17 [25 favorites]
> OMG, I am living for the moment when the emperor is discovered to be living clothing free!
It's more like a bunch of naked emperors are having an orgy and we are forced to watch it.
All day, every day.
posted by donio at 6:03 PM on September 17 [18 favorites]
I think Ed's falling into the trap that many people here do, as well. "AI" is not LLMs and generative models for images.
Yes. You are likely holding a device that has had what the layperson mistakenly calls “AI” providing a large amount of its functionality since the Obama administration. It was a marketing choice not to make a big deal of all the “AI” that was in there, just as it was a different marketing choice to expose much of that functionality in a slightly different way (plus some new stuff, of course) for the device that might be getting delivered to you on Friday.
posted by Back At It Again At Krispy Kreme at 6:04 PM on September 17
" The experience seemed roughly on par with trying to advise a mediocre, but not completely incompetent, graduate student."
This is specifically about OpenAI’s new release, o1, which I don’t think has been discussed here yet. It seems to be the realization of the “q-star” project that there were leaks/rumors/hype about last year, and is basically a model trained to talk itself/GPT through things step by step. The math/code/procedural reasoning benchmarks are really good at the expense of, you guessed it, more compute (theoretically scaling to an arbitrarily large amount at inference time, if I understand the idea correctly).
posted by atoxyl at 6:17 PM on September 17 [1 favorite]
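A minimal sketch of the inference-time-scaling idea atoxyl describes: sample several reasoning chains and take a majority vote. The `sample_answer` stub below is a hypothetical stand-in for a model call; OpenAI hasn't published o1's actual mechanism.

```python
# Majority voting over sampled chains: more compute per query, better
# answers. `sample_answer` is a fake model that's right 60% of the time.
import random
from collections import Counter

def sample_answer(question: str) -> str:
    return "42" if random.random() < 0.6 else str(random.randint(0, 99))

def answer_with_budget(question: str, n_chains: int) -> str:
    votes = Counter(sample_answer(question) for _ in range(n_chains))
    return votes.most_common(1)[0][0]  # majority vote across chains

random.seed(1)
for n in (1, 5, 25, 125):
    wins = sum(answer_with_budget("toy question", n) == "42" for _ in range(200))
    print(f"{n:3d} chains -> {wins / 200:.0%} correct")
# Accuracy climbs with the sampling budget -- paid for in raw compute.
```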
Yep. o1
To get to a "mediocre" grad student at this point...
"...but that it’s not useful enough, despite the hype, to generate enough revenue to offset the staggering expense."
Yes. Seems like we're looking at at least a couple boom/bust cycles. But there does seem to be possibilities on the other end of those cycles.
posted by aleph at 6:20 PM on September 17 [1 favorite]
At my tech company we have been required to take a series of courses around AI, including the use of Copilot for coding, but now they won't pay for any of us to actually have access to Copilot. To be honest I've been perfectly happy with IntelliSense in VSCode (which has been around for years now).
In the summer they had a symposium that I attended briefly and the main thing I remember is some senior marketing person at a major bank saying they thought AI was overhyped, and everyone else kind of looking down and squirming.
posted by maggiemaggie at 6:33 PM on September 17 [5 favorites]
This discussion reminds me of that story Vernor Vinge (I think) did. Don't know if it's an offshoot of that Rainbows End book he did or just a chapter in that book.
Setting: A senior who's just come back into life (for reasons) is being tutored on the new reality of Vernor Vinge's AI + augmented reality + universal connectivity.
She's a middle-aged Asian(?) woman and is being helped by a 14-year-old student at the high school.
There's an exercise called out. It's to come up with a device that will do a fairly complicated thing. The woman is overwhelmed by the technical challenge at first; the kid says something like "Don't worry about that stuff." Then he "throws up" a minimal copy of a design that vaguely does something close to that. Then puts it through a visual simulation that shows several discrepancies. Then he says something like "We can use the AI to tweak the parameters..." and proceeds to show her several iterations getting (slowly) closer to the specified goals. All in a few minutes. After she calms down a little she says she was an engineer and thinks if they... and it works.
Get the sequence? A person that "understands" and tools that support. Vinge seems a good guide on this.
Of course these will replace a *lot* of mediocre Grad students. In a lot of occupations.
posted by aleph at 6:41 PM on September 17 [5 favorites]
I would give a great deal to never read or hear another word from Ed Zitron that does not acknowledge the existence of the open source AI community, the tens of thousands of models on Hugging Face, or the enormous gains in efficiency from their efforts. He's not just laser-focused on LLMs to the exclusion of all other machine learning, he's positively fixated on the billion-dollar blowhards and Silicon Valley hype. That is not all that exists under the sun, not even close, and conflating it with cryptocurrency - a technology of pure waste and zero utility - is simply disingenuous. LLMs may not have sufficient utility to justify the current capital expenditure, but it's far from zero and to claim otherwise is pushing a false narrative.
...and yeah, it's a narrative that people like Ed Zitron desperately need to be true.
Meanwhile, for people that actually want a nuanced, balanced take, there's AI Explained on OpenAI o1: it's a significant improvement in terms of crude common-sense reasoning, but it is highly, highly inconsistent in the first preview release, only a partial implementation at best of the Q* papers, and not at all the kind of thing that is going to justify the investment Altman is demanding, or that any future generation models built along current lines would require. Philip's Discord has a fair number of AI researchers on it, and the SimpleBench test suite he's created with them consistently sees scores of 90% or higher from humans, and under 30% from all models prior to o1. When I say that o1 is a significant improvement I'm specifically referring to it scoring 50% on SimpleBench, and similar gains in other - more abstraction-oriented - tasks. The one thing I don't like - though I do understand - is that SimpleBench is not publicly available for analysis and refinement because Philip doesn't want to give the larger LLM projects an easy target for adversarial training.
At any rate, I do not disagree with the general line of thought that investment has wildly outpaced likely revenue, or that the CXO suites of most large companies building pioneer models are filled with deeply unethical people who should not be allowed to run a popsicle stand. But to say there is no path forward and anyone who disagrees is part of OpenAI's marketing is simply false: RNN-LLM hybrids are going to take actual time and research. In an ideal universe both the hype peddlers and the hipster pundits will mutually self-annihilate in the interim, returning some measure of energy and sanity to the rest of us.
posted by Ryvar at 6:55 PM on September 17 [19 favorites]
Technology aside, this is a stupid analogy because the way that financiers made money on both sides of the last mortgage debacle is not very similar at all to how this is unfolding.
The people who were targeted to invest in subprime mortgages and the way their 'assets' and those of their counterparties were repositioned afterwards is just not what we're going to see next.
I know that's not the point and the point is there's a bubble, but they're not alike and using the term 'subprime' is poor form.
posted by Reasonably Everything Happens at 7:06 PM on September 17 [6 favorites]
My bet is not on this stuff being a bust for the big players, based simply on the observations that
a.) the state of the art is definitely more useful than it was two years ago, and it doesn’t seem we’ve run out of potential improvements
b.) the equivalent of the state of the art from two years ago is (contra Zitron previously) definitely now considerably cheaper
But every empirical indication is that it’s a technology that improves in performance linearly with exponential compute/data investment. That would not normally be a sensible thing to throw money after but for the history of Moore’s law, of computers themselves becoming exponentially more cost effective. So there’s a lot riding on the idea of that continuing (perhaps including the idea that AI will help us do that!)
Obviously the sector as a whole is full of scams, though. The closer the big players come to being able to deliver on anything they are promising, the less the companies that just wrap their product make any sense.
posted by atoxyl at 7:17 PM on September 17 [5 favorites]
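To make "linear gains for exponential investment" concrete, here is the usual power-law form from the scaling-law literature; the constants are made up to show the shape, not measured values from any paper.

```python
# Illustrative only: scaling-law fits take the form loss ~ a * C^(-alpha).
# a = 10 and alpha = 0.05 are invented round numbers, not citations.
a, alpha = 10.0, 0.05

def loss(compute: float) -> float:
    return a * compute ** -alpha

c = 1.0
for _ in range(4):
    print(f"compute {c:>13,.0f}x -> loss {loss(c):.3f}")
    c *= 100  # two more orders of magnitude of compute each step...
# ...multiplies loss by the same constant (100**-0.05 ~ 0.79 here):
# steady proportional gains demand exponential spending.
```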
Two unrelated thoughts.
""AI" is not LLMs and generative models for images. That's like a tiny fraction of what deep learning brings to the table. It is absolutely revolutionary in robotics (my field), for example."
Indeed. We were all oohing and aahing over that neat walking table - the leg design was improved with generative AI. Case in point.
Apart from that, thinking about the impact of a big price rise, and thinking about the banks and other industries that are effectively big bureaucracies going hard - I wonder how much prices have to rise before the prospect of laying off huge numbers of workers becomes unattractive to those kinds of businesses.
posted by i_am_joe's_spleen at 7:31 PM on September 17 [1 favorite]
""AI" is not LLMs and generative models for images. That's like a tiny fraction of what deep learning brings to the table. It is absolutely revolutionary in robotics (my field), for example."
Indeed. We were all oohing and ahing over that neat walking table - the leg design was improved with generative AI. Case in point.
Apart from that, thinking about the impact of a big price rise, and thinking about the banks and other industries that are effectively big bureaucracies going hard - I wonder how much prices have to rise before the prospect of laying off huge numbers of workers becomes unattractive to those kinds of businesses.
posted by i_am_joe's_spleen at 7:31 PM on September 17 [1 favorite]
Fantastic comment, atoxyl.
FWIW I think we're pretty close to the ceiling on pure LLMs - or at least the scaling laws are so not in our favor that the moment of sheer potential waste vs shifting to a better approach is upon us. Llama-3.1 70b finetunings (not even 405b) are now surpassing GPT-4. As much as I hate them, the overall direction OpenAI promised of shifting towards new architectures and hybrid systems was and is the right call. The slew of departures (including Schulman, recently) points to the enormous difficulty in doing so in quarterly-statements-friendly timeframes.
If you want to see someone who actually fully understands the technology and reads the papers wax despondent on the direction taken by the large corporations, the previous AI Explained video lays things out in pretty stark terms. If one is absolutely determined to adopt a negative stance towards technology then for fuck's sake: at least put in the effort to do it right?
posted by Ryvar at 7:33 PM on September 17 [2 favorites]
I kind of keep finding myself thinking about the question of "what trillion-dollar problem does this all solve?" when it comes to the generative ML model hype bubble, and so far a lot of it feels like it sort of tops out at "this can replace hiring someone on Fiverr," which doesn't really justify the amounts of money going into a lot of it.
Granted, there's that ongoing phenomenon where if it's good and useful it's called "ML" and if it's a useless boondoggle being added to software solely to avoid looking like you're "falling behind" then it's called "AI." Whisper is an incredibly useful tool (when it works) for transcribing audio, primarily in English and secondarily in any other given language, though for some reason the v3 model seems to work worse than the v2 model.
There are probably also other useful ML models out there too, though at the moment I'm still just kind of thinking of ChatGPT and how it's mostly good at producing answer-shaped utterances that don't take into account whether they're true or not. I guess I should probably be worried about that, though, because "confidently wrong" is one of the main features I offer.
posted by DoctorFedora at 8:20 PM on September 17 [3 favorites]
> But every empirical indication is that it’s a technology that improves in performance linearly with exponential compute/data investment.
For training, yes. But once you have a trained model, scaling is linear with a slope well below 1.0. And the performance gains have been truly astounding on the inference side. Algorithms alone have led to a 10x reduction in inference cost in 2 years while scaling to much larger data sizes.
posted by constraint at 8:53 PM on September 17 [4 favorites]
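Taking constraint's "10x in 2 years" at face value, the implied annualized rate (simple compounding; the 10x itself is the commenter's claim, not a measured series):

```python
# If inference cost fell 10x over 2 years, the annualized rate is 10^(1/2).
rate_per_year = 10 ** (1 / 2)
print(f"~{rate_per_year:.2f}x cheaper per year")            # ~3.16x
print(f"~{1 - 1 / rate_per_year:.0%} price drop per year")  # ~68%
```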
Just gotta drop this heavy quote....
I am deeply concerned that this entire industry is built on sand.
posted by cosmologinaut at 9:48 PM on September 17 [3 favorites]
that and also gpus are made of silicon lol
posted by cosmologinaut at 9:48 PM on September 17 [3 favorites]
Technology is well known for not getting faster or cheaper over time.
posted by chasing at 10:11 PM on September 17
The underlying purpose of AI is to allow wealth to access skill while removing the ability of the skilled to access wealth.
posted by GallonOfAlan at 10:55 PM on September 17 [28 favorites]
Every major tech player — both in the consumer and enterprise realm — is selling some sort of AI product, integrating either one of the big Large Language Models or their own, invariably running cloud compute on one of the big tech players’ systems. On some level, every single one of these companies is dependent on big tech’s willingness to subsidize the entire industry.
This is such an Ed Zitron statement! I love his anger about this stuff but, like - what major players are we talking about here? Is this the thin ChatGPT integration that some web app boasts about having glued on to their core product? Because that sounds like something a company could get rid of easily. Or are we still talking about Microsoft et al.? Those big players will surely be getting hosed, but that doesn’t sound like what concerns him here. Gimme some names, some faces.
posted by Going To Maine at 11:10 PM on September 17
I think one of the problems is the technology - aka neural networks - is being used to power large language models (LLMs). Neural networks have uses. LLMs are easy to make (by the standards of AI, which is quite far along the technology tree), easy to integrate with other text-based technology (aka almost everything in your phone and laptop), but frequently useless. OpenAI were quick to capture the market and the public eye with ChatGPT, thereby defining the concept of AI as an LLM.
The other problem is that AI as a concept is being used by a flailing tech sector to ask for more investment and demand favourable legislation.
posted by The River Ivel at 11:25 PM on September 17 [1 favorite]
I was talking to a friend in the astronomy field, who was telling me about a tool that can take a newly-developed analysis and apply it to All The Data in All The Fields (hyperbole). Basically generate the kind of perfectly adequate paper that perfectly adequate grad students write.
The problem is that if you take away those paper-writing opportunities, then those grad students never get the skills they need to become perfectly adequate faculty members and researchers. It's eating the seed corn in much the same way that tv writers have been complaining about that industry doing.
People who aren't yet good at Thing need a chance to practice Thing, so they can get good at Thing. Barring a Star Trek situation, machines won't get good at Thing in the way society needs, even if there are short term productivity gains
posted by DebetEsse at 11:30 PM on September 17 [25 favorites]
I think this may be true short term, especially with websites that just awkwardly shove in a chatbot to help you search their docs or settings menu, or companies adding weird AI art features next to the GIF button and social media buttons, for you to tweet your plumbing payment to your friends.
It's definitely going to burn a lot of people. However, I think AI is a credible threat to a lot of existing jobs, and as a software developer, we have been seeing consistent improvements in AI with more money and training time. There are bottlenecks, and maybe one will bring about another AI winter, but I think most centrist, left-leaning people are ignoring the real risks this has to the economy. I don't think the answer is to grift or blindly buy AI products to improve your marketability. But Marc Andreessen isn't going to advocate for UBI. His milieu thinks it's cute to joke that welfare is America's UBI already. The people at the reins are not careful or serious people, and people are going to get hurt by their mistakes or their will, with even more obscene wealth and soft power. I don't think AI is as scary as global warming, but I do think it's a possibility worth fighting against or preparing for.
We laughed at bitcoin, and you can now get bitcoin from bodega ATMs, and Wall Street firms let institutional investors buy it. This is silly, and I doubt healthy, enduring fortunes will be made by people buying in now. And there are still massive warehouses burning through insane amounts of electricity to do nothing but fight to sign a block and get some arbitrary scrip. Proof of work isn't even necessary; cryptocurrencies have adopted proof of stake for the same richest-guy-takes-all stakes, without the European-nation-sized carbon dioxide footprint. But the people who run bitcoin like their entrenched gigs.
It's not enough to just laugh at technologies we don't want deployed. Probably tech workers should have unionized decades ago, for one.
posted by MuppetNavy at 11:46 PM on September 17 [8 favorites]
If Zitron is wrong, a lot of people who don't work in AI will lose their jobs because AI will replace them, and none of the money that gets "saved" - or more accurately, added to the TechBro and Finance megapiles - will be shared. And if he's right, then the people that would have lost their jobs will bail out the too-big-to-fail banks that are betting on their replaceability.
I get there are AI Positive types on here who like to say, yeah, but it's not just LLMs, and the like. They may even be right, for what it's worth. But the point isn't what the technology is, it's what those tech owners want to do with it. And that stinks either way
posted by onebuttonmonkey at 12:25 AM on September 18 [9 favorites]
@riotnrrd
I'm glad someone else is saying it. It might not be so obvious to the layperson who sees chatbots and deepfakes, but the power of deep learning - even before the current, more showy offerings - is truly revolutionary. The stuff I was taught at the beginning of the 2000s doesn't even compare. I won't be particularly surprised if this current round fizzles, but I think the next will be sooner and perhaps "the one".
posted by DeepSeaHaggis at 12:47 AM on September 18 [2 favorites]
OMG, I am living for the moment when the emperor is discovered to be living clothing free!
schadenfreude is best served naked.
posted by unearthed at 2:25 AM on September 18 [1 favorite]
"...with exponential compute/data investment. That would not normally be a sensible thing to throw money after but for the history of Moore’s law"
You *have* seen all the "Moore's law is dead" articles? For good reasons. We've been at the edge of "no more" for a while now. More and more articles about having to go parallel because we can't go smaller/faster/less-power any more. We (maybe) have one or two nodes left to go, at exponentially increased cost, and then we're looking at places where individual atoms matter. We can manipulate individual atoms, but not at scale.
posted by aleph at 3:53 AM on September 18 [1 favorite]
mm. ChatGPT 3.5 wasn’t quite worth it to me. ChatGPT 4 became a (somewhat frustrating) part of my daily work. ChatGPT 4o is now a part of every project I work on. I used the o1 preview last week until I ran out of tokens. Looks like a gateway drug to me.
People being dubious about tech is part of any new big thing (like Krugman and the Internet, for example), and sometimes they’re right. But maybe they should wait a bit longer before performing burial services.
posted by shipstone at 4:16 AM on September 18 [2 favorites]
Each time someone says the phrase "replace workers" with respect to AI, there should be a giant asterisk. There is no evidence that AI can replace workers. Meanwhile, the tech companies are lying about the cost. Data center emissions are probably 662% higher than big tech claims.
posted by ftrtts at 5:02 AM on September 18 [6 favorites]
The underlying purpose of AI capital is to allow wealth to access skill while removing the ability of the skilled to access wealth.
Started Vonnegut's Player Piano audiobook in my car this weekend. Being early SF, it's a slow slog but it does have things set up this way.
posted by torokunai at 5:39 AM on September 18 [5 favorites]
"what trillion-dollar problem does this all solve?"
for me it's language study and being able to get nuanced language questions answered 'correctly' (more or less).
The non-believers say 'it's only language models' but AFAICT 99% of the useful stuff in my own meat programming got there via language: hundreds of textbooks, several sets of encyclopedias, dictionaries, leafing through ~50 years of collected National Geographics (with pictures!), 30 years of surfing the internet, hundreds (thousands?) of non-fiction books.
This next-token generative stuff shouldn't be so good at what it's doing now . . . then again I guess it's somewhat analogous to A* (chatgpt session link) if you squint . . .
posted by torokunai at 5:57 AM on September 18 [1 favorite]
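Since A* came up: for reference, the textbook algorithm in miniature. A standard grid-search sketch, with no relation to OpenAI's rumored "q-star" beyond the name.

```python
# Textbook A*: best-first search ordered by cost-so-far g plus an
# admissible heuristic h (Manhattan distance on a grid of '#' walls).
import heapq

def astar(grid, start, goal):
    """Return the shortest path length from start to goal, or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]  # (g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return g
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] != '#':
                ng = g + 1
                if ng < best_g.get((nr, nc), float('inf')):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = ["....#....",
        ".##.#.##.",
        ".#.....#.",
        ".#.###.#.",
        "....#...."]
print(astar(grid, (0, 0), (4, 8)))  # -> 12, threading around the walls
```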
I'll be very curious to see how this chatbot from a payroll company targeting small business owners pans out. Skip the awful video and scroll down for examples. Compliance questions, a questions-to-queries kind of thing, a chat alternative to... a screen full of buttons you can easily click?
It seems like huge risk for tepid payoffs to me.
posted by McBearclaw at 6:04 AM on September 18 [2 favorites]
It seems like huge risk for tepid payoffs to me.
Hoo boy. Folks shouldn't use probabilistic solutions for deterministic tasks.
posted by ryoshu at 6:55 AM on September 18 [6 favorites]
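The contrast ryoshu is pointing at, in code terms. A toy example with made-up numbers, not any real payroll product's rules:

```python
# A deterministic task has one right answer, so encode it as a plain
# function rather than asking a probabilistic model to guess at it.
def overtime_pay(hours: float, rate: float, threshold: float = 40.0) -> float:
    """US-style time-and-a-half above the threshold (illustrative only)."""
    regular = min(hours, threshold) * rate
    extra = max(hours - threshold, 0.0) * rate * 1.5
    return round(regular + extra, 2)

print(overtime_pay(45, 20.0))  # 950.0 -- same inputs, same output, every time
# An LLM asked the same question returns an answer-shaped string that is
# *usually* 950 -- exactly the property you don't want in payroll.
```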
I haven't seen the substack prognosticator who's done the power analysis, comrades, so let's look at what happens when people doing knowledge work have ML tools to amplify their work and so become managers of knowledge workers...
Say a manager gives in to "I need to use this for my work" and their worker, with core company know-how, is catapulted from worker to manager-of-worker status, and has a concrete sense of the benefit to the bottom line of the company. Stop me if you've read about Management Cybernetics, but that information about input/transformation/output/cost becomes transparent and useful to anyone in the company, not just its reporting and control structure.
This shifts the power to the person who wields the automated workers and puts the ongoing use (efficiency, improvements) of the automated workers locked in to the success of that business unit.
Then this removes the need for MBA-type managers and simplifies organisations to report-wielding C-levels and the manage-the-automation-workers business analysts.
I can see why no MBA-types have products that would undercut their status and earning power. If their excuse for culling workers is right -- they're racing the competition to become more efficient -- then they don't understand the complexities of their own business, and you can fire them for being slow to act on (their words not mine) an existential threat.
posted by k3ninho at 7:03 AM on September 18
Duolingo is an example of a company that's been using generative AI well, to create sentences for use in lessons, among other things. That kind of task makes sense for this technology. A lot of other things don't.
I've been among the cohort of tech colleagues who have been laid off or soft laid off (managed out to save costs, to avoid saying they're having layoffs) in the past year. And I've been among the cohort of workers whom management is demanding find some use, any use, for AI in workflows to "keep up" in some completely undefined way.
It reminds me of when everyone was supposed to learn more about SEO (but then the algorithms changed and now AI skews the results). Or when we were going to make a podcast, because everyone was making podcasts (a wave of hype that actually hasn't entirely gone away, and now so many interesting stories are locked away in inaccessible audio content without transcripts). Or when pivot to video was a thing that siphoned away companies' time and effort on the basis of false metrics (and also locked away a ton of good stories in inaccessible video form). It starts to feel like every initiative or event has to have AI somewhere in the title or description, because AI is one of the things companies are still willing to spend any money to use. I've seen some industry associations get grants from the biggest tech companies and suddenly their newsletters are just full of AI academies and articles. No one's gonna turn down that money.
So naturally, this article resonates, and Zitron's concerns similarly resonate. I see where this kind of AI can be useful to connect and gather info from databases (with a lot of manual work to massage the data formats). I see where it can help jump-start a creative process (though it's no substitute for real creative thought and experience). I also see where effort and money is being thrown at this shiny thing that will likely get more expensive, of course at the point where it's been integrated into all our workflows in some way. The shiny thing is already siphoning off valuable, sensitive data that people aren't worried enough about feeding to the hype machine. And the margins are already thin and people can't afford hype. But they're afraid they can't afford not to buy into the hype.
It's almost like a Roko's basilisk situation, where I'm kind of afraid to talk bad about the AI. Except the real reason I'm afraid to do so isn't that I think the AI is that good or smart. It's the fear of others' judgment—which, really, is how Calvinism, the religious analogue to this, operates too. You gotta be vocal in your zeal, or the hype monster is gonna get ya.
posted by limeonaire at 9:09 AM on September 18 [7 favorites]
"Hoo boy. Folks shouldn't use probabilistic solutions for deterministic tasks."
Well, "shouldn't", but they do all the time. As long as the "errors" are at an acceptable level. And by "acceptable", that depends on the people doing it. Or the laws that govern it or...
posted by aleph at 9:10 AM on September 18 [2 favorites]
Well, "shouldn't", but they do all the time. As long as the "errors" are at an acceptable level. And by "acceptable", that depends on the people doing it. Or the laws that govern it or...
posted by aleph at 9:10 AM on September 18 [2 favorites]
Also, thinking about this put a Dar Williams song in mind, "Play the Greed."
"The market doesn't care but it wants to understand
And you can play the greed right into your hands"
posted by limeonaire at 9:37 AM on September 18 [1 favorite]
"The market doesn't care but it wants to understand
And you can play the greed right into your hands"
posted by limeonaire at 9:37 AM on September 18 [1 favorite]
I'm pretty sure that the reason computers are useful is because they are pedantic and systematic, and the reason that computers are annoying is because they don't speak natural human language.
If you could put a natural language interface in front of a pedantic computer, then you'd have a wonderful, star-trek-like thing. But the current LLM technology isn't that. If I ask a chatbot to multiply 6 x 7 it doesn't open up a calculator, run the computation and tell me the answer -- it just responds with a vague word association with the three words 'six' 'seven' and 'multiply.'
The great swindle happening now is that the money is saying "Finally, you can talk to a computer!" when the reality is "Finally, you can talk to a dumb guy simulated by a computer!"
posted by eraserbones at 10:05 AM on September 18 [5 favorites]
If you could put a natural language interface in front of a pedantic computer, then you'd have a wonderful, star-trek-like thing.
...or you'd find yourself being disassembled to be made into paper clips.
posted by GCU Sweet and Full of Grace at 10:33 AM on September 18 [1 favorite]
If you could put a natural language interface in front of a pedantic computer, then you'd have a wonderful, star-trek-like thing. But the current LLM technology isn't that. If I ask a chatbot to multiply 6 x 7 it doesn't open up a calculator, run the computation and tell me the answer -- it just responds with a vague word association with the three words 'six' 'seven' and 'multiply.'
Even better, someone asked o1 to add two 64-bit integers. It took 29 seconds to give the wrong answer.
posted by ryoshu at 11:02 AM on September 18 [2 favorites]
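For contrast, the boring deterministic route both comments are pointing at: a minimal sketch of a dispatch layer (the regex, wording, and function names are all hypothetical, not any real product's API) that hands anything calculator-shaped to actual arithmetic instead of asking a model to free-associate about numbers.

import re

def answer(question: str) -> str:
    # Route arithmetic-shaped questions to deterministic code; this is
    # roughly the idea behind the "tool use" / function-calling features
    # being bolted onto chatbots.
    m = re.fullmatch(r"\s*what is (-?\d+)\s*([+*])\s*(-?\d+)\??\s*", question, re.I)
    if m:
        a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
        return str(a + b if op == "+" else a * b)
    return "no deterministic tool matched; falling back to the model"

# Adding two 64-bit integers is instant and exact this way:
print(answer("What is 9007199254740993 + 9007199254740993?"))
print(answer("What is 6 * 7?"))  # 42, no GPU required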
limeonaire: The "locked up data" in audio and video seems like a good business for AI => transcripts-as-a-service
posted by aleph at 11:40 AM on September 18 [1 favorite]
It'd be kind of hilarious if the ultimate outcome of the 'subprime AI crisis' was the same as the 'subprime housing crisis' so called by pundits.
1) pundits blame the crisis on the few AI companies that mostly work to improve the lives of poor people.
2) the actual crisis is a lack of broad investment, not too much investment.
3) the outflows of AI investment go to some other inferior tech product, like Cybertrucks with machine guns, sure why not. (the real inferior product was Florida, TX, Arizona, and Nevada housing).
4) Congress passes laws to prevent the AI crisis, which somehow mostly impact AI programmers in middle America. (Dodd-Frank).
5) those laws prevent poor people from working with or using AI for their own good. Manual labor only for them. (Dodd-Frank diminishes potential of ownership due to low credit scores - so renting has a captive market. Yay landlords!).
6) that pushes the captive value of AI up such that it becomes the highest paying job in the US (SF, Boston)
7) and pushes Cybertrucks into the top 10% of cars sold in the US (AZ, NV, TX, FL)
posted by The_Vegetables at 11:43 AM on September 18
As an aside, there is a similar rug pull being foisted upon all the fraudsters in the Ethereum cryptocurrency ecosystem. Ethereum cannot scale, of course, so they've encouraged a lot of "zk rollups", which run a little version of Ethereum and produce a non-zk SNARK of each block, but SNARKs require lots of CPU time. How much? A recent one made tiny blocks of 186 transactions for $26 on Amazon EC2, so that's $136 million per year if you want 6-second blocktimes. It's still cheaper than Ethereum of course. lol
Anyways, you should not count upon these AI dipshits running out of money too quickly, because somehow they'll weasel it out of government energy subsidies or whatever.
posted by jeffburdges at 11:45 AM on September 18
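The $136 million figure above checks out as back-of-envelope arithmetic, assuming the stated $26 per proven block and one block every 6 seconds, year-round:

# Sanity check of the numbers above (assumptions: $26 per proven block,
# 6-second blocktimes, running all year).
seconds_per_year = 365 * 24 * 3600        # 31,536,000
blocks_per_year = seconds_per_year / 6    # 5,256,000 blocks
cost_per_year = blocks_per_year * 26      # $136,656,000
print(f"${cost_per_year:,.0f} per year")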
"The market doesn't care but it wants to understand
And you can play the greed right into your hands"
Or, as they used to say, "the Capitalists will sell you the rope to hang them with".
I don't go that far.
posted by aleph at 11:51 AM on September 18 [1 favorite]
Duolingo is an example of a company that's been using generative AI well, to create sentences for use in lessons, among other things. That kind of task makes sense for this technology.
It makes sense for the technology, as in, you start with your tech and go ‘what could this do?’. It doesn’t remotely make sense for the actual human needs of language learner and worker. A worker at Duolingo (well, let’s be honest, a subcontractor) will produce grammatically perfect example sentences in the language in question, complete with contextually correct intonation if it’s audio-based. That gives better language input to the learner. Duolingo’s AI offerings are dire, and I’m saying that as a once-fluent French speaker doing introductory Italian on the app (having visited Italy, so having spent time in an immersive language environment) and watching the changes over the last several months as they’ve woven in more AI into the course. The AI content is bad. And it’s only there because they want to pioneer shit and (nominally) maximise profit.
Again: you could start with your toy and see what it can do, or you could start with what actually matters.
posted by lokta at 12:43 PM on September 18 [7 favorites]
That's interesting, yeah. I think I can kinda tell which sentences come from AI sourcing—it's a little annoying, because they're not the best sentences, but they're not that bad, either. That's how it is in the remedial Spanish I've been doing, anyway. (Spanish, which I already know really well, is my fallback when I'm too busy or tired to focus on Polish or Turkish, heh.) I wonder if the quality and relevance of sentences generated by AI differs per language, depending how many learners and thus inputs there are.
posted by limeonaire at 1:02 PM on September 18 [1 favorite]
Nthing this Duolingo comment. I know enough French to know when something sounds wonky, and I don't want to brush up on my language skills using something that will make me sound like a fool when I'm working with native speakers in the middle of Canada. And if they've junked up the French, they've almost certainly junked up the Japanese, so there goes that as a next project.
I've stopped using Duolingo altogether because I can't trust it. Even if only 5% of the content is now AI-generated, it's a very unappealing prospect to know that whatever I get from it as a tool is going to be imperfect and it's on me, a non-expert, to sort that out. Nope! I'll find another service.
posted by knotty knots at 1:04 PM on September 18 [7 favorites]
A lot of the hype from Big Tech has been around the idea that they can significantly increase the power of the underlying models by scaling the training. The CTO of Microsoft gave a presentation comparing the compute being used for GPT-5 to a blue whale, with GPT-3 being a shark and GPT-4 a killer whale. NVIDIA has presented GPUs as the next Moore's Law. These huge investments have been premised on exponential improvements.
One reason their latest release (o1) feels so uninspiring is that, while it is able to "reason" more effectively, it is an example of "unhobbling" (which refers to the use of prompt engineering and fine tuning to get better performance), not a sign that the base models have actually improved significantly. To me it suggests that, behind the scenes, GPT-5 is not yielding impressive results from scaled training, and what we're going to see from generative AI is more measured progress in using these tools for natural language processing tasks, which to Zitron's point is not transformative enough to merit the investments being made.
posted by fishhouses at 1:10 PM on September 18 [2 favorites]
The "locked up data" in audio and video seems like a good business for AI => transcripts-as-a-service
Yeah, a number of AI-based services work with audio transcription. I used to use Andy Baio's early method of outsourcing audio transcription, and now there are easier, cheaper ways (about 10 years ago, he detailed a method for using YouTube to do this, which is AI-based—I've found YouTube captions to be pretty hit or miss, though). I wonder what he's using now.
Maybe someday, someone will use AI to add transcripts (not just ephemeral captions) to podcasts and video hosted on the most popular platforms. I appreciate captions when people turn them on, but for me, they're not as good as being able to read a transcript. I actually really appreciate AI-generated Zoom captions/transcripts for accessibility. (Their existence also put me out of business as a captioner for events that a past workplace used to host, which was for the best.) But there are also serious privacy issues with some AI-based transcript-generating services, for instance Otter.
Like these services are better than nothing and can improve accessibility somewhat, but the services and discernment of real captioners are still super important, especially when the nuances of language interpretation are in the mix. (I love international TV and movies, and you can really tell the difference when the best captioners/interpreters are hired.)
posted by limeonaire at 1:23 PM on September 18
(Cf. the captioning of the latest All Creatures Great and Small, which stumbles all over the various British dialects in play)
posted by McBearclaw at 2:03 PM on September 18 [1 favorite]
This was one of the best and most interesting pieces I've been linked to from here. Thanks.
posted by outgrown_hobnail at 2:23 PM on September 18 [1 favorite]
But every empirical indication is that it’s a technology that improves in performance linearly with exponential compute/data investment.
For training, yes. But once you have a trained model, scaling is linear with a slope well below 1.0.
I've heard this, or something like it, said in several different venues recently. But it seems to me that once folks have a trained model, they move on to training the next version of the model, which I haven't heard anyone acknowledge.
posted by nickmark at 2:30 PM on September 18 [2 favorites]
once folks have a trained model, they move on to training the next version of the model
Using a model finds its flaws, so there's got to be higher performance from larger token counts, larger data sets, more pre-training, more compute nodes, right?
Training models hits a hard wall we've not yet understood (slyt).
posted by k3ninho at 2:42 PM on September 18 [3 favorites]
I've heard this, or something like it, said in several different venues recently. But it seems to me that once folks have a trained model, they move on to training the next version of the model, which I haven't heard anyone acknowledge.
I think the “once you have a trained model” bit is meant in the sense of “once you are doing inference (i.e. using the model) instead of training.” “Scaling inference” could mean a few things though. Increasing the throughput of requests/tokens you can handle is one thing. Techniques that improve the quality of responses is another - what I actually had in mind when I was talking about logarithmic returns, besides training, was the plot included with the o1 announcement showing the effect of increasing “thinking time” on benchmarks.
Once folks have a trained model they do a bunch of things with it, including fine tuning or deriving smaller models. We are in the midst of a continuous race to train the “next model,” also, but I think that’s pretty well acknowledged.
posted by atoxyl at 4:28 PM on September 18 [1 favorite]
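To spell out the "linear performance for exponential compute" shape this subthread keeps circling (a stylized sketch; the constants are invented, not from any published scaling law): if score grows with the logarithm of compute, every 10x of compute buys the same fixed bump.

import math

# Stylized scaling curve: score = a * log10(compute) + b.
# The constants a and b are made up purely for illustration.
a, b = 12.0, 20.0
for compute in [1e21, 1e22, 1e23, 1e24]:  # hypothetical training FLOPs
    score = a * math.log10(compute) + b
    print(f"{compute:.0e} FLOPs -> score {score:.0f}")
# Each 10x of compute adds the same 12 points: 272, 284, 296, 308.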
And if they've junked up the French, they've almost certainly junked up the Japanese, so there goes that as a next project.
In fairness to Duolingo, their Japanese course has already been infamously bad for years
posted by DoctorFedora at 8:41 PM on September 18 [4 favorites]
> The shiny thing is already siphoning off valuable, sensitive data that people aren't worried enough about feeding to the hype machine. And the margins are already thin and people can't afford hype. But they're afraid they can't afford not to buy into the hype. It's almost like a Roko's basilisk situation
You're describing a good old Hyman Minsky situation (the part I highlighted), and I think that is also the basic premise of the article, even if Minsky wasn't mentioned by name. Minsky proposed that in financial markets, "stability is destabilizing": good times reward speculative risk-takers, who crowd out their prudent competitors, leading to a self-sustaining mania of pure speculation and outright fraud, followed by massive deflation of imaginary wealth or even a systemic crash. These boom-bust cycles happen with such regularity, everywhere, that it's hard to imagine an alternative endgame for the AI boom. But nobody can predict the timing and manner of how this could happen.
posted by runcifex at 10:46 PM on September 18 [6 favorites]
>alternative endgame of the AI boom.
one possible alternative ending is that it finally delivers the goods, like how Steve Jobs closed out his long career of tech innovations with the iPad . . . I don't recall him mentioning it in the iPad release event, but this was basically what PARC was working towards four decades before 2010. chatgpt session on that
Now looking at the above session, I have no idea if every factual assertion in it is correct but it certainly "looks" correct to my relatively experienced eyes.
I do think this LLM stuff is still in the Hayes modem days vs. where we'll be in ~20 years. I wonder if they're trickling the characters in like that in live chat sessions to make it look that way.
posted by torokunai at 12:06 AM on September 19
> one possible alternative ending is that it finally delivers the goods
To be fair, a speculative bubble can (and often does) co-exist with real lasting value. These things are not mutually exclusive. The legitimate worry is that, one, we do look to be following a course parallel to the subprime mortgage crisis (itself a clear illustration of the Minsky dynamics of instability), and two, we may have passed the point where it would have been possible to change course. The precipitous collapse of "the bezzle" has always been traumatic because, by definition, a collapse is a process of realizing and allotting losses, and we've repeatedly shown that the most vulnerable and the common weal are the sectors that take most of the hit.
posted by runcifex at 12:36 AM on September 19 [6 favorites]
Minsky Moments blow up the wider economy when they reveal debtors who cannot repay their debts. In isolation this is not a problem, but it becomes one when the other side of the ledger holds savings that were given to and spent by the debtor, and are now lost in the bad investment.
Most people can't intelligently talk about the GFC simply because they don't understand its scale. https://fred.stlouisfed.org/graph/?g=1tUQa is YOY consumer debt take-on / total wages, showing debt growth being 20% of wages during the bubble peak.
This debt take-on was powering the entire global economy, so when the music stopped (home appreciation no longer rising, trapping borrowers who needed to sell to pay back the money), so went everything.
1990s Japan, 1930s Wall Street . . . similar dynamic.
We may have a misallocation of capital here, much like all the fibre-to-nowhere built out in the 1990s (and the 1800s railway mania as mentioned in that wikipedia article), but the key here (economically speaking) is I don't see the debt that is the true element of systemic danger.
given our Gini issues, we're awash in capital now.
posted by torokunai at 1:10 AM on September 19 [3 favorites]
It'll be hilarious if bankers begin using AI more seriously, chavenet. Imagine some bank employs an LLM for reading contracts, say by extracting bullet points. An adversary exfiltrates the bank's model, so they know how the bank responds to any given text. In fact, an LLM is basically a bunch of linear functions, so the adversary could take some unrelated contract-reading model and, using linear programming or other techniques, find inputs which return "good" bullet points from the bank's model but harmful things from the unrelated model. At that point the adversary could search these input-output pairs for contracts where courts would accept the adversary's side, so the bank ends up signing contracts designed to cheat them. lol
posted by jeffburdges at 3:25 AM on September 19 [3 favorites]
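A toy rendering of the attack shape described above, with everything invented: two tiny scoring functions stand in for "the bank's exfiltrated model" and "an unrelated contract-reading model", and a naive hill-climb stands in for the linear programming or gradient search a real adversary would use.

import random

random.seed(0)

# Hypothetical clause pool; BAD marks clauses a sane reviewer would flag.
CLAUSES = ["standard fee", "mutual audit", "unlimited liability",
           "waives recourse", "quarterly review", "binding arbitration"]
BAD = {"unlimited liability", "waives recourse"}

def bank_score(contract):   # higher = the "bank's model" calls it benign
    return sum(1 for c in contract if "standard" in c or "review" in c)

def harm_score(contract):   # higher = the "reference model" flags harm
    return sum(1 for c in contract if c in BAD)

def hill_climb(steps=200, size=4):
    contract = random.sample(CLAUSES, size)
    objective = lambda x: bank_score(x) + harm_score(x)  # benign-looking AND harmful
    for _ in range(steps):
        candidate = contract[:]
        candidate[random.randrange(size)] = random.choice(CLAUSES)
        if objective(candidate) >= objective(contract):
            contract = candidate
    return contract

print(hill_climb())  # drifts toward contracts the "bank" likes but shouldn't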
Murati out. Some weight for the Ed Zitrons of the world, though all the AI models remain, and Three Mile Island is coming online.
posted by Going To Maine at 3:09 PM on September 25 [1 favorite]
Appears OpenAI has turned for-profit, given Sam Altman massive amounts of stock, and claimed an enormous number of paying users.
Sam Altman is the Worldcoin guy, so expect roughly an ethereum defi style rug pull, adapted for whatever the stock market permits. lol
Funny: "WorldCoin" sounds a lot like that child molesting "WorldCorp" thing and they have really, earily similar looks with that deep purple and blue neon logo.
posted by jeffburdges at 11:55 AM on September 28 [1 favorite]
New Zitron: “The Other Bubble” (slug line: SaaSpocalypse Now)
More of a philosophical rant than a recitation of numbers, but kind of what I've wanted, since it covers more of what he considers to be the demand for AI investment: SaaS services for companies. It seems somewhat plausible to me, though that world is a bit more alien to me, and I think it helps to explain some of his neglect of things like Hugging Face, or trained models floating around on laptops, as counterexamples in his AI-bust coverage.
posted by Going To Maine at 3:22 PM on September 28 [1 favorite]
Signal’s Meredith Whittaker: ‘I see AI as born out of surveillance’ (recently)
Microsoft deal propels Three Mile Island restart, with key permits still needed
Three Mile Island nuclear plant will reopen to power Microsoft data centers
Amusing trolley problem: Do you sell aging nuclear plants to bitcoin miners or AI companies?
The bitcoin miners are more incompetent, more likely to cause accidents like meltdown, more likely to illegally hide nuclear waste, and more likely to flee the country, given they've all done so before.
The AI companies think hard math problems can be solved by doing even more incorrect math, and their entire goal is surveillance, so they'll stop nuclear plant whistleblowers.
posted by jeffburdges at 5:04 PM on September 28
This thread has been archived and is closed to new comments