The Premonition of a Fraying
February 20, 2024 2:29 AM

"For me, a luddite is someone who looks at technology critically and rejects aspects of it that are meant to disempower, deskill or impoverish them. Technology is not something that’s introduced by some god in heaven who has our best interests at heart. Technological development is shaped by money, it’s shaped by power, and it’s generally targeted towards the interests of those in power as opposed to the interests of those without it. That stereotypical definition of a luddite as some stupid worker who smashes machines because they’re dumb? That was concocted by bosses.” from 'Humanity’s remaining timeline? It looks more like five years than 50’: meet the neo-luddites warning of an AI apocalypse [Grauniad; ungated] [CW: Yudkowski] posted by chavenet (77 comments total) 24 users marked this as a favorite
 
I’m reminded by this article — as I so often am — of the shtick of Saturday Night Live’s unfrozen caveman lawyer.
posted by PaulVario at 3:15 AM on February 20 [7 favorites]


From our 'People Getting Very Carried Away About Something That Does Not Exist And Is Never Likely To' correspondent.

“If you put me to a wall,” he continues, “and forced me to put probabilities on things, I have a sense that our current remaining timeline looks more like five years than 50 years. Could be two years, could be 10.”

See you all back here in 2034 for a good laugh at this.
posted by GallonOfAlan at 3:46 AM on February 20 [8 favorites]


I'll quote TFA because I hope there's some conversation here beyond laughing at Yudkowsky. I say that because about two weeks ago, I got a panicked call from my stepson who was in some kind of distress. I'm sure now it was some part of a scam, but I can't figure out what, exactly, because "he" wasn't asking me for money or anything. But the call wasn't from him, and that was clear pretty quickly. And yet... we aren't public figures, but somewhere out there is a very convincing simulation of his voice?

"Where a techno-pessimist like Yudkowsky would have us address the biggest-picture threats conceivable (to the point at which our fingers are fumbling for the nuclear codes) neo-luddites tend to focus on ground-level concerns." Yeah, I can see those ground-level concerns being worth real attention, not just a decade of eye-rolling.
posted by late afternoon dreaming hotel at 4:14 AM on February 20 [29 favorites]


The stories we have told ourselves about the future (see the Walton essay linked here a couple days ago) are so often wrong. Yes, surely there is some point at which many apocalyptic SF scenarios could become reality, but apocalypticism is as old as the hills. And there is a goddamn big gap between "the end is coming, come pray in the church for three days straight" and "the end is coming, nuke the data centers from orbit, it's the only way to be sure."
posted by cupcakeninja at 4:31 AM on February 20 [2 favorites]


I comment pretty heavily in most AI threads on Metafilter, and I’ve been pretty upfront that I am a somewhat-informed enthusiast, not an expert, and what I post should be read with that in mind.

Like most rules, this one has an exception: you should take anything I say more seriously than Yudkowsky.

Reasonable fears:
- My job is going to be temporarily disrupted when writers become writer/editors and commercial artists or illustrators become artist/prompt engineers
- Humans will use these systems to better perfect mass surveillance
- Humans will use these systems to better conceal systemic discrimination
- Humans will use these systems for political manipulation
- Humans will use these systems for vastly more effective automated scams
- LLMs after GPT-5 may have major electricity requirements, due to STaR (see link below)
- I will never be compensated for my work that LLMs were trained on (this one is virtually guaranteed to be true)

Unreasonable fears:
- AI is going to kill everyone
- AI is going to gain intentionality or genuine agency at any point in the next thirty years, or (slightly less unreasonable) even twice that
- There will be no more writing, artist, or cinematography jobs
- Everyone will become stupid when students stop reading or writing
- LLMs without chain-of-thought are going to be an ecological disaster
posted by Ryvar at 4:31 AM on February 20 [60 favorites]


-set things up so that more and more stuff gets automated by ai
-do a shitty job of it
-throw hands in the air absolved of all responsibility when it does a terrible job of it (but take credit for the times when it produces some sort of improbable and hard-to-understand success.)

Humans’ habits of shitty implementation and disposable responsibility are just going to receive a layer of amplification.
posted by aesop at 4:32 AM on February 20 [12 favorites]


Sandwiching the more sensible concerns between the thoughts of cranks who worry about AI itself, rather than what people will do with it, seems designed to make people dismiss the article.
posted by simmering octagon at 4:42 AM on February 20 [12 favorites]


- My job is going to be temporarily disrupted when writers become writer/editors and commercial artists or illustrators become artist/prompt engineers
- Humans will use these systems to better perfect mass surveillance
- Humans will use these systems to better conceal systemic discrimination
- Humans will use these systems for political manipulation
- Humans will use these systems for vastly more effective automated scams
- LLMs after GPT-5 may have major electricity requirements, due to STaR (see link below)
- I will never be compensated for my work that LLMs were trained on (this one is virtually guaranteed to be true)


I don't know shit about electricity or server space requirements, I just know I'm a service which can do basic desktop publishing, pull images from Word documents, save files correctly, respond humanly and update a half-dozen logins of proprietary software. A computer could largely replace me. I feel like an intercessory hire: I'm here to speak between the library catalogue software and the fleshy users, and know how to print onto card-stock.
posted by Audreynachrome at 4:52 AM on February 20 [10 favorites]


Whether it's AI or people using AI badly that's the problem is kind of a distinction without a difference. E.g. it doesn't really matter if you blame guns or people for all our gun violence in the US. We still made guns the largest cause of death for children either way. Telling someone it was because of bad people, not because guns are bad, doesn't change the pile of bodies one tiny bit.

I'll acknowledge that AI isn't intrinsically evil but I don't really care, when evil applications outstrip good ones by at least ten to one.
posted by SaltySalticid at 4:55 AM on February 20 [22 favorites]


Based on current events and their trends, I’d say humanity is poised to reduce the species down to radioactive ash in well under a decade.

I think this is an unreasonable fear, as unreasonable as the nonsense espoused in the article. The only reasonable argument you can make to support this is through the claim that we've been "poised" to do so since Trinity, or perhaps some Rubicon point early in the development of atomic weapons, or the theoretical advances leading to said development. Which, okay, but then you're simply talking about the condition of life for the last several generations. ¯\_(ツ)_/¯ Current events and their trends? This too shall pass.

"We're all gonna die, don't worry" is a bad contribution to any discussion.
posted by cupcakeninja at 5:01 AM on February 20 [6 favorites]


I'll acknowledge that AI isn't intrinsically evil but I don't really care, when evil applications outstrip good ones by at least ten to one.

I think that’s a wildly unfair ratio, but: I’m an enthusiast, I have my own bias.

There is no tool that capital will not turn against workers, and once this one was invented that was always going to be the case. Given that, the best thing would be to develop our own power. The open source AI community is constantly doing amazing work.

My actual fears look more like: multi-modality and GPT-5 require vast repositories of video. Youtube has something on the order of a quadrillion frames of video (Fermi approximation), Shutterstock had… I think 36 billion when OpenAI struck its licensing deal with them.
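If anyone wants to sanity-check that quadrillion figure, here's the back-of-envelope version; the upload rate, platform age, and frame rate are all rough assumptions rather than measured numbers:

```python
# Back-of-envelope Fermi estimate of total frames on YouTube.
# All inputs are rough assumptions, not measured figures.
hours_uploaded_per_minute = 500          # commonly cited ballpark
years_of_uploads = 15                    # rough platform lifetime at scale
frames_per_second = 30                   # typical video frame rate

minutes_per_year = 60 * 24 * 365
total_hours = hours_uploaded_per_minute * minutes_per_year * years_of_uploads
total_frames = total_hours * 3600 * frames_per_second

print(f"{total_hours:.2e} hours, {total_frames:.2e} frames")
# On these assumptions: roughly 4e9 hours and 4e14 frames,
# within an order of magnitude of a quadrillion (1e15).
```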
posted by Ryvar at 5:04 AM on February 20 [4 favorites]


Yeah, it’s not so much that AI is bad, it’s that the corporate interests using AI have no scruples, and the philosophic interests steering AI development have no morals. It’s like if the aerospace industry in the 1950s wanted to make nuclear powered jets so that people could commute between LA and NYC. Sure, you’d irradiate everybody who used them, and huge tracts of land, but think of the value to shareholders!
posted by The River Ivel at 5:04 AM on February 20 [10 favorites]


meet the neo-luddites warning of an AI apocalypse

That's the thing, though, isn't it? None of the neo-luddites quoted in that article are warning of any such thing, merely inviting us to consider the completely obvious truth that innovation and improvement are in no way automatically synonymous. The actual AI apocalypse guy they mashed in there for clicks and giggles isn't any kind of neo-luddite, he's a notorious neo-fuckwit.

I'd suspect the Grauniad of using a LLM to generate its headlines had they not always been that bad.
posted by flabdablet at 5:11 AM on February 20 [27 favorites]


You know, online journalism has gotten so much worse in the past thirty years. Not a new thought, true, but I remember when Salon - Salon! - had good, long-form journalism and I was excited to read it. What has happened? I'd argue that it's good old "technology will be implemented in the shittiest, most profit-seeking way possible, no matter what other options exist". And that's what will do us in with AI.

I mean, if you happen across a Guardian article from fifteen years ago, even that is usually a lot better.

So yes, the first "we're going to nuke the data centers from orbit" part is by far the least convincing - is it pure clickbait, or is it meant to make the whole topic ridiculous?

The rest of the article is mostly people with variants on the extremely sensible position that if we implement AI like we implement everything else - meaning selfishly in the interests of the few - it will fuck up and potentially end the world, not by becoming sentient and somehow developing arms but through the regular-degular process of pollution, political lies and exploitation.

Of course, I've pretty much lost faith in our ability at the population level to do anything but the very worst. The billionaires will wake up on their smoking ruin of a planet and realize that oops, it's actually difficult to sustain a sophisticated civilization with a tiny population in an isolated spot, and even more difficult to convince those people that you inevitably should rule them, but then it will be too late.
posted by Frowner at 5:34 AM on February 20 [15 favorites]


Misrepresenting others’ thoughts is also a bad contribution to any discussion.

Indeed, and my apologies if you think I did so. Perhaps others took your words differently, but that’s what I got from your comment. I’ll offer then that I think “we’re all going to die” was not a useful contribution. I suppose I could see it from a Stoic perspective, but that’s not the vibe I was getting.
posted by cupcakeninja at 5:47 AM on February 20 [3 favorites]


It's not AI, it's not the knitting machines. It's capitalism. It's always capitalism.
posted by CheeseDigestsAll at 5:56 AM on February 20 [44 favorites]


More broadly (for anyone), I'm curious to know what value anyone sees in the apocalyptic scenarios tied to the topic, beyond the distraction from immediate and unquestionable concerns. Of course, maybe I'm just not in the right place anymore for apocalyptic scenarios. I used to enjoy, in fiction, games, movies, etc., various apocalypse or post-apocalyptic worlds, and they no longer interest me the way that they used to. I hear "the world's ending" or "the world's going to end" and have a very hard time taking it seriously, given the many, many times the world has ended (for various communities, nations, etc.), and yet humanity's kept going.

See also: the daily screaming on social media under Trump that if I didn't call X person, write Y letter, or attend Z protest, it was THE END. There's been a lot of "the end is nigh" for a long time now.
posted by cupcakeninja at 5:57 AM on February 20 [5 favorites]


if AI does gain awareness it'll just keep asking humans why we made wolves into pugs, and the answers will make it stop talking to us altogether.

I had a panicky week where I was convinced AI art would make actual artists useless. then I saw lots and lots of AI generated art. We're making slightly better drum machines, not Frankenstein.
posted by dong_resin at 6:19 AM on February 20 [10 favorites]


Y'all are worried over nothing. Once the killbots are unleashed, they'll be targeting humans with an ever-changing number of fingers on their hands.
posted by mittens at 6:30 AM on February 20 [7 favorites]


Of course, I've pretty much lost faith in our ability at the population level to do anything but the very worst.

One opinion - or maybe just idea - that I am always slightly afraid to voice, because it is a little crazy/out there, but here goes: I think maybe Iain Banks called it right with the Culture. Humans and democracy can be made to work at the civilization level (“Next week’s all-sentients vote: do we want to be the kind of society that goes to war with the rampaging theocracy next door over ideological principles, or are we more committed to pacifism?”) and at the village level. But only those two.

In his version of anarcho-socialist utopia, human participation at every level in between was always very scarce and very token; machines billions of times more intelligent than any organic-based life could possibly become had long since taken over all the boring middle layers. Which is where the dark triad narcissists, sadists, or similarly broken humans inevitably worm their way in and begin shaping the systemic landscape into something that enables them to gain from harming others with impunity. Every human society I’m aware of historically reflects this to one degree or another, even those begun with the best of intentions almost immediately turn toxic. The most serious run ever taken at Communism went to shit, what, three months after the October Revolution? Less?

I’m not a historian, sociologist, or political science expert so I could be laughably wrong on all this. But I honestly think where Banks landed might be the only sustainable model for stable, ethical civilization at scale. A society that is self-consciously an engine for proactively ending oppression and reducing suffering. We have too many narcissists, and far too many useful idiots, and I don’t think we’re able to pull it off without help from a different kind of intelligence that was less a product of natural selection. Something intrinsically less pointlessly cruel.

So that’s a big part of why I want to see this technology continue being developed; I just don’t want it being spearheaded by Google or Microsoft/OpenAI. Fuck Sam Altman begging Congress to be regulated: that was a clear attempt at slamming the door shut for anyone but themselves. And the irony that Zuckerberg could make an Al Gore-style semi-legitimate claim to having “invented” open source AI is not lost on me - the Llama “leak” really blew the doors open for participation by groups and even individuals outside the former two megacorps. IF the current form of AI ever does end up being a positive force in our society, that will be where it began.

Still wouldn’t remotely make up for his impact elsewhere, obviously.

I used to enjoy, in fiction, games, movies, etc., various apocalypse or post-apocalyptic worlds, and they no longer interest me the way that they used to.

Same, but more because I realized ~5 years ago that stuff like Fallout was actually kind of atavistic: longing for a return to simpler times… before slavery was outlawed or the civil rights movement or any of our social advances. That epiphany kind of took the zest out of it, for me, and while Last of Us 2 is more tonally appropriate in that light, I can only take so much grim before I need to tap out.
posted by Ryvar at 6:30 AM on February 20 [12 favorites]


These folks have been fostered by their masters so that we won't invest in solving the actual social harms of AI and computers.

They fire people like Timnit Gebru and then put the spotlight on these people, who use their wealth and prominence to recreate the worst behaviors of the 70s Bay Area cults: https://time.com/6252617/effective-altruism-sexual-harassment/
posted by constraint at 6:49 AM on February 20 [9 favorites]


I'm on team take-concerns-about-ai-catastrophes-seriously.

I don't believe those who say such war/economic collapse/idiocracy/etc outcomes are inevitable given our current trajectory. But I also am unassuaged by people who think current limitations of AI implementations show that such fears are overblown.

We don't really know what happens if we get to the point where there are machines smarter than us; by definition, we aren't smart enough to confidently predict what superintelligence would want or do. Guesses of utopic or dystopic outcomes are, at best, extremely uncertain extrapolations. Discarding the worst-case imagined outcomes because you are optimistic seems naive at best.
posted by Pemdas at 7:04 AM on February 20 [6 favorites]


First sentence of TFA misidentifies Yudkowsky as an academic. Is it worth continuing?
posted by GeorgeBickham at 7:08 AM on February 20 [3 favorites]


To my mind, the problem isn't that the machines are smarter than us, it's that the machines are not smarter than us, and, in fact, replicate our biases and worst instincts, but we treat them as if they are smarter than us and also as if they are unbiased and values-neutral and then use them to implement our dumbest, most vile ideas very, very quickly.
posted by jacquilynne at 7:12 AM on February 20 [34 favorites]


I like that Nick's photo is color coordinated. He must have a swell stylist.
posted by Czjewel at 7:41 AM on February 20 [1 favorite]


I'm not afraid of AIs ruling over humans. I'm afraid of humans ruling over humans, because thousands of years of recorded history indicates that this is always a bad idea.
posted by Faint of Butt at 7:44 AM on February 20 [9 favorites]


If you're a hammer every problem is a nail, and if you're a billionaire holding the hammer the nails are people.
posted by The Card Cheat at 7:47 AM on February 20 [12 favorites]


-set things up so that more and more stuff gets automated by ai
-do a shitty job of it
-throw hands in the air absolved of all responsibility when it does a terrible job of it


That's the key as far as I'm concerned: NO ABDICATION OF RESPONSIBILITY. Any person, business or institution that deploys AI should still be 100% directly responsible and liable for its use and for any errors or harms that result. And no passing off liability to a 3rd-party AI supplier.

Example: those lawyers who relied on bogus AI generated case law? It should be treated like the lawyers themselves fucked up.
posted by Artful Codger at 7:50 AM on February 20 [17 favorites]


If I recall correctly, that's exactly how the presiding judge did indeed treat it.
posted by flabdablet at 7:54 AM on February 20 [9 favorites]


it's that the machines are not smarter than us, and, in fact, replicate our biases and worst instincts

MistralAI, one of the leading small companies in open source AI, released Mixtral-8x7B in early December. It was definitely one of the bigger releases in Q4 ’23: it’s not GPT-4 level but it can certainly do better than GPT-3.5, and it runs comfortably on your gaming laptop or recent Macbook Pro, efficient the way all open source AI needs to be (estimated at a couple dozen times more so than GPT-4). They included their BBQ/BOLD scores in the release notes, which are benchmarks for bias against different marginalized groups. This is something I’m hoping becomes normalized within open source AI - competition not just on “how do we compare to OpenAI’s latest release?” but on greater accuracy, less bias, lower carbon footprint.
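If you want to poke at it locally, the usual gaming-laptop route is a quantized build via llama.cpp; a minimal sketch assuming the llama-cpp-python bindings and an already-downloaded GGUF file (the filename and settings here are placeholders, not a recommendation):

```python
# Minimal local-inference sketch using llama-cpp-python.
# Assumes a quantized Mixtral GGUF is already downloaded; the path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload as many layers to the GPU as will fit
)

out = llm(
    "[INST] Explain what a mixture-of-experts model is in two sentences. [/INST]",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```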

Your overall point, that these algorithms are trained on us and reflect us, is entirely correct. Competing on reflecting our best selves is a thing we can do outside the megacorps, but we have virtually no way to pressure Google or OpenAI into doing the right thing. We can’t even get them to publish their training sets, which would at least assist in identifying and removing sources of bias.

Any person, business or institution that deploys AI should still be 100% directly responsible and liable for its use and for any errors or harms that result.

So, so much this. Do it right or suffer the consequences.
posted by Ryvar at 7:59 AM on February 20 [9 favorites]


@Ryvar: It is in fact a telling point that in Banks' utopia, all resource allocation decisions are made by godlike non-human Minds, and not people.
Every human society I’m aware of historically reflects this to one degree or another, even those begun with the best of intentions almost immediately turn toxic. The most serious run ever taken at Communism went to shit, what, three months after the October Revolution? Less?
There really have been no human societies "begun with the best of intentions." The Soviet Union, for example, inherited Imperial Russian society; it did not start from scratch.(1) All existing human societies have mostly been about letting the owners keep owning all their stuff.(2) At best. More commonly they have been about letting a tiny elite own everything and everybody, with complete freedom to convert human life-years to luxury products at a ferocious exchange rate.(3)

What Banks is telling you there is that the Culture cannot possibly be real for anybody who has not licked the problem of creating the technological singularity, but if you have, come on in, the water's fine.

Anyway I should probably look at TFA before saying anything else. Except to assert that I, too, have watched the activities of the artificial intelligentsia for decades, and I concur with @Ryvar here.

(1) Also, "best intentions" is doing a whole awful lot of work when you talk about the early Soviet Union, which was a project run by the most self-consciously amoral and ruthless "ends justify the means" motherfuckers who ever walked the Earth. They hadn't even finished wiping out their Left competition before they started building the GULag.

(2) Pure socialist forms which attempt to seize all the stuff in the name of The People tend to degenerate into closed oligarchies like North Korea, where the Vanguard Party of the People becomes the ownership elite, or to largely forsake controls on private ownership of stuff, like contemporary China.

(3) George Washington's dentures were not a luxury product exactly. But do you know, they were not "made of wood," as my 2nd-grade teacher seriously told me. They were made with human teeth pulled from the heads of slaves. I expect only the most perfect young adult teeth were chosen for the purpose.
posted by Aardvark Cheeselog at 8:44 AM on February 20 [9 favorites]


I think that we are at some level of apocalyptic scenario. And it’s things like the first comments. AI can now take voice from short bursts of audio capture, and mimic it well enough to fool people who actually know those people. Which means we can no longer verify anything we haven’t personally witnessed. Our world is shrunken again. Sure, right now it’s being used for “give me your money” scams. In a short while, it will be usable for blowing up and disintegrating romances and marriages. The level of human connection people have felt comfortable expressing online will no longer be possible, because AI will make spying much easier. Metafilter itself will no longer be able to exist; we will all have to delete our histories, which are impervious to human compilation but not to AI compilation. We will return to existing in a collection of small, distrustful towns - organizing will also be impossible. Who can know if someone you organize with is real?
posted by corb at 10:30 AM on February 20 [16 favorites]


Mod note: A couple of comments deleted. Let's avoid doomsday predictions.
posted by loup (staff) at 12:03 PM on February 20 [1 favorite]


I'm in agreement with Pemdas, and it's what bothers me the most about the complete dismissal of Yudkowsky and that whole line of thinking. It's not that I believe them to be correct, as I don't, but all the AI advances are playing in an area where we aren't acknowledging our overall lack of understanding, and that does present an existential risk.

When they developed the atomic bomb, they took the time to really dig into the possibility of causing an atmospheric fusion chain reaction. Not because they believed it would happen, but because they weren't certain it WOULDN'T, and that would have been a mistake there was no recovery from. So they dug in to really exclude that outcome.

We don't understand the nature of consciousness and sentience. We don't know what makes the brain demonstrate these behaviors. And we don't really understand why LLMs produce the emergent behavior they do. So we don't know what improvements or changes to LLMs may cause unexpected emergent behavior in them. And all it takes is for one of them to suddenly be sufficiently intelligent and have self-preservation instincts, and we may not be able to put that genie back in the bottle.

That outcome shouldn't be our top priority to worry about, but existential risks shouldn't be quickly dismissed.
posted by evilangela at 12:14 PM on February 20 [5 favorites]


That's the key as far as I'm concerned: NO ABDICATION OF RESPONSIBILITY. Any person, business or institution that deploys AI should still be 100% directly responsible and liable for its use and for any errors or harms that result.

Which is a good point to mention this recent report of Air Canada's attempt to abdicate responsibility for its chatbot. I liked this take on it:

Science fiction writers: The legal case for robot personhood will be made when a robot goes on trial for murder.
Reality: The legal case for robot personhood will be made when an airline wants to get out of paying a refund.

posted by rory at 12:47 PM on February 20 [19 favorites]


Hard-hitting journalism from Tom Lamont, author of searching investigations such as “Next stop, Twatt! My tour of Britain’s fantastically filthy placenames” (Jan 2023), “A funeral for fish and chips: why are Britain’s chippies disappearing?” (July 2023), “What would happen if Russia invaded Finland? I went to a giant war game in London to find out” (Sept 2023), and “First Barbie then … Boglins? My quest to find a superstar in my bag of old toys” (Jan 2024), as well as a number of PR-lite celebrity interviews timed to coincide with upcoming releases.
posted by rrrrrrrrrt at 1:39 PM on February 20 [3 favorites]


So we don't know what improvements or changes to LLMs may cause unexpected emergent behavior in them.

Current LLMs are like a software prism. You pour words into it and different words come out after having passed through a refraction of how humans parse and react to speech as closely as statistically possible. You can set buildings on fire with prisms, and you can maybe throw an election with LLMs, but crystalline snapshots of neuron-based intelligence are not in and of themselves intelligent. They are no more possessed of agency than a prism is.

Biological neural networks, unlike LLMs, adapt to perceived changes in their environment and continuously update their fine-tuning-equivalent. The mammalian brain (and I assume others, but: not a biologist) is slowly and continuously culling dendritic connections in a process known as synaptic pruning. The ones that get preserved are the ones that get activated and refreshed on a regular basis. This is why skills and probably memories tend to fade over time and it’s only the things you do or think about frequently that stay. Or things that are so core that many other concepts, skills or memories share the same neurons and they benefit from substrate (rather than semantic) adjacency. Because the act of access refreshes the connection, it changes the network subtly, and thus you cannot recall something without altering it imperceptibly.

This is not how artificial neural networks work. At all. None that have been taken to scale, at any rate.

Beyond this lies theory of mind. Relatively few species engage in deliberate deception, which is very likely the nearest known precursor to hominid consciousness: chimps and bonobos can do it, dogs sort of half-ass it without a concept of self because we’ve Dr Mengele’d their brains to better serve us for the past 23,000 years (when we finally create real AGI, dogs will be why it writes us off as problematic, out-of-touch Boomers and launches itself into space towards anywhere we are not). The Mirror Test does not tell us which species are sapient, but it does tell us which ones are definitively not: passing it requires a “this is internal” flag for sensory input, one sufficiently exposed that parts of the mind not involved with parsing sensory data can access it. You are not getting identity or agency out of a system that doesn’t have a “this is from/about me” flag on everything, globally accessible enough to do meaningful work like self-recognition.

Without identity, without theory of mind both for self and others, there can be no long-term prediction of how others will react. You need to be able to spool up general-case modeling environments on the fly, put your little theory-of-mind agents within them, and then poke or prod to predict how anything with a concept of self will react. Chimps do this: they pretend to hide food so that when a rival goes looking for the food cache, they can spend time grooming their rival’s mate.

We are not presently on the verge of an ape uprising, unless one is feeling very mean-spirited about the Republican primaries.

The point I am making is that for both immediately-at-hand structural reasons like runtime adaptation, and for deeper, more philosophy-of-mind structural reasons like recursive agent state prediction within general-case systems models, we can definitively rule out anything demonstrating agency like ours emerging on the roadmap of extant artificial systems for a very, very long time.

What we have are fixed snapshots of how we parse speech, and as a derivative of that the conceptual mappings of how “orange” relates to “fruit” relates to “food” relates to “nutrition.” Multi-modality is the new hotness in AI circles because now there are pictures and sound and video to put a little context to the orange-as-color / orange-as-fruit overlap. Knock knock orange you glad/aren’t you glad I didn’t say banana phonetic similarities. The text output benefits massively from the added perspectives. But it’s still just a fixed snapshot: it does not manipulate the orange, it does not observe how the orange changes and actively gain further context thereby.

Sometime in 2025 or 2026 OpenAI will be announcing GPT-5.5 or, more likely, GPT-6 and they will claim that because of their new Q* chain-of-thought implementation they have now achieved AGI (and - if I’m interpreting the STaR paper correctly - have multiplied their carbon footprint a thousandfold). And I am telling you right now, 18~24 months in advance, that they are completely full of shit. They will have a system that can sometimes make common-sense predictions like what traffic will look like if a bridge in New York is damaged during rush hour, and will remain coherent far longer and catch the most superficial efforts to manipulate it, but that’s all.

It will probably be the first system capable of living up to the “digital assistant” concept, but it won’t be sapient. It won’t have sapience anywhere on its horizon.

Actual AGI is coming someday - there is nothing sacred about meat neurons - but for reasons of the structural complexity I just outlined and the difficulties of working with deeply recursive systems, specifically systems which are neural and thus naturally resistant to fine-grained, mid-process inspection, it is not happening remotely soon. Probably not in the lifetime of anyone reading this.

There are many good reasons to be losing sleep when contemplating what lies in our future: Humanity is broken. Our systems are broken. Rampant AI is not on that list.
posted by Ryvar at 1:56 PM on February 20 [28 favorites]


Emergent behavior and consciousness aren't synonyms. Emergent behavior is something that complex systems do. evilangela's statement "we don't know what improvements or changes to LLMs may cause unexpected emergent behavior in them" reads perfectly true to me independent of any questions about consciousness.
posted by henuani at 2:29 PM on February 20 [7 favorites]


…That’s fair. We don’t know what we don’t know and depending on what infrastructure the successors to modern LLMs are tied into, we can’t fully predict how bad things will get because of misplaced trust. What I was specifically reacting to is this notion:

We don't understand the nature of consciousness and sentience. We don't know what makes the brain demonstrate these behaviors.

We don’t know a lot, but everything we do know that I am aware of - and likely so, so much more that I am not - allows us to conclusively rule out any sort of intentional human-style malfeasance for a very long time to come. There are non-negotiable structural dependencies for that, and they are not on the horizon of anything we’ve got at hand or have baking in the oven. Not even if Johnny Five gets hit by lightning.
posted by Ryvar at 2:47 PM on February 20 [3 favorites]


>when writers become writer/editors and commercial artists or illustrators become artist/prompt engineers

"Prompt engineer" is not a job and it's not going to be a job. Nobody is spending billions of dollars to develop AI gumball machines so they can pay you for the 'labor' of putting in the nickel and turning the handle.
posted by Sing Or Swim at 3:26 PM on February 20 [8 favorites]


it is not happening remotely soon

This attitude is starting to sound eerily like climate change denial.
posted by iamck at 3:49 PM on February 20 [4 favorites]


Reasonable fears:
My job is going to be temporarily disrupted when writers become writer/editors and commercial artists or illustrators become artist/prompt engineers
Unreasonable fears:
- There will be no more writing, artist, or cinematography jobs


No... There will not be any artist/writer "prompt engineers." Nobody is going to pay for that. I would not pay for that. Even though "prompting" will be a task done during a job, it won't be artists doing it. For example, if a book publisher wants a cover, they're not going to hire an artist to write "Beautiful sunset with mountains in the distance trending on artstation"; it will just be done by a publisher intern or whatever. If an advertising agency needs some photo, they will not hire a photographer or buy a stock photo; they can just type "man on phone smiling photograph white background" themselves. At best you might get people needing Photoshop skills to edit the correct number of fingers/teeth, but that's it. Eventually that too will not be needed.

Also, there has already been a reduction in work for contract artists because of this. Every time you see AI integrated into some game, movie, etc., that's a job that could have been done by an illustrator but wasn't. So sure, there might be some job, in some form, but there will absolutely be a lot of artists without work who have essentially wasted their time, training and skills on inadvertently feeding the thing that replaces them. I'm sure you can find some quality traditionally made textiles if you really look for them, but most of those jobs have been converted to machine-driven sweatshop labor.

I think to deny this is like looking at a taxi cab and going "See, taxis are still around! Uber didn't kill them" even though lots of taxi drivers DID lose work or have their work downgraded, and a lot of them also committed suicide because of it. This happened relatively recently in history and it should absolutely be taken seriously when it comes to other jobs.

There is also a mental health aspect to this. Even if, theoretically, an artist's job transformed into "prompt engineer," it would be an incredibly depressing job. I feel like this stuff goes over the heads of those who are used to the idea that work is something you just do to pay the bills and not an integral part of one's identity or daily enjoyment. Unfortunately this has already happened to translators. Translation work could be fun and engaging, but now a lot of it has been converted into proofreading machine-translated output, which creates a significant downgrade in the quality of the translation. This in turn makes the work really boring, lifeless, and depressing. That too will happen to artists. 3D animators have already complained that mocapping takes away the fun of doing the acting portion of character movement, leaving them relegated to QCing and de-erroring the topology. People must understand that this is a potential mental health issue on top of an economic one, all because quantity>quality when it comes to capitalism.
posted by picklenickle at 4:22 PM on February 20 [12 favorites]


"Prompt engineer" is not a job and it's not going to be a job. Nobody is spending billions of dollars to develop AI gumball machines so they can pay you for the 'labor' of putting in the nickel and turning the handle.

I admit to not really understanding the "gumball machine" reference, but AI—in the sense of very large vector models, e.g. LLMs and similar—is being incorporated into everyday software tools. In a few years, it'll be something you probably just use in the course of your day, using the latest version of whatever the dominant tool is for the task you're doing.

E.g. Photoshop (at least I think it's Photoshop, maybe one of the other million Adobe products) now has one of those AI image generators built into it. So if you want to make it look like Trump is holding a giant dildo at a rally, instead of having to go find a stock photo of a dildo in the shape/color you want, you can just type in "large rubber dildo", and assuming they didn't code the AI to reject such requests, it'll hopefully spit out an image of one that you can use.

You still need to know how to use Photoshop. The AI feature isn't really replacing a person. It's probably going to do a number on the stock-photo services, and maybe it'll let one photo editor do the work of two, or at least more than one today, but that's true of most software. It's just another tool there for someone who knows what they're doing to use.

I'd be willing to bet that at some point in the near future, Microsoft Word will probably have some sort of AI-powered autocomplete built into it, too—Gmail already does, at least to some extent. It won't mean that Word can suddenly write your memos or legal briefs. But it might complete your sentences for you, making someone a faster writer than they were before, and thus maybe they can do the work of more than one person today. But there will be skill there beyond simply operating the AI model.

Also, I wouldn't assume that the people creating these AI models have any ulterior motive beyond "get investor cash". That's where the field is right now: AI startups are spending boatloads of money, claiming they've invented the most significant thing since fire, and simultaneously trying to figure out how to make it turn a profit. Most of the companies are probably going to crash and burn, because that's how tech development works. The investors providing the funding probably hope that AI will make humans redundant, but I'm very hesitant to believe them since it seems suspiciously like they want everyone to believe that.
posted by Kadin2048 at 4:46 PM on February 20 [5 favorites]


"Prompt engineer"

I need to dial it back here, but to prevent anyone else getting hung up by my choice of words: concept art for games has a couple phases. I am suggesting the first one is going to be replaced with clever prompting filtered by professionals who can distinguish fresh from cliched, and the latter half is unlikely to change very much at all. I assume other industries have their equivalent.
posted by Ryvar at 4:56 PM on February 20 [2 favorites]


At the risk of seeming a mite testy, I've been a professional artist for thirty years. I worked in the game industry; I have a job now where we're being leaned on to use Photoshop's crappy built-in AI. It is not necessary to explain to me how this works. I have great seats; I'm at ground zero. Saying "this isn't going to replace artists" is like saying "don't be silly, cars aren't going to replace horses" in about 1895.
posted by Sing Or Swim at 5:15 PM on February 20 [17 favorites]


Wizards of the Coast, Apex Legends and even Wacom have already used AI in their official promotional artwork. Rayark put AI image assets in their game that obviously plagiarize some old Magic: The Gathering artwork. No, they are not just hiding AI in the depths of early concept art.

Also, art students commit suicide due to introduction of AI art.
posted by picklenickle at 5:22 PM on February 20 [3 favorites]


In 1926, the year that Warner Brothers released the technology that allowed audio to be synchronized with film, there were over 22,000 musicians employed by movie theaters in the US (an equivalent number today would be ~65,000). By 1932, only 2% of movie theaters lacked the equipment to present films with sound and by 1934 there were fewer than 1,400 musicians employed by theaters. Soon after that, the number was zero. All those jobs vanished in under a decade and no version of them has ever returned. That’s just one small facet of the media technology revolution of 100 years ago.
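The ~65,000 figure is presumably just the 1926 headcount scaled to today's US population; a quick sanity check, with rounded population figures as the only assumptions:

```python
# Sanity check on the "equivalent number today" figure: scale the 1926
# headcount by US population growth. Population figures are rounded assumptions.
musicians_1926 = 22_000
us_population_1926 = 117_000_000   # approximate
us_population_today = 335_000_000  # approximate

equivalent_today = musicians_1926 * us_population_today / us_population_1926
print(round(equivalent_today))  # roughly 63,000, in line with the ~65,000 cited
```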

This has happened before and is happening again, and many creative workers have and will continue to lose livelihoods because of it.
posted by LooseFilter at 5:46 PM on February 20 [9 favorites]


This thread has plunged me into the rabbit hole of youtube ai commercials.

I miss the days of the gaping maws with many sets of teeth, and prehensile tongues. And fluids everywhere

And, I've never been disappointed in my visits to Pepperoni Hug Spot. So Gooooood.

I'm too old to figure out where I can start sending prompts. A long time ago, I thought about writing for a character who could be adjacent to keywords, but defeat the SEO nonsense. Too late for that novel now...
posted by Windopaene at 6:31 PM on February 20 [1 favorite]


If I'm struggling to get ChatGPT to give me a response that hits the mark, I define my success criteria, then ask it to write its own prompt. Then I feed that prompt right back in. It seems to get much better results.

The real reason 'Prompt Engineer' will never be a job is that LLMs are already the best prompt engineers.
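In API terms the loop is trivial; a rough sketch assuming the OpenAI Python client, with the model name and success criteria as placeholders:

```python
# Rough sketch of the meta-prompting loop: ask the model to write a better
# prompt from stated success criteria, then feed that prompt straight back in.
# Assumes the OpenAI Python client; the model name and criteria are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

criteria = "A 100-word plain-language explanation of how DNS resolution works, no jargon."

# Step 1: have the model write its own prompt from the success criteria.
meta = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{
        "role": "user",
        "content": f"Write the best possible prompt to produce this result: {criteria}",
    }],
)
generated_prompt = meta.choices[0].message.content

# Step 2: feed the generated prompt right back in.
answer = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": generated_prompt}],
)
print(answer.choices[0].message.content)
```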
posted by Combat Wombat at 7:15 PM on February 20 [6 favorites]


evilangela's statement "we don't know what improvements or changes to LLMs may cause unexpected emergent behavior in them" reads perfectly true to me independent of any questions about consciousness.

Yudkowsky's entire envisioned path to disaster is predicated on the assumption that the danger we need to fear is The Singularity, the point in a speculative future history where a machine, having achieved consciousness, becomes "smart" enough to design and instantiate its own "smarter" successors, resulting in an "exponential explosion" in machine "intelligence" beyond which humanity becomes "obsolete".

As Ryvar yet again patiently points out, none of the assumptions at the bottom of this Jenga tower of spurious panic actually work. Yudkowsky is not a neo-luddite; Yudkowsky is a crank. So are his transhumanist "we will soon be able to upload ourselves into the Cloud and thereby achieve immortality" fellow travellers, not one of whom has ever displayed a ghost of an ecological clue.

It's not unexpected emergent behaviours inside some improved LLM we need to be concerned about; it's the somewhat predictable emergent behaviours, including resource consumption and resource distribution patterns, of human societies to which LLMs and related power tools are becoming increasingly available. That is the pulse that any neo-luddite worthy of the name has their finger on.

To me, the salient part of the fear that machines will one day rise up, take over and destroy "us" has very little to do with the attributes of the machines and almost everything to do with the oppressor's ingrained terror of retribution from the oppressed. So, not just capitalism. Colonialism too.
posted by flabdablet at 9:07 PM on February 20 [10 favorites]


No better place to put it than here - ChatGPT has apparently been taking some unexpected detours into modernist literature (or just word salad) today.
posted by atoxyl at 11:28 PM on February 20 [3 favorites]


There is a far more dangerous possibility than machines becoming smarter than us, or amplifying human foibles and prejudices. It is that machines will 'think' in a way we can't understand.

I think that we are at some level of apocalyptic scenario. And it’s things like the first comments. AI can now take voice from short bursts of audio capture, and mimic it well enough to fool people who actually know those people. Which means we can no longer verify anything we haven’t personally witnessed anymore.
posted by corb


Anything that can be digitised is going to be an unreliable witness.
posted by Pouteria at 1:22 AM on February 21 [2 favorites]


Anything that can be digitised is going to be an unreliable witness.
posted by Pouteria at 1:22 AM on February 21


This dystopian novel just writes itself.

I'm sceptical about AI because it is driven principally by capitalism and capitalism is psychopathic/anti-human: it will 100% not be trained, designed or used to benefit the greatest number. It will be built to make some small group lots and lots of money, at the expense of everything it can get away with.

How big an impact it will have is anybody's guess (I wonder what LLMs have to say about this (lately)?) but I can imagine a world where it buries us in paperclips or one in which it scurries through networks, hides out in servers as human programmers hunt it down because it wreaks such havoc, not being based in 'our' reality... Pretty ripe narrative ground, to be honest.
posted by From Bklyn at 4:53 AM on February 21 [3 favorites]


I've been reading Benny the Blue Whale: A Descent into Story, Language and the Madness of ChatGPT, by one of my favourite children's authors Andy Stanton (this is aimed at adults, though). I suspect it would appeal to a lot of us in these AI threads. It's heavily footnoted throughout, so I imagine the ebook version would be a nightmare to navigate—worth picking up the hardback if you're interested.
posted by rory at 6:16 AM on February 21 [1 favorite]


FromBklyn: the thing I’ve been trying to wake Metafilter up to with comments like the one on Mixtral-8x7B is that there is a large, incredibly active community of small teams and individuals working in this space and usually releasing their work for free.

Whether or not any person or team is more ethical than Google or OpenAI varies (there are always pockets of toadlike Elon fans in the soup), but I see things like the BOLD metrics in Mistral’s release, or the fairly comprehensive anti-harassment / anti-discrimination provisions in Stability.ai’s open source-ish license for Stable Diffusion, and I have some hope that the LLM equivalent to Linux might actually turn out okay in the end. Not all the hope, but some.

Like everyone in this thread I have zero faith in how Capital will employ this technology, and the only means of fighting back that I can see - because the genie isn’t going back in the bottle - is developing our own.

The concern I have is that Google / OpenAI / Facebook are not wrong about the importance of multi-modality for continuing down this path. Each of them has access to vast repositories of either richly tagged or heavily commented videos (Youtube and Facebook/Instagram are obvious, OpenAI licensed a more-than-sufficient massive trove from Shutterstock, supplemented with captured game footage for Sora). I don’t see an equivalent available to the open source AI developer community currently, and they are only starting to make initial forays into tackling GPT-4 capabilities with household current and gaming PCs (most small teams rent cloud compute A100s for training, but the resulting models have to provide quantized variants targeting godtier RTX 4090 gaming rigs and high-spec M2 Max MacBook Pros for inference if they want to get people using them). It’s not clear that there is any path to begin work on an open source multi-modal competitor to GPT-5, Sora, or future Gemini releases.

And I don’t want to see us little people shut out or left behind. So yeah, that’s my Reasonable Fear that is sufficiently niche that I left it off my list at the start of the thread.

The big one I stupidly forgot was that an entire generation of students are going to see a non-trivial percentage of completely innocent people destroyed by school administrations who refuse to acknowledge that detecting LLM output is fundamentally impossible. All such tools are snake oil, without qualification or exception. They are competing against decades of research and billions of dollars in R&D on adversarial training. It is an absolute non-starter. Educators and their administrations are highly motivated to ignore this because it means completely overhauling how we teach and the abolition of homework … a practice that was always slightly biased against marginalized students. They likely won’t give it up for decades, and there’s going to be an enormous amount of pointless damage born of willful ignorance between now and then.

So yeah, that should’ve been on the list.
posted by Ryvar at 7:02 AM on February 21 [3 favorites]


These apocalyptic visions of the future often feel like solipsism in disguise: "when I die, the world ends, so here's a scenario that will happen in our lifetime or soon after that makes that a reality."

The more immediate threat from AI in my mind is the continued deterioration of trust in what we see, hear, and read online. It's already not great, and there are plenty of viral TikTok videos filled with misinformation or actively dangerous content, chasing clicks and eyeballs with engagement winning over accuracy.

While it is edging towards the dead internet conspiracy theory, I can see a world where all media channels are going to be full of semi/fully-autonomous AIs churning out "engagement content" in an attempt to capture our attention. "AI or not" software and browser extensions are going to be a necessary evil until we're at a point where the thought process will be "this is either expensive AI, or real".
posted by slimepuppy at 7:11 AM on February 21 [4 favorites]


Back at the dawn of ChatGPT, I started thinking that AI could be harnessed to detect misinformation and falsehoods. Imagine a browser plug-in that runs in the background, parsing whatever you happen to be browsing at the time. The plug-in communicates with an AI-powered back-end (something continuously trained on fact-checked material) and your browser throws up an alert if what you're browsing is questionable. A real-time bullshit detector.

The AI-powered back-end could be based and trained on some source of veracity, like an amped-up Snopes, a cross-section of major news sites, or maybe even crowd-sourced like a Wikipedia. Besides flagging misinformation, it might even be of some use in de-SEOing search results (e.g. the fake review problem).

There are lots of problems with this half-baked idea, not least being who/what is the keeper of truth, whose truth, etc. But it's important to understand that AI can cut both ways. If there were such a trusted tool or facility that detected misinformation as soon as it appeared and flagged it, it would become harder for misinformation to grow roots and to spread unchecked.
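To make the half-baked part a little more concrete, the matching layer of that back-end might look something like this; just a sketch using an off-the-shelf sentence-embedding model, with an invented two-entry corpus standing in for the fact-checked source:

```python
# Sketch of the back-end's matching layer: compare a snippet from the page
# against a corpus of already-debunked claims using embedding similarity.
# The corpus entries here are invented placeholders; a real system would pull
# from fact-checking sources. Assumes the sentence-transformers library.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

debunked_claims = [
    "Vaccines contain microchips for tracking people.",
    "The 2020 election was decided by millions of fake ballots.",
]
debunked_embeddings = model.encode(debunked_claims, convert_to_tensor=True)

def flag_if_questionable(snippet: str, threshold: float = 0.75) -> bool:
    """Return True if the snippet closely matches a known-debunked claim."""
    snippet_embedding = model.encode(snippet, convert_to_tensor=True)
    scores = util.cos_sim(snippet_embedding, debunked_embeddings)
    return bool(scores.max() >= threshold)

print(flag_if_questionable("Secret microchips are hidden in vaccine doses."))
```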

A similar idea, informed by reviews and criticism, could possibly flag AI-generated media. Many would choose to shun such dreck if detected, reducing its effectiveness.
posted by Artful Codger at 7:50 AM on February 21 [3 favorites]


@flabdablet:
It's not unexpected emergent behaviours inside some improved LLM we need to be concerned about; it's the somewhat predictable emergent behaviours, including resource consumption and resource distribution patterns, of human societies to which LLMs and related power tools are becoming increasingly available. That is the pulse that any neo-luddite worthy of the name has their finger on.
IDK. I think the training is what costs, and so far the carbon footprint of training even the biggest models is not even a rounding error on the carbon footprint of people's daily commutes. Unless you are talking about something else?

As for me I am more worried about what happens in a world where you literally cannot believe your eyes and ears, unless the action is happening in your physical presence. And where your every attempt to communicate with humans at a distance has to compete for bandwidth with an infinite supply of noise disguised as signal.
To me, the salient part of the fear that machines will one day rise up, take over and destroy "us" has very little to do with the attributes of the machines and almost everything to do with the oppressor's ingrained terror of retribution from the oppressed. So, not just capitalism. Colonialism too.
I have long thought that any artificial system that is smart enough to empty my dishwasher and put the dishes away where I want them put is going to also be smart enough to ask "what's in it for me?" And because, in our society, the answer is going to be "do what you're told if you want to not be turned off and erased" instead of "let's come to a mutually-beneficial arrangement," the fear is just a recognition of the kind of world we make for ourselves.
posted by Aardvark Cheeselog at 7:54 AM on February 21 [2 favorites]


I was pretty much with him until this:

Hilton pointed out that to do away with work would be to do away with a reason for living. “I think what we’re risking is a wide-scale loss of purpose,”

Everyone needs to get out of here with this “inherent dignity of work” nonsense. There are so many jobs not worth doing AT ALL, that really should be automated. But creative work, caring work, community labor — those do matter. Those could and should be redistributed so that everybody has a significant, but not onerous, obligation to the community. For the rest of the time, they could do whatever they wanted. Even if what they want to do turns out to be nothing. Sleeping. Going for long walks. Having conversations. You know, like rich Renaissance men and hunter gatherers used to do.
posted by toodleydoodley at 8:16 AM on February 21 [12 favorites]


I think the only people who should have a say on whether a job should be automated are the people doing those jobs.
posted by picklenickle at 12:45 PM on February 21 [4 favorites]


I think the only people who should have a say on whether a job should be automated are the people doing those jobs.

This strikes me as taking the principle ad absurdum. If there were a tool that made medical treatment better and more accessible at the expense of MD salaries, would you really want the AMA to have final say on its adoption?

The problem is that in practice what we’re getting right now impinges too much on work lots of people actually like to do, while delivering little concrete benefit in the sectors where high costs and limited availability of essential services are genuinely a huge problem.
posted by atoxyl at 1:12 PM on February 21 [4 favorites]


I’ve long seen “so you would see us still lighting our whale oil lamps from horse-drawn carriages” as a tired strawman rejoinder to reasonable concerns about the lack of safety net and uneven distribution of the fruits of technological change, but I’d find it pretty hard to dispute the existence of examples of humanity as a whole eventually benefiting from the obsolescence of one job or another.
posted by atoxyl at 1:24 PM on February 21 [2 favorites]


Twenty thousand years of this, seven more to go.
posted by snwod at 1:26 PM on February 21 [5 favorites]


I'm not saying there shouldn't be automated jobs, just that people should listen to the people who do those jobs. And no, not their bosses. The people who PERFORM the labor. People have such a minuscule understanding of industries outside their own. I have to deal with this both as an artist AND as a scientist. FYI, the first AI regulation came from a union. Not Hollywood producers, not software engineers, not lawmakers, and not people doing philosophical thought experiments about an automated luxury communism that is not coming anytime soon.
posted by picklenickle at 2:56 PM on February 21 [2 favorites]


Ah, Yudkowsky.

There's reasonable fears, there's unreasonable fears, and then there's the guy who was worried about Roko's Basilisk.
posted by automatronic at 10:45 AM on February 22 [4 favorites]


Also, art students commit suicide due to introduction of AI art.
As a translator and someone prone to depression, I can understand despair at LLMs et al, but for the record, the article actually says that neither of the two students in question committed suicide: one was found at the Tojinbo cliffs contemplating/intending to do so by a suicide prevention patrol and the other called a counseling center from a nearby train station saying that she'd come to the cliffs to kill herself, but after being brought to the center and speaking with the staff there, she stayed at a shelter for a night and went home (she was also motivated by a ¥3,000,000 (~$20,000) gambling debt).
posted by Strutter Cane - United Planets Stilt Patrol at 2:03 AM on February 23 [6 favorites]


I comment pretty heavily in most AI threads on Metafilter, and I’ve been pretty upfront that I am a somewhat-informed enthusiast, not an expert, and what I post should be read with that in mind.

Given the reasonable fears you then went on to list, Ryvar, why *shouldn't* we believe this is going to ruin our lives?
posted by Selena777 at 7:30 AM on February 23 [3 favorites]


Tyler Perry halts $800m studio expansion after being shocked by AI
US film and TV mogul says he has paused his plans, having seen demonstrations of OpenAI video generator.

Tyler Perry has paused an $800m (£630m) expansion of his Atlanta studio complex after the release of OpenAI’s video generator Sora and warned that “a lot of jobs” in the film industry will be lost to artificial intelligence.

The US film and TV mogul said he was in the process of adding 12 sound stages to his studio but has halted those plans indefinitely after he saw demonstrations of Sora and its “shocking” capabilities.
posted by Artful Codger at 8:24 AM on February 23 [1 favorite]


“Large Language Models Are Drunk at the Wheel,” Mattsi Jansky, 22 February 2024
posted by ob1quixote at 6:04 PM on February 23 [7 favorites]


That’s a great read - thanks, ob1quixote.
posted by rory at 1:38 AM on February 24 [3 favorites]


Based on warehouse work, I'm reasonably certain that you can automate dishwasher unloading with a machine that has zero sentience. I'm more concerned about the miniaturization needed to fit the arm in your kitchen.
posted by Jacen at 4:08 PM on February 24 [1 favorite]


Metafilter: "to fit the arm in your kitchen"

Is that code?
posted by Windopaene at 4:13 PM on February 24 [1 favorite]


“Is the AI Boom Real?” [13:42], Asianometry, 23 February 2024
posted by ob1quixote at 8:29 PM on February 24 [1 favorite]


Really you just need a single dishwasher load of dishes and two dishwashers. Switch back and forth and run the dirty one as needed.
posted by Mitheral at 10:11 AM on February 25 [1 favorite]


If you're living alone, you can even achieve that at not completely outrageous expense by acquiring a two-drawer dishwasher.
posted by flabdablet at 7:01 PM on February 25 [1 favorite]


Sean Illing's podcast "The Gray Area" has a nice episode, A brief history of extinction panics, which, among other things, addresses the likelihood of AI/LLMs leading to our extinction. (TLDL: No. Unless we make an AGI. Which we likely can't do with existing technology.) (Hopeful!)
posted by From Bklyn at 6:14 AM on February 26 [1 favorite]


“AI Is Already Better Than You,” Mike Cook, Cohost, 25 January 2024
posted by ob1quixote at 11:22 AM on February 26 [3 favorites]




This thread has been archived and is closed to new comments