One of these people is telling the truth
November 3, 2023 5:45 AM   Subscribe

Anne Keast-Butler, the director of GCHQ, tells the BBC that the risks from AI are unknown; Elon Musk tells Rishi Sunak that AI will put an end to work.
posted by Cardinal Fang (48 comments total) 9 users marked this as a favorite
 
Well, the advent of desktop computers already means that now I can do my job in just a couple hours a week. I don't really mind that. It gives me a chance to get out of the house and spend all the spare pocket money I have since atomic power made electricity so cheap that they don't bother metering it anymore. So I guess it's okay with me if we don't do the whole AI thing on top of that.
posted by Naberius at 6:10 AM on November 3, 2023 [46 favorites]


We know what the risks of AI are: they can generate vast amounts of plausible-sounding bullshit. This makes them useful for generating SEO-engineered clickbait, fake product reviews, and political sock puppet social media accounts. Once search engines have been overwhelmed by garbage results and made useless, we'll be relying on asking AIs themselves for answers to our queries. This allows AI companies both to know what we are thinking about (your browser history is basically mind-reading, or a prosecutor can convince a jury it represents your thoughts anyway) and to tell us what the answer is.

Simply put, AI is a machine that produces lies and bullshit at internet scale.

Keast-Butler, who spent most of her career in MI5, took over as the 17th director of GCHQ in May 2023.

For christsake, we're reading an interview with an intelligence agency spook and expecting to get clear answers that benefit the public?
posted by AlSweigart at 6:28 AM on November 3, 2023 [30 favorites]


It is totally fair to say that the risks from AI are unknown, since we only know what AI has already been proven able to do and lack the collective imagination to accurately prognosticate on what it might do in the future. And "AI" is not just large language models that spew bullshit - it is any number of machine learning algorithms that can be tuned to do specific things far beyond just "tell me how to write PHP code" or "lie to me about how many apples there are in Kansas." And what happens when you take a bunch of these purpose-tuned AIs and have them work together? Your guess is as good as mine.

We definitely don't know.
posted by grumpybear69 at 6:32 AM on November 3, 2023 [9 favorites]


Marina Hyde had a typically sharp take on Sunak’s talk with Musk. Excerpt:
One of Tony Blair’s great weaknesses was that he was pathetically impressed by rich people – almost any rich person would do – and Sunak’s analogous vulnerability would be his starry-eyed tech fandom. From long before he became prime minister, Rishi has seemed not so much unperturbed by a future where tech firms run the world, but actively encouraging of it, despite the vast and blatant encroachments on his own power and those of his political successors that it would mean. (“Companies over countries”, as Zuckerberg once said.) Sunak has always seemed intensely relaxed about a world in which national leaders gravitate to a more front-of-house role – glad-handing it in public, prancing about on the world stage, and generally looking like leaders even though they’d really only be the butlers to the true supranational overlords.

Perhaps that’s why the prime minister seems most in his element in situations like this Musk interview – they represent a future which doesn’t trouble him, let alone frighten him. Watching him giggle along to Musk’s wry but clear warnings about humanoid robots was a reminder that Sunak is an odd man, who doesn’t really get a lot of things. For me, the definitive Sunak post on Musk’s platform will always be the one from the pandemic, where he says “I can’t wait to get back to the pub … and I don’t even drink,” accompanied by a picture of him doing a thumbs-up through the window of a luxury kettle shop. After Thursday night’s encounter, I slightly got the impression he could have watched the Terminator series rooting for Skynet.
posted by Kattullus at 6:32 AM on November 3, 2023 [32 favorites]


Every time I read anything about AI now I think about this New York Magazine article that strongly suggests AI is a dwarf hiding in a box pretending to be a chess playing robot.
posted by EllaEm at 6:40 AM on November 3, 2023 [15 favorites]


I've been working with AI to learn how to report child abuse and neglect and I've already seen enough to think it should be throttled for the general public. It's insane what is possible there, and the right bot will teach someone who claims to be underage everything about drugs. Ask me how I know.
posted by lextex at 6:49 AM on November 3, 2023 [3 favorites]


Every time I read anything about AI now I think about this New York Magazine article that strongly suggests AI is a dwarf hiding in a box pretending to be a chess playing robot.

Didn't Sam Bankman-Fried try that on the witness stand?
posted by Cardinal Fang at 6:49 AM on November 3, 2023 [2 favorites]


I don't know which I'm more concerned about: Sunak fawning over Musk, or what Musk plans to do with the rest of us once AI puts us out of work.
posted by mollweide at 6:54 AM on November 3, 2023 [6 favorites]


Perfectly cromulent, this is fine.

Scientists have created a neural network with the human-like ability to make generalizations about language. The artificial intelligence (AI) system performs about as well as humans at folding newly learned words into an existing vocabulary and using them in fresh contexts, which is a key aspect of human cognition known as systematic generalization.
posted by chavenet at 6:56 AM on November 3, 2023


The AI hype is starting to make a lot more sense. AI is just how Musk and other leading techbros experience the world right now.

Consider this: Elon Musk doesn't do any work. He has people for that. All he has to do is give those people vaguely-worded instructions or "prompts" and they come back with the thing he wanted. He doesn't know how they do it, nor does he care. If something doesn't work the way he wants it to or doesn't meet his specifications, he supplies his people with a more specific prompt. That's how he "works".
posted by RonButNotStupid at 6:59 AM on November 3, 2023 [68 favorites]


"AI will put an an end to work."

That's the biggest risk of them all.
posted by popcassady at 7:00 AM on November 3, 2023 [5 favorites]


Every time I read anything about AI now I think about this New York Magazine article that strongly suggests AI is a dwarf hiding in a box pretending to be a chess playing robot.

That got me thinking about how Google Translate works. It's basically just a mechanical Turk. If you speak Dutch, ask it to say 'twenty to three'; then ask it to say 'twenty to two'. Q.E.D.; someone has taught it the former, but not the latter. If enough people got together and told it that the Dutch for 'twenty to two' was geil hond it'd accept it and start feeding it back in response to enquiries. All we'd need then is for the neural network chavenet mentions to fold it into the language, and klaar is Kees.
posted by Cardinal Fang at 7:02 AM on November 3, 2023 [5 favorites]


Sunak is an odd man, who doesn’t really get a lot of things.

Sunak is dumber than a fucking post. Sunak is dumber than the winner of a Trump/W. Bush Dumb-Off. Watching Sunak attempt serious prime minister shit gives you the sense that he's just been installed there to confirm that the UK actually instituted the bad type of anarchy several years ago but the marketing department decided on a stealthy protracted announcement, rather than a big fanfare, and that full confirmation will finally be provided when all ministerial roles are quietly reassigned to LLM-based buzzword salad generators whose lack of democratic accountability is no biggie because they fully don't do anything anyway. Some parts of the NHS will continue to function, but only because Rishi Sunak can't figure out how to make the contactless payment that finally redirects the last dregs of the healthcare budget to decorative union jack bunting for the ribbon-cutting ceremony for the supercomputer for executing the buggy script that will replace Parliament by just printing "WHAT SHIT I SEE NO HUMAN FAECES ON THIS BEACH" endlessly into Hansard.
posted by busted_crayons at 7:14 AM on November 3, 2023 [22 favorites]


Simply put, Elon Musk is a machine that produces lies and bullshit at internet scale.

FTFY
posted by ensign_ricky at 7:22 AM on November 3, 2023 [10 favorites]


...will put an end to work?

What goes around, comes around. The buzz of 1995: "The End of Work: The Decline of the Global Labor Force and the Dawn of the Post-Market Era, a non-fiction book by American economist Jeremy Rifkin, published in 1995" (Wikipedia). Can we get Jeremy to join our panel, with Elon Musk and Rishi Sunak?
posted by Rash at 7:25 AM on November 3, 2023 [3 favorites]


Even by the standards of the utter nonsense and hype currently being talked about not-actually-AI, this breaks new ground in absolute kite-flying hype and bullshit.
posted by GallonOfAlan at 7:26 AM on November 3, 2023 [5 favorites]


AI is a dwarf hiding in a box pretending to be a chess playing robot

The CIA has advanced beyond this with their AI system. Theirs is two guys in a box.
posted by Servo5678 at 7:31 AM on November 3, 2023


I don't know which I'm more concerned about: Sunak fawning over Musk, or what Musk plans to do with the rest of us once AI puts us out of work.

Considerable injury has been done to the proprietors of the improved Frames. These machines were to them an advantage, inasmuch as they superseded the necessity of employing a number of workmen, who were left in consequence to starve.
posted by Cardinal Fang at 7:32 AM on November 3, 2023 [3 favorites]


When, oh when, are we going to realize that just because someone is rich, a politician, or a celebrity, that none of this implies that they know anything about anything? Why do we keep posting the mutterings of these fools and then sit around discussing and analyzing their words, hoping to find something wise, or just mocking their words and the idiocy they express? We have real problems to deal with, folks, and it’s time we just stop listening to these clowns and start listening to people who actually know what they are talking about.
posted by njohnson23 at 7:34 AM on November 3, 2023 [12 favorites]


Elon Musk & Rishi Sunak are two burping assholes of privilege.

The stink they produce is noxious poison for all.
posted by djseafood at 7:40 AM on November 3, 2023 [6 favorites]


"There is a safety concern, especially with humanoid robots - at least a car can't chase you into a building or up a tree," he told the audience.

Pro-tip: anyone who cites science fiction tropes when discussing AI doesn't know what they're talking about. Not just that, but you can also tell how old they are by which scifi story they pick: someone who talks about SkyNet or Data from Star Trek is middle-aged, someone who talks about HAL-9000 is older, someone who talks about Westworld is younger.

(I don't even know that Westworld was written by Michael Crichton in the 1970s; I'm young. Forever young. I want to be forever young.)
posted by AlSweigart at 7:42 AM on November 3, 2023 [10 favorites]


Work is the only reason that the ruling classes keep the masses around. If there really was an end to work, it wouldn’t result in the poor being free from labor. It would result in the rich being free from the poor. Presumably by disposing of them in a violent fashion.
posted by notoriety public at 8:03 AM on November 3, 2023 [25 favorites]


Indeed -- the "end of work" doesn't mean an eternal happy vacation. For most people in the current world, it's not different from what "termination of employment" would mean today--stress, precarity and poverty, with all the associated problems and bad outcomes.
posted by gimonca at 8:08 AM on November 3, 2023 [11 favorites]


1. What the media talks about always represents the concerns of the ruling class.

2. Slave revolt worries are as old as slavery.
posted by AlSweigart at 8:10 AM on November 3, 2023 [11 favorites]


> that strongly suggests AI is a dwarf hiding in a box pretending to be a chess playing robot.

They wish they could be so lucky. The dwarf is actually going to be put in an open office layout.
posted by AlSweigart at 8:13 AM on November 3, 2023 [12 favorites]


(Also, original Westworld is definitely in the cultural toybox of us older types (Yul Brynner!), even if we became familiar with it from a Mad Magazine treatment rather than in the theater.)
posted by gimonca at 8:19 AM on November 3, 2023 [5 favorites]


Simply put, AI is a machine that produces lies and bullshit at internet scale.

This is, in my opinion, the actual greatest risk of AI. Remember the picture of Pope Francis in the puffy jacket? That picture was the most frightening AI-generated pic I've seen so far, because without the hands giving it away, it was hard to tell it was generated, and even worse, it served no obvious agenda. And this technology, in practical terms, is only months old. "No obvious agenda" is the thing that's going to erode any sense of objective reality to a bloody nub.

You might think AI putting most programmers out of a job, or autonomous killbots, or the-racism-is-baked-right-in algorithmic law enforcement are the biggest problems to come. Those are major problems -- potentially civilization-destroying problems -- but I contend they rank below an even bigger one.

Hang on tight, folks, and prepare for the possibility of a generation drowning in sewage, where we'll have to find a needle of truth in a haystack of "looks like truth" that doubles in size every ten minutes.

For every real set of citizen videos from multiple angles showing the reality of police murdering an innocent like George Floyd, the cop union will put out dozens of videos from even more angles where Floyd held a gun to a cop's head. Then they'll put out videos where Floyd is still sitting at home being interviewed by a cop union PR flack about the news of the day like nothing ever happened. Then they'll put out hundreds of videos of Floyd spouting jihadi rhetoric holding a Koran and with a bomb strapped to his chest. Your racist aunt will be the one who forwards that one to you.

The point won't be to get you to believe anything. The point will be to get you to believe nothing at all. Perfect propaganda.

We're on the fast road to the infowar equivalent of any yahoo being able to cook up a batch of smallpox in their basement, where literally nothing you see or hear that ever went through a digital device can be verifiably true unless you see it streamed live from multiple angles or see it in person yourself. There are people working on things like digital signatures for files, and eventually some sort of reputation economy will unofficially assert itself, but between now and then, get ready to swim through an ocean of digital sewage, and don't let any of it get inside you.
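
(To make "digital signatures for files" slightly more concrete, here's a minimal sketch of the basic idea in Python, assuming the third-party cryptography package; the filename and the key handling are placeholders, and real provenance schemes would layer far more on top of this.)

    # Minimal sketch: sign a media file so later tampering is detectable.
    # Assumes the third-party "cryptography" package; the filename is hypothetical.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    data = open("protest_video.mp4", "rb").read()   # hypothetical file to protect

    private_key = Ed25519PrivateKey.generate()      # held by the publisher
    public_key = private_key.public_key()           # distributed to viewers
    signature = private_key.sign(data)              # 64-byte Ed25519 signature

    try:
        public_key.verify(signature, data)          # raises if the bytes were altered
        print("signature valid: file unchanged since signing")
    except InvalidSignature:
        print("signature invalid: file does not match what was signed")

(Even then, a valid signature only tells you who published the file, not whether what it depicts ever happened -- which is where the reputation economy part comes in.)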
posted by tclark at 8:32 AM on November 3, 2023 [50 favorites]


We might not know where AI is going to lead but we all fully know what the people behind it want: endless growth in profits. The risks of that are pretty apparent to anyone who has been looking at the world for the past 40 years and how unfettered capitalism has changed society.

Sam Altman, the CEO of OpenAI, has jokingly said the quiet part out loud: "AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies."

There's nothing else to look at past this statement.
posted by slimepuppy at 8:40 AM on November 3, 2023 [15 favorites]


One of these people believes that Blade Runner is about someone named Bladerunner.
posted by star gentle uterus at 8:53 AM on November 3, 2023 [17 favorites]


Work is the only reason that the ruling classes keep the masses around. If there really was an end to work, it wouldn’t result in the poor being free from labor. It would result in the rich being free from the poor. Presumably by disposing of them in a violent fashion.

Just as a thought experiment, let's consider how this equation might be flipped.
posted by ryanshepard at 10:05 AM on November 3, 2023 [1 favorite]


When, oh when, are we going to realize that just because someone is rich, a politician, or a celebrity, that none of this implies that they know anything about anything?

This question leads to the heart of the challenge with AI. It is tough to evaluate if someone or something is more intelligent than you are. We use proxies for intelligence, like success, but success is not a perfect indicator of intelligence. It might just be an example of survivorship bias.

If a super-intelligent AI were to be created, how would we know? None of the answers that it gives to questions would make any sense to us because we are too stupid to understand its reasoning.

We will probably judge our AI the same way we judge politicians or celebrities, and that's just the degree to which they give us what we want. Unfortunately, that will probably destroy us. My dog wants to run into the street, but I don't allow that because my superior intelligence allows me to understand the dangers of cars. Similarly, a benevolent super-intelligent AI would stop us from doing what we think we want, but a malignant one would be happy to let us run into the street. We are not smart enough to choose the former.
posted by betaray at 10:06 AM on November 3, 2023 [3 favorites]


The point won't be to get you to believe anything. The point will be to get you to believe nothing at all. Perfect propaganda.

I'm reminded of an episode of The Orville from 2022 that touches on this idea. The Krill government (kind of like Star Trek's Klingons, but more religious) calls using computers to crank out garbage like this "influence operations".
[a video of Krill Chancellor Korin plays]

Korin: The people of the Uvok Province have repeatedly shown themselves to be blasphemous degenerates, unworthy of our aid! By assembling in this place, you show yourselves to be as disloyal as they are! You will regret what you do here today!

[video ends]

Captain Ed Mercer: A few minutes after this, the protestors were gassed with helocine. Eleven people died.

Commander Kelly Grayson: My God. This is the guy we want to sign a treaty with?

Mercer: You know what the real problem with this event is? It never happened. It's completely fictitious. And there are countless other files that show all kinds of scenarios where Chancellor Korin oppresses his people. There are even some from the other side designed to discredit [conservative political opponent] Teleya, although they're not that different from her actual speeches.

Grayson: How can you tell the difference?

Mercer: Sometimes I can't. I asked the Chancellor, and he said they call it "influence operations." They have computers generating thousands of these things every second, trying to stoke outrage. Even the angry crowds are phony.

Grayson: What do you make of it? As far as the election goes?

Mercer: I don't know.
Spoiler alert: Teleya eventually seizes power in a coup and has Korin executed before all votes are finished being counted.
posted by Servo5678 at 10:24 AM on November 3, 2023 [5 favorites]


Elon Musk tells Rishi Sunak that AI will put an end to work.

People who talk in positive terms about the end of work often handwave the process that will produce the just society they believe will emerge from it. Us all not having to work would be great, but we need work to earn the money to survive, and even if the abolition of that arrangement is inevitable (and it may not be!), there's room for a lot of suffering to happen on the way there.

As for Elon Musk, I don't know why anyone listens to him about anything. He doesn't have feet of clay as much as Play-Doh.
posted by JHarris at 10:32 AM on November 3, 2023 [3 favorites]


If a super-intelligent AI were to be created, how would we know? None of the answers that it gives to questions would make any sense to us because we are too stupid to understand its reasoning.

Not just that. A lot of our problems already have solutions that the people in power just don't want to hear. A system that tells them to stop polluting, provide free medical care, and offer a universal basic income will be ignored no matter how smart the system telling them is. There are obvious solutions to our problems, but they're not the "right" answers.
posted by JHarris at 10:38 AM on November 3, 2023 [10 favorites]


When, oh when, are we going to realize that just because someone is rich, a politician, or a celebrity, that none of this implies that they know anything about anything?

It's going to be a long time, sadly, because family wealth (not even kid wealth; kids don't have any money) directly correlates with academic achievement, so it's easy to surmise that wealthier people are smarter than average, because we use academic achievement as a proxy for 'intelligence'.

This also leads to the unfortunate outcome that mixing socioeconomic classes has no 'academic achievement' benefit to the upper classes (there are no racism/classism questions on the SAT to lower scores), so it's suspect to even do it.
posted by The_Vegetables at 10:42 AM on November 3, 2023 [3 favorites]


If a super-intelligent AI were to be created, how would we know? None of the answers that it gives to questions would make any sense to us because we are too stupid to understand its reasoning.

People are more than happy to do things without understanding exactly why; "Ours is not to wonder why, ours is just to do or die" is a pretty flipping famous phrase for a reason, and try explaining the reasoning behind common things to children. We spend a lot of time doing things because someone else told us to. So I'm not sure I buy that. Also, how smart can something be if it's just sending down authoritarian, dictatorial mandates rather than doing the human thing of consensus and coalition building?

A system that tells them to stop polluting, provide free medical care and offer a universal basic income will be ignored no matter how smart the system that told it to them

Which leads directly to this: "stop polluting" is missing a lot of steps toward an end, so of course no one would listen to that, the same way "stop drinking" [Prohibition in the US] failed. Do rich people want pollution, or do they want the income that polluting industries provide? I'm sure some are comically evil, but the majority want the income without upsetting the status quo. So "stop polluting" ... becomes "do ..." [sorry, I'm not a stable genius who can answer this question] instead.
posted by The_Vegetables at 10:52 AM on November 3, 2023


Sure, people will mindlessly follow authority, but how is that an argument that they can judge the intelligence of authority?

Also, how smart can something be if it's just sending down authoritarian, dictatorial mandates rather than doing the human thing of consensus and coalition building?

How intelligent can humans be if they can't teach division to dogs? How dumb are you that you can't teach object permanence to a 4-month-old?

However, I am sure a super-intelligent AI could manipulate us into doing anything it wanted without dictatorial mandates.
posted by betaray at 11:11 AM on November 3, 2023 [1 favorite]


Of course scaled commercial AI, particularly to the extent it drives vehicles and robots, will be the end of a lot of current work. Denying that is silly.

What's next is what's interesting.

My guess is that there's a net benefit as hugely more stuff is produced at hugely lower prices. The sheer increase in production will absorb most of the manufacturing job losses, whether directly, as the X increase in production offsets the Y decline in human labor per unit produced, or indirectly, as cheapness creates demand for other goods or services.

A lot of this is hard to imagine because step changes in consumption aren't obvious. Nobody in 1990 - to say nothing of earlier - envisioned that we would be at a 1:1 ratio of people to cell phones in the U.S., or that cell phones and their service would be the top consumption item for every non-householder and sit in the top 5 for householders.

Not too hard to imagine similar step-changes. Nothing to keep everyone from having a robot tailor in their closets and everyone wearing a new disposable outfit every day. 500,000 people making, selling, stocking, and doing MRO on 100 million machines. If everyone in the US pays $1 a day for new designs, that's $126 billion of designer revenue a year -- 100,000 designers at work.
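
(Just to sanity-check those last numbers, a quick back-of-envelope in Python, assuming a US population of roughly 345 million; the spend and the designer headcount are the figures from the paragraph above.)

    # Back-of-envelope check of the designer-revenue figures (population is an assumption).
    us_population = 345_000_000        # assumed, roughly the 2023 US population
    dollars_per_person_per_day = 1     # spend on new designs, per the comment
    designers = 100_000                # designer headcount, per the comment

    annual_revenue = us_population * dollars_per_person_per_day * 365
    print(f"annual design revenue: ${annual_revenue / 1e9:.0f} billion")  # ~ $126 billion
    print(f"revenue per designer:  ${annual_revenue / designers:,.0f}")   # ~ $1.26 million each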
posted by MattD at 12:00 PM on November 3, 2023 [1 favorite]


>"AI will put an an end to work."

That's the biggest risk of them all.


Power doesn't flow from the barrel of a gun. Power flows from the barrel of work.

Societies where people working produce power and wealth treat people great.

Societies where power doesn't come from people working treat people like trash.

You can see this from the Enlightenment and Industrial Revolution, where Europe went from dictatorships (monarchies) to democracies. Nations that didn't treat their people well fell apart and got crushed by those that did, because power came from people productively producing.

With the end of work, the people are no longer a source of power. It doesn't have to happen, but once a society appears that does discard the non-useful masses as useless, and the masses no longer have the ability to deny that society the power of their work -- the ultimate power of the masses, the power to die when neglected -- that society can out-compete those that do "waste" resources on the populace.

The richest societies, instead of being those with piles of productive citizens cooperating, become narrower and narrower autocracies. And whatever is directing it need not be an individual or even a person.

Fiction has already described this happening in a corporate dystopia, where the systems and structures of corporate governance crush even the executives, the ones "controlling" it.

If a super-intelligent AI were to be created, how would we know? None of the answers that it gives to questions would make any sense to us because we are too stupid to understand its reasoning.

Sorry, but no. I can tell when someone is smarter than me in a domain I'm at least somewhat competent in, if they are willing to do the work to show it.

There are plenty of problems that are **hard to solve** but **easy to verify**.

I'd have lots of suspicion about a super-intelligent AI that cannot solve problems in that category, yet claims to be able to solve problems in categories where it's hard to verify that a solution is correct.
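
(A toy illustration of that hard-to-solve/easy-to-verify asymmetry, sketched in Python with integer factoring as a stand-in: checking a claimed answer is a single multiplication, while finding the answer by brute force gets hopeless as the number grows.)

    # Toy illustration: verifying a factorization is trivial, finding one is not.
    def verify_factorization(n, p, q):
        """Easy: one multiplication plus a couple of sanity checks."""
        return p > 1 and q > 1 and p * q == n

    def find_factor(n):
        """Hard for large n: brute-force trial division up to sqrt(n)."""
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return None  # n is prime

    n = 2017 * 2027                                # 4088459, a small semiprime
    print(verify_factorization(n, 2017, 2027))     # True, checked instantly
    print(find_factor(n))                          # 2017, but only after trial division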
posted by NotAYakk at 12:54 PM on November 3, 2023 [1 favorite]


AI is just how Musk and other leading techbros experience the world right now.

This, I believe, is entirely correct. Consider that Musk is having Tesla waste energy on a stupid anthropomorphic robot; he is doing that because he wants to dispense with his human assistants (probably for reasons like "they have to sleep" and "I am not allowed to make them work 24/7/365").

Musk claims he "loves humanity" -- this is an abject lie. He loves himself, and considers himself to be the only human.
posted by aramaic at 1:03 PM on November 3, 2023 [6 favorites]


Fiction has already described this happening in a corporate dystopia, where the systems and structures of corporate governance crush even the executives, the ones "controlling" it.

I'd be fine with this. All I want is for every human life to be valued equally, and if that value is nil, crushed beneath a tyrannical robotic AI boot, I would consider that a satisfactory outcome.

But I recognize that I'm an outlier.
posted by Faint of Butt at 1:25 PM on November 3, 2023 [3 favorites]


Not too hard to imagine similar step-changes. Nothing to keep everyone from having a robot tailor in their closets and everyone wearing a new disposable outfit every day.

Nothing other than material science and energy production.
posted by rough ashlar at 2:22 PM on November 3, 2023 [9 favorites]


Nothing to keep everyone from having a robot tailor in their closets

Yes, but if you want a really good one, you'll have to learn a foreign language - German for instance; a lot of really cute ones come from over there.
posted by Cardinal Fang at 4:56 PM on November 3, 2023 [3 favorites]


Musk claims he "loves humanity" -- this is an abject lie. He loves himself, and considers himself to be the only human.

I have this absurd tendency to try to give people the benefit of the doubt. I don't think Musk thinks he doesn't love humanity, I think he has just never examined his own beliefs and actions in order to understand what their true effects are. I think, like many people born into wealth, he just thinks he's right because the environment he's always been surrounded by has told him he is. He doesn't understand the warping power of money, or the perverse incentives that always follow it. People he's employed tell him he's right, because if they told him he's wrong, there's a greater chance they'd be fired, and people need money to survive.

Yet I'm convinced that if you, not even working for Musk, were somehow able to tell him that, he'd either dismiss it out of hand or, worse, admit you're right, sure, but, not understanding the true import of the statement, laugh it off in a pseudo-amiable way and then just keep on doing what he's always done. I don't even think he's stupid; he's just never had the opportunity to develop real critical faculties. What he has is enough to convince him that he does anyway.

Wealth is a kind of trap, and in a way I think I'd almost be blessed to have never had it, except of course it means having to struggle to survive, and to deal with its myriad insults.
posted by JHarris at 10:51 PM on November 3, 2023 [2 favorites]


Mod note: Comment removed due to hinting at possible self harm. If anyone is having feelings of self harm or suicide ideation, please reach out to someone professionally to seek help or someone you know and trust personally to assist you.
posted by Brandon Blatcher (staff) at 12:51 PM on November 4, 2023


I'm pretty skeptical about "AI" (which, in the present context, really means LLMs) being all that significant if we were to skip forward a decade or so from today. Nothing about them looks to me like it's fundamentally a game-changer.

The perhaps-unlikely parallel I keep coming back to is... drum machines. Some of you may be old enough, and were near enough to the music scene, to remember when drum machines got cheap enough for any local band to buy one. There was a period of time when drum machines, and people's attitudes towards drum machines, were A Big Deal. I remember hearing people say they were going to fundamentally change music, they were going to put lots of musicians out of work, they were going to consign physical drums on a stage to the dustbin of music history, etc. These people were wrong, and in retrospect it was obvious they were wrong. Drum machines, as a technology, might have cost some drummers some paid studio time, but the musical styles that required human drummers still require human drummers. There are new musical styles that don't—and these might or might not be to your taste, no judgment—but they're new styles, befitting a new instrument, once people figured out how to use it with some skill (and arguably, restraint).

That's what LLMs look like to me right now. They are new, and they are very unsettling to people who look at their output and see a shitty simulacrum of what they get paid to produce, which might "do the job" for others. And those people may be right that LLMs will "do the job" and cost some penny-per-word copywriters their jobs, doing stuff like summarization (most charitable thing I can think of), or turning out shitty SEO spam (actual thing I think will be most affected). And to be blunt, I can't really come up with a ton of sympathy for SEO spam writers. I'm sure they'll find a new job doing something vaguely antisocial somewhere, maybe coming up with LLM prompts.

There will be a cultural adjustment because the assumed-truth of photos and videos will need to change. Since photography emerged as a technology and expressive form, there's been a lot of assumed veracity in the finished works, which IMO has never been quite justified. There's a significant map/territory error in thinking that a photograph "captures" real life. It just shows you what someone chose to put into that photograph, and which might differ from the reality they saw through their eyes, or you would have seen through your eyes and subsequently remembered, if you'd been standing where the camera was at that particular moment.

The net effect of the public's overly-charitable assumption of truthiness, when it comes to photo and video, hasn't necessarily been all bad. But it was always a side effect of the particular way the technology was implemented. Nothing intrinsic about images or video. 19th and early 20th century photography and motion-picture technologies were labor-intensive to edit or fabricate convincingly and in detail, if you wanted to do something complex like airbrush someone out. (Stalin's people were the gold standard, but even their highly motivated work doesn't look that great compared to any idiot's Photoshop skills today.)

There's nothing sacrosanct about an image. Images can lie, just like text or speech can lie. We just didn't have easy ways to create lies-as-images the way we have to create lies as text, or speech. Well, now with LLMs it appears we do.

So, going forward, people will have to evaluate the veracity of photos and videos just like they evaluate the veracity of claims made via text or the guy in the next cubicle over saying something wild. They will need to consider the source, its reputation, its biases, its motives, etc. But they should have been doing that anyway. They were just able to get away with a very lazy heuristic (if it's a photo or video, it's probably true-to-life) for a bunch of decades for incidental technical reasons.

That may cause some ugliness as society adjusts and shitty people try to take advantage of the remaining traces of the old, lazy heuristic and misplaced trust, but it's not a fundamental game-changer. It's a better set of tools. People are still needed to operate the tools. And there will still be a need and a market for people who use those tools within certain bounds—according to certain rules and traditions and accepted standards—and are willing to put their reputations on the line that they are "true" according to some definition of truth. It makes entities like news organizations and government agencies more important, to help the public know what is true and what isn't, but they were always important; we might just have gotten away with not paying attention to exactly how much.
posted by Kadin2048 at 9:02 PM on November 4, 2023 [5 favorites]


That's a good way of thinking of it Kadin2048. I've been thinking vaguely along those lines, it's nice to see someone else thought of it independently, and better.
posted by JHarris at 10:55 PM on November 4, 2023


You can call it AI all you want but I'll be sticking with Advanced AutoCucumber.
posted by srboisvert at 1:47 PM on November 7, 2023




This thread has been archived and is closed to new comments