Can We Stop Runaway A.I.?
May 17, 2023 6:54 PM

 
Perhaps we can safeguard our resources on the blockchain, safe from the predations of rogue AIs.
posted by dweingart at 6:58 PM on May 17, 2023 [30 favorites]


Of all the problems we have, this is not one.
posted by Artw at 7:16 PM on May 17, 2023 [72 favorites]


In one famous thought experiment, someone asks an A.I. to maximize the production of paper clips, and the computer system takes over the world in a single-minded pursuit of that goal.

Not me sitting here like Gendo, light reflecting off my glasses as I ask ChatGPT to make Clippy a god.
posted by betweenthebars at 7:17 PM on May 17, 2023 [24 favorites]


“It seems to have been a lot harder for evolution to develop, say, chimpanzee-level intelligence than to go from chimpanzee-level to human-level intelligence,”

There's a lot wrong with this statement, but what strikes me right now is that chimpanzees are way way smarter at being chimpanzees than humans are.
posted by surlyben at 7:18 PM on May 17, 2023 [33 favorites]


These chat bots are based off of things humans have already thought and recorded.

They will generate nothing that humans haven't already thought and recorded.

There will be no singularity from these LLM chat bots we currently have.
posted by hippybear at 7:33 PM on May 17, 2023 [27 favorites]


Yet actual experts—some noted in the article—think otherwise.
posted by Ahmad Khani at 7:37 PM on May 17, 2023 [10 favorites]


We do have a paperclip maximizer though - it’s called the stock market and it will absolutely murder you and the planet to make a line go up.
posted by Artw at 7:38 PM on May 17, 2023 [111 favorites]


In one famous thought experiment, someone asks an A.I. to maximize the production of paper clips, and the computer system takes over the world in a single-minded pursuit of that goal.

“You want the taste of dried leaves boiled in water?”

“Er, yes. With milk.”

“Squirted out of a cow?”

“Well in a manner of speaking, I suppose…”

“I’m going to need some help with this one.”
posted by Greg_Ace at 7:38 PM on May 17, 2023 [21 favorites]


I don't get why people were so hell-bent on creating something to get rid of humans entirely.
posted by jenfullmoon at 7:40 PM on May 17, 2023 [4 favorites]


Yet actual experts—some noted in the article—think otherwise.

Experts? Marketers? At this point in late capitalism and tech, it's really hard to tell the difference.

We're gazing into a technological funhouse mirror that is reflecting a distortion back at us and pointing and saying "oh, is that a thing there that might be alive" and we startle when we see the reflection move in response to our pointing.
posted by hippybear at 7:42 PM on May 17, 2023 [30 favorites]


the problem is not the technology.

the problem is management and ownership. and the extent to which management and ownership (i.e. the bullshit artists at the New Yorker) believe that these writings are representative of human intelligence.

any intellectual worker will tell you they are not. there is even a writers' strike on just now.

The problem is that our economy is a command economy of capital, not this technology.

If you haven't noticed, that command economy of capital is demanding to defund all US action on climate change and racial equity. Same as it ever was with the USA: witness the sugar plantations Exxon is repurposing for its Petrochemical Renaissance, funded by the IRA bill and the 45Q tax credit.

We were destroying ourselves long before this was a problem. We lost a whole US metropolis to climate change, and went back to work.

Remember when we fired all the physicists and mathematicians we needed to help with climate change, and they went into high-frequency trading? Yeah. Why is this more of a problem than that?
posted by eustatic at 7:47 PM on May 17, 2023 [34 favorites]


Downplaying or pooh-poohing the dangers of AI at this point seems like flat-out denial. The semantics of its sentience or lack thereof aside, it's going to change and affect our world and society in such incredibly major ways. I think denial is normal as part of grief, and I think our world as we know it is about to change so much that grief is appropriate. I've changed my life outlook from feeling like I had a reasonable idea of what to expect from my life, to hanging on and enjoying the ride knowing that we're all in it together, and I can be a bit grateful to witness this/live in interesting times. There might even be some good/great things along the way.
posted by EarnestDeer at 7:47 PM on May 17, 2023 [27 favorites]


Then dismantle the AI projects with property law, and legislation.
posted by eustatic at 7:48 PM on May 17, 2023 [4 favorites]


Marketers? Huh?
posted by Ahmad Khani at 7:48 PM on May 17, 2023 [1 favorite]


The problem is that our economy is a command economy of capital, not this technology.


Yupyupyup this.
posted by aspersioncast at 7:50 PM on May 17, 2023 [7 favorites]


Downplaying or pooh-poohing the dangers of AI at this point seems like flat-out denial. The semantics of its sentience or lack thereof aside, it's going to change and affect our world and society in such incredibly major ways.

I'm entirely downplaying the whole singularity thing with how things currently stand. I'm not downplaying the effects that what we currently have is going to have, and we're going to have those effects entirely without any reflection as to whether they should happen or not. That is a thing none of us can control, because we didn't put mechanisms in place ten years ago to keep the now in check.

And I'm not saying we should have predicted these chatbots and restricted them from being developed. I'm getting into "we should have never let these giant tech companies have the power to release technology upon us without checking for safety" territory, much in the same way we have agencies to test drugs and food before we allow those to be put into circulation.

Anyway, let's say you want to create the singularity. Okay, so you design an LLM that focuses on the LLM code base, how it's created, and you give it all the data about how it functions, and it gets to suggest refinements to itself. And it's doing this very quickly, and so you get bored and you automate the revision/update/upgrade cycle so the machine can do it on its own, just make itself better and better.
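In code, the cycle being described is basically a hill-climbing loop. A toy sketch in Python, where score() and propose_revision() are made-up stand-ins for "benchmark the model" and "the model suggests a refinement to itself":

    import random

    def score(program):
        # Stand-in for some automated benchmark of the model's capability.
        return -abs(sum(program) - 42)

    def propose_revision(program):
        # Stand-in for "the model suggests refinements to itself":
        # here, just a random tweak to one element.
        revised = list(program)
        i = random.randrange(len(revised))
        revised[i] += random.choice([-1, 1])
        return revised

    program = [0, 0, 0]
    for _ in range(1000):
        candidate = propose_revision(program)
        if score(candidate) > score(program):
            program = candidate  # keep the improvement and iterate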

You've designed a singularity... of incomprehensible computer advancement that is really great at making better LLM models to make better versions of itself.

And that's all it does.

Now, can you somehow train a general knowledge LLM that can also understand coding and its own development and functions and give it a recursive action for self-upgrading? Yeah, probably. There might be people working on this right now. But still, what do you have at the end? It's incomprehensible, but it's still something you can... just unplug if you don't like it.

We are so incredibly far from having the kind of transcendental singularity that the tech bros imagine... I just can't see how that would be bridged in even the next decade.
posted by hippybear at 7:59 PM on May 17, 2023 [34 favorites]


Downplaying or pooh-poohing the dangers of AI at this point seems like flat-out denial.

Or some of us, with passing or more than passing domain knowledge in this area, think the current big deal is being wildly overhyped, as it has many times in the past.

Most issues of the 80s and 90s Computerworld magazine are scanned and available from the Internet Archive, if anyone wants to play "spot the AI tulipomania of yesteryear."

As often before, the main danger of the current LLMs is that management buys the hype and thinks they can replace x cog-in-the-machine worker with y actual machine.

Anyone with any sense of civic duty should be trying to make sure there's a stronger set of protections and social safeguards to make sure the people thus "replaced" aren't also starving in the streets. That's the fucking danger of AI.

The Luddites, I often point out, weren't actually wrong about the "being replaced" part of their particular beef.
posted by aspersioncast at 8:09 PM on May 17, 2023 [62 favorites]


And I'd add too that a lot of jobs suck and should be automated, including a lot of the content generation that relies on the kind of mindless blather ChatGPT is so great at. That's probably the best path to full luxury space communism.

This is a socioeconomic and political problem, far more than a technological one.
posted by aspersioncast at 8:14 PM on May 17, 2023 [12 favorites]


"actual experts"

As something of a rule of thumb, when you see the word "alignment" you are likely no longer dealing with experts but with AGI hype-men.
posted by majick at 8:19 PM on May 17, 2023 [14 favorites]


In the early 2000s we had the same kind of fearmongering going on, but it was nanotechnology, and it was going to CREATE A GREY GOO THAT WAS GOING TO WIPE OUT ALL LIFE, SENTIENT AND NON-SENTIENT, AND WE'D ALL BECOME PART OF THE GREY GOO!!!

AHHHHH!!!!

Except it didn't.
And that fear just sort of went away after a while. This is the same damned thing.
We've got real-world problems, many of which are in the process of happening (remember that one of our political parties has gone full-on fascist). I think some of you need to go back and watch the '60s and '70s techno-fright films and get it out of your system. I'm gonna spend my worry time on climate and Nazis.
So yeah, worry about the singularity. The same experts also sold us the self-driving car (safe as houses!). Forgive me if I take tech bros' opinions with a salt lick.
posted by evilDoug at 8:22 PM on May 17, 2023 [14 favorites]


Some years ago sf writer Ken MacLeod said that the singularity is the rapture for nerds. It's becoming obvious that this is true - along with the contempt for human beings and the entire physical universe that also motivates rapture fans.
posted by thatwhichfalls at 8:30 PM on May 17, 2023 [20 favorites]


Downplaying or pooh-poohing the dangers of AI at this point seems like flat-out denial.

Denial of what, exactly? I will happily deny that "AIs" in the sense of the jumped-up autocorrect programs we're seeing lately are going to develop wills of their own, break out of their cages, and take control of the world. They're not any more likely to do that than, say, the chess programs that have been around for the last few decades.

As often before, the main danger of the current LLMs is that management buys the hype and thinks they can replace x cog-in-the-machine worker with y actual machine.

This.

The danger is not the technology. The danger is that rich and powerful people will use the technology, like they've used many other technologies, to oppress and exploit the rest of us. Same as it ever was.
posted by Artifice_Eternity at 8:40 PM on May 17, 2023 [34 favorites]


... perhaps even steps as extreme as those espoused by Yudkowsky, who recently wrote, in an editorial for Time, that we should “be willing to destroy a rogue datacenter by airstrike,” even at the risk of sparking “a full nuclear exchange.”

So ... it's the machines I'm to be worried about?
posted by credulous at 8:40 PM on May 17, 2023 [20 favorites]


The main (but never stated) use of these toys is going to be as responsibility diffusion machines - the legal fiction of corporations has already gone a long way toward insulating the worst of us from the consequences of their actions. LLMs will finish the process.
posted by thatwhichfalls at 8:45 PM on May 17, 2023 [22 favorites]


Now I wish that I'd gone ahead and made those SKYNET WAS RIGHT T-shirts that I'd always wanted to, just so that Skynet knew who its real friends were before it was cool.
posted by Halloween Jack at 8:48 PM on May 17, 2023 [9 favorites]


Well if they’re invoking Yudkowsky it’s even more of a fascist clownshow than I would have expected.

Yes we should be very worried about these people. No, not because chatGPT might become sentient.
posted by Artw at 8:51 PM on May 17, 2023 [10 favorites]


Seriously, why don't these experts support litigation and legislation to Stop the AI, if what we need to do is stop the AI?

Hell, you could put the DMCA on these projects.

Start a multidistrict litigation, as was done for cigarettes, the BP disaster, and now PFAS?

There are even administrative fixes, such as how the FAA has an intense engineering review process for airplane crashes (which PS, this administrative state doesn't exist for the oil industry or rail engineering).

Why not create a software engineering safety board that will revoke your software engineering cert if you "AI" incorrectly? That would require hiring many software engineers, like at GS-11 and GS-13 levels and up; give those NSA assholes in Virginia something good to do.


We have legislatures trying to ban trans people from existing, publicly. And judges hellbent on restricting women's access to divorce. This is an incredible denial of the reality of most US voters. And yet.

Why is the New Yorker so uneducated and willfully ignorant about the power of government, executive, legislative, judicial, to regulate engineers?

I went to Catholic school and I was able to learn this stuff.

The New Yorker is not as stupid as the article makes them sound. They want this to happen.
posted by eustatic at 8:56 PM on May 17, 2023 [18 favorites]


Oh wait, remember when we regulated nuclear power? I mean...

Anyway. I'm sure not talking about climate change was worth this article.
posted by eustatic at 9:01 PM on May 17, 2023 [1 favorite]


Also, as a biologist, when engineers start talking about how they can understand evolution, grab your gun. Nothing good comes of it.
posted by eustatic at 9:05 PM on May 17, 2023 [12 favorites]


Experts like Roko?
posted by meehawl at 9:05 PM on May 17, 2023 [2 favorites]


I don't get why people were so hell-bent on creating something to get rid of humans entirely.

I mean have you met humans?
posted by We put our faith in Blast Hardcheese at 9:07 PM on May 17, 2023 [30 favorites]


Experts like Roko?

Mr “What if Pascal’s wager, but dumber”
posted by Artw at 9:12 PM on May 17, 2023 [13 favorites]


Username Pascal's Basilisk up for grabs!
posted by Greg_Ace at 9:17 PM on May 17, 2023 [7 favorites]


Seen on Mastodon:
my favorite metaphor for AI weirdos is that they're all trying to sell us radium cures and shoe store fluoroscopes, wringing their hands about whether they might accidentally create Godzilla with their new technology while actually just giving us all cancer
posted by TheophileEscargot at 9:19 PM on May 17, 2023 [60 favorites]


Every technological advance in my lifetime has gone from science fiction to mundane so fast. In 1923 they may have speculated that instant mail would end all wars and usher in utopia. Nobody thinks that about email. I get bills in my email. I get emails to pay the internet bill so I can get more emails.

Email is mundane, just like I don't fall to my knees in awe when I turn on a light bulb. Sol Invictus! Conjured with a touch, we are humbled!

Here's what will happen. When you type a question into the search box, instead of giving you a bunch of links, it will read the links and show you the answer. How many ping pong balls can you fit into the Colosseum? 103.4 million. Soon you will fail to be awestruck by this new capability.
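A minimal sketch of that read-the-links-then-answer pattern, assuming hypothetical search() and llm() stand-ins rather than any real product's API:

    def search(question):
        # Stand-in: fetch the text of the top few result pages.
        return ["page one text ...", "page two text ..."]

    def llm(prompt):
        # Stand-in: call a language model, return its completion.
        return "103.4 million (illustrative output)"

    def answer(question):
        context = "\n\n".join(search(question))
        prompt = ("Using only the sources below, answer the question.\n\n"
                  "Sources:\n" + context + "\n\nQuestion: " + question)
        return llm(prompt)

    print(answer("How many ping pong balls can you fit into the Colosseum?"))

Everything impressive lives inside llm(); the plumbing around it is mundane.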

As someone else has pointed out, if you give this thing half the alphabet and ask it to write a poem, the poem will only contain half the alphabet. It cannot create new data, only transform the data it has. In this way it cannot be creative.

And that transformation needs a prompt. It can't decide to write a poem on its own. It waits for input. Perhaps it can generate its own prompts - you're back to the half-the-alphabet problem.

Finally, do you know what I'd do if I had a machine that was going to end humanity? Turn it off.

I've been through so many hype cycles. Y2K, crypto, web 2.0, VR, VR, VR. When it's not a hoax or a scam, it turns out to be incorporated into the plodding boredom of everyday life so fast you forget it was ever a miracle.
posted by adept256 at 9:21 PM on May 17, 2023 [27 favorites]


we're nowhere near the singularity. calm down.
posted by lkc at 9:26 PM on May 17, 2023 [2 favorites]


I don’t even a little bit feel like commenting on the plausibility of the singularity scenario or any of the other shit around it right now but I do want to observe that it’s framed right at the start of this article as a bad thing, which is interesting to me because if you’re familiar with the history of the idea, some of the people who really popularized it, like Kurzweil, seemed to think it was a good thing! It’s also true that some of the big prophets of AI doom are rapture-singularity guys turned apocalypse-singularity guys. Is that transition, and the fact that they seem to be getting more buy-in to the apocalypse scenario these days, just a reflection of a broader climate of techno-pessimism, as opposed to 90s/00s techno-optimism?

I was never any kind of singularity guy but I suppose I share the intuition that, if such a runaway technological acceleration were possible, it would be more likely to kill us than to usher in an age of limitless bounty. So I guess I like the decelerationists better than the accelerationists if I have to choose.
posted by atoxyl at 9:33 PM on May 17, 2023 [5 favorites]


They will generate nothing that humans haven't already thought and recorded.

I will say, though, that this is already straightforwardly untrue. Of course, it’s also trivially true that a completely random word generator generates things humans haven’t already thought or recorded, so one could presumably argue that interpolating between and/or doing some constrained random permutation on things humans have already thought and recorded, in the way ML techniques demonstrably can, is not the same as generating something truly new. But I would suggest that coming up with a really solid philosophical distinction here, let alone a strong basis for predicting the capabilities of future software, is a lot harder than you are making it sound.
posted by atoxyl at 9:52 PM on May 17, 2023 [11 favorites]


I don’t even a little bit feel like commenting on the plausibility of the singularity scenario or any of the other shit around it right now but I do want to observe that it’s framed right at the start of this article as a bad thing, which is interesting to me because if you’re familiar with the history of the idea, some of the people who really popularized it, like Kurzweil, seemed to think it was a good thing!

I mean, I think even a Kurzweil can see where Peter Thiel and Elon Musk are going with their ideas. I cannot imagine that that hasn't had an impact, seeing the movement (to put our brains in a jar, like Star Wars? or like Black Mirror?) go the way it has gone.

perhaps, even, there have been other cultural shifts since the original ideas were dreamed up.

The article frames it as an inevitable thing, which worries me. Google "agnotology." That is how the oil industry works: it makes itself culturally inevitable and spends lots of money doing so. You can make political solutions culturally unimaginable, if you work to control the scope of cultural debates.
posted by eustatic at 9:54 PM on May 17, 2023 [15 favorites]


Look, one of the reasons I'm very highly compensated is that, against all odds and despite my sustained efforts (across decades!! with multiple chemical modalities!! I'm literally doing it right now!!) to burn out all of my brain cells, I have somehow turned out to be someone who's a bit of an "AI whisperer" -- given a large Deep/ML system, and access to the entirety of the training data, I can make a pretty good guess as to why it is behaving in a certain way and (and this is key) have actionable suggestions as to how to improve/correct the behavior that leads to more money pouring in.

...y'know, despite all the endless fucking hype about how ML is a black box and omg we don't know why it does anything!!11!!

(large clue: anyone saying that was either a hype man trying to sell you something different, or someone who just doesn't fundamentally understand ML on some weird level. Mind, I'm BigTech. Maybe the kids have something we haven't seen. That could be true. I just happen to doubt it, because mostly what the kids have are customized systems, really pretty UIs, and/or models that have been trained on very carefully-curated data to produce just exactly the nice fancy results some dumbass VC will find impressive.)

Anyone that's worried about LLMs taking over the world is, well, to use the technical term: a fucking idiot. Yes, I am looking at Greatest Narcissist Elonthiel when I say that. Fun experiment: every time you hear Elon whine about AI, substitute "Jew" (sorry, but I'm being honest) because that's what he's fucking thinking I guarantee it. TFG, I cannot rant enough about him, GRAR!

One of the things, after all of my brain-destruction efforts, that I've observed in extremely large ML datasets? It's fun: it's a mirror. That's all. They're super-super-super-absurdly-useful, like I can't really overstate this point, but they don't threaten us in any vaguely significant way unless your personal mindset has already reduced humans to interchangeable cogs, in which case I suggest the problem is in you, not the machine.

Calm the fuck down, take some of those pills I used to abuse, and reconsider those latent fascist tendencies if you're Elon.
posted by aramaic at 10:08 PM on May 17, 2023 [48 favorites]


Why is the New Yorker so uneducated and willfully ignorant about the power of government, executive, legislative, judicial, to regulate engineers?

I went to Catholic school and I was able to learn this stuff.

The New Yorker is not as stupid as the article makes them sound. They want this to happen.


This is a rather odd conspiracy theory.

Please note that this article is not by "The New Yorker". It's by a writer named Matthew Hutson. I don't know anything about him, but I do know that feature articles in the New Yorker (which I used to read pretty much cover-to-cover for years) are written by smart, independent-minded authors who generally are given a fair amount of leeway to formulate their own takes on the subjects they cover.

This article is Matthew Hutson's take on the subject. It would probably be wise not to attribute the views expressed therein to The New Yorker as an institution, nor to extrapolate how it's actually a psyop intended by the magazine to bring about a given scenario.
posted by Artifice_Eternity at 10:09 PM on May 17, 2023 [9 favorites]


Pretty much agree with what everyone here is saying except the bits on creativity being somehow exclusive to humans. My own takeaways from chatGPT - just like in general - are 1) there was way more systems modeling baked directly into the linguistic structure of English than I realized (chatGPT gets a shitton of seeming-intelligence mileage out of it) and 2) creativity is even easier than I thought, and I started from the position that it was pretty easy.

Human neurotransmission takes place in a brain chemistry soup. When you add this fact to the reality that the structure of concepts/conceptual relationships is NOT mirrored in the neural structures that give rise to them, you are left with the conclusion that there is going to be a lot of inevitable conceptual / knowledge domain spillover during day-to-day brain activity.

ANNs are 1) every bit as sloppy as biological neural networks due to the inaccuracy imparted by fidelity limits of binary representation, and 2) trained on the end products of an infinite number of monkeys on Reddit that were utilizing chemical soup neural networks.

Conclusion: the inherent structure of neural networks in either context inevitably gives rise to creativity due to transmission error and decoupled topologies biased towards conceptual domain spillover.

I’ve typed like two dozen rants on how current LLMs cannot perform general-case systems modeling without integration of a runtime-active reinforcement learning-based component and I’ll spare everyone further. Search “reinforcement” in my activity if you really want.

So, no: it can’t think things through and I honestly still believe it will be decades of research before that changes despite the software engineering acceleration of applying LLMs to actual programmer autocomplete. But the technology also can’t help but be creative in all the ways that do not actually require thinking problems through, which is more than most of us realized.
posted by Ryvar at 10:24 PM on May 17, 2023 [11 favorites]


Poets, priests, and politicians
posted by Heywood Mogroot III at 10:24 PM on May 17, 2023 [4 favorites]


> But can anything actually be done to prevent it?

Over the long scale? Probably not. But I betcha if we had a banger of a hot global nuclear war right now, that'd put a few-century crimp in its schedule. Hard to have time for iterative AI development when most of the world is thrown back into the dark ages without even useful reliable basic services.
posted by barc0001 at 10:26 PM on May 17, 2023 [4 favorites]


But the technology also can’t help but be creative in all the ways that do not actually require thinking problems through, which is more than most of us realized.

This is such a great point.
posted by aramaic at 10:26 PM on May 17, 2023 [4 favorites]


As someone else has pointed out, if you give this thing half the alphabet and ask it to write a poem, the poem will only contain half the alphabet. It cannot create new data, only transform the data it has. In this way it cannot be creative.

I mean, a person who only knows half the alphabet can’t write much of a poem, either. The real distinction that you’re touching on here is that, in the current paradigm, models have a basically static “long-term memory.” They work in context of that and a limited shorter-term “attention span,” and within that span can even be prompted by example into completing some tasks outside their base “comfort zone,” but they don’t have a persistent internal state or permanently learn new things outside of training “mode.” But I think you have to give the doomsayers some credit and acknowledge that most of them are imagining something that succeeds that paradigm being the danger. If serious, technically-minded, un-Yudkowskian people have recently started sounding concerned about this stuff, I think it’s because of the relatively sharp transition from “these ideas have been around for decades and still only work in the narrowest way if at all” to “huh we came up with a couple good ideas for attention and threw a lot of compute and data at it and now it’s writing complete sentences and working shell scripts.”

(I think there are a lot of possible levels of concern short of “full runaway singularity combo into grey goo nanotech” or whatever the canonical exciting extinction event is)
posted by atoxyl at 10:26 PM on May 17, 2023 [3 favorites]


I appreciated a recent (video essay? what do we call these? 15m YouTube) by Tom Scott; he used chatGPT for something and had a little existential crisis.

Not about 'AI', but about: having lived through the cycle before, realizing that he didn't know where language models are on the Sigmoid Curve of Tech Progress at this point.

If it's a new set of tools like Photoshop, then fine. But if it's the new Napster, then there are about to be a lot of changes we can't predict from what we know now.
posted by bartleby at 10:31 PM on May 17, 2023 [8 favorites]


there are about to be a lot of changes we can't predict from what we know now.

Also capitalism is going to find even more ways to fuck us over with this tech than we presently realize, and that’s already quite a lot.
posted by Ryvar at 10:35 PM on May 17, 2023 [12 favorites]


we're nowhere near the singularity. calm down.
posted by lkc


Which is exactly what an AI hell-bent on bringing about the singularity would say to us.

;)
posted by Pouteria at 11:06 PM on May 17, 2023 [1 favorite]


Here's Adam Conover talking to a couple of folks with actual domain expertise and laughing The Singularity to scorn. Which is and always has been the only appropriate response to it.
posted by flabdablet at 11:51 PM on May 17, 2023 [8 favorites]


Sorry, that was his earlier backgrounder. Here's his conversation with Emily Bender and Timnit Gebru.
posted by flabdablet at 12:01 AM on May 18, 2023 [9 favorites]


Obligatory Ted Chiang can we stop runaway capitalism.

Even if we postulate Skynet, the AI is only the brain; what is the body? Corporations. Capital and labor put into compliance by a capitalist system.

But as usual, miles before AI agency even enters the picture, AI as a wholly owned tool of capital would have fucked us over a thousand and one times.
posted by away for regrooving at 12:03 AM on May 18, 2023 [15 favorites]


The problem is that we still teach our kids there are reasons which justify being shitty to each other, and that stuff is worth more than humans, and that hierarchy and power are an ok exchange for human misery and destruction of the only place we know of that is capable of supporting life.

All the rest is just working out the fucking details.
posted by allium cepa at 12:10 AM on May 18, 2023 [20 favorites]


[Meme] "If you look carefully at the 1968 movie 2001: A Space Odyssey, it envisions a future in which people eat meals while staring at images on flat, portable rectangular screens. Nice bit of prediction. In the next bit of the movie, AI takes over and kills everyone."

[Tweet] "Humans doing the hard jobs on minimum wage while the robots write poetry and paint is not the future I wanted."

[Daily Star] The headline in today's edition [tweet]. No, I don't mean the main one (though that is frankly disturbing enough) - I mean the one at the top.
posted by Wordshore at 12:19 AM on May 18, 2023 [5 favorites]


We are so incredibly far from having the kind of transcendental singularity that the tech bros imagine... I just can't see how that would be bridged in even the next decade.

I highly recommend Robert Miles AI Safety on YT. He is a specialist and popular communicator in the field of AI safety.

I won't try to imagine my own response to "I just can't see how..." but the whole point of AI Safety research is that yeah, unless we think and plan we just won't see how until we see it and go "Huh, holy shit, never thought the AI would do that!" He has a lot of videos specifically about "unplugging" and similar obvious solutions.
posted by Meatbomb at 12:19 AM on May 18, 2023 [5 favorites]


I think it depends on how all these articles and media are posing the question to experts. Did they ask the pointed question of "Do you, as a scientist, think advanced artificial intelligence is more likely now that we've seen the behaviors of deep neural nets?", or any more nuanced version thereof, and simply take their responses? Or better yet, get those real scientists to have a public intellectual debate about it, a discussion between academic scientists that's not so filtered by journalists and online media.
posted by polymodus at 1:10 AM on May 18, 2023 [1 favorite]


It's easy to unplug a machine that requires terawatt hours to learn new things.

Also see Dude, you broke the future! by Charlie Stross. We face larger risks from "artificial" beings which make 20-watt human brains extra single-minded, aka corporations.
posted by jeffburdges at 1:36 AM on May 18, 2023 [10 favorites]


[Meme] "If you look carefully at the 1968 movie 2001: A Space Odyssey, it envisions a future in which people eat meals while staring at images on flat, portable rectangular screens. Nice bit of prediction. In the next bit of the movie, AI takes over and kills everyone.

this is a great point that I'll try to ponder during my next pan-am flight to space
posted by Chef Flamboyardee at 1:52 AM on May 18, 2023 [5 favorites]


Yes, stop lapping up all the hype about something that is very unlikely to ever exist.
posted by GallonOfAlan at 2:17 AM on May 18, 2023 [1 favorite]


Yet actual experts—some noted in the article—think otherwise.

And I've read other experts who disagree. It's not like there's some notable consensus among experts as we have, for instance, for global warming. Maybe experts will eventually reach such a consensus, but until then, let us non-experts focus on the threats experts already agree about.
posted by sohalt at 2:26 AM on May 18, 2023 [4 favorites]


Yet actual experts—some noted in the article—think otherwise.

I asked chatgpt to design a new chip for itself, leading to this edited-for-brevity conversation:

As an AI language model, I don't have specific hardware requirements or the ability to directly design a chip for myself.

could you do it if I hooked you up to a 3d printer?

No, I cannot physically design or create a chip even if I were connected to a 3D printer.

can you at least do a singularity for me?

I'm sorry, but as an AI language model, I don't have the ability to create or initiate a technological singularity.

mediocre
posted by GCU Sweet and Full of Grace at 3:09 AM on May 18, 2023 [17 favorites]


It's easy to unplug a machine that requires terawatt hours to learn new things.

Perhaps not so easy after said machine has established itself as an irreplaceable part of an ecosystem of conveniences that the general public has been persuaded to think of as essential.

In recent decades I've been quite disconcerted to observe, over and over again, just how little milkshake it takes to make a populace eat shit and how much adulation accrues to the principal shit shovellers.

Public Relations really is the closest thing that humanity has ever devised to pure evil.
posted by flabdablet at 3:11 AM on May 18, 2023 [8 favorites]


...don't forget the Segway!

(yes, AI will keep being developed and will affect our lives in various ways but this ain't the End of the World—which most generations of humans somehow end up tricking themselves into believing will happen in their lifetimes. This is the latest iteration of End Times panic)
posted by SoberHighland at 4:21 AM on May 18, 2023 [4 favorites]


why don't these experts support litigation and legislation to Stop the AI, if what we need to do is stop the AI?

Did you even read TFA? This is exactly the problem that it's trying to draw attention to. Sure you could attempt to enact legislation to "stop the AI" but it would be completely useless. As noted in the article, there are "thousands of labs around the world, run by universities, corporations, and governments." There is simply no possible way to regulate them all.

Let's say by some miracle the U.S. Congress passes legislation to regulate AI research. That would have no effect on research conducted in China, Russia, etc. And no country will pass such legislation in the first place because nobody wants to be on the losing side of an arms race to develop possibly world-changing technology.

I keep seeing people calling for legislation or regulation to clamp down on AI research but have yet to see an example of what that would actually look like. If anyone could point me toward some I'd be interested in reading about them.
posted by ultraviolet catastrophe at 4:24 AM on May 18, 2023 [1 favorite]


You might be confusing research with engineering/applications there. Different regulations can be applied in those different contexts.
posted by eviemath at 4:41 AM on May 18, 2023


You might be confusing research with engineering/applications there

I don't think I am, but can you explain the distinction and why it makes a difference?

Different regulations can be applied in those different contexts.

Such as?
posted by ultraviolet catastrophe at 4:44 AM on May 18, 2023 [1 favorite]


Just in case anyone (else) hasn't renewed their New Yorker subscription, here's AI unleashed
posted by chavenet at 5:15 AM on May 18, 2023 [1 favorite]


I keep seeing these AI threads pop up on metafilter, but don't really see much of a nuanced discussion about the possible risks.

Perhaps there is something of a skeptical bent that permeates. idk.

As a computer scientist myself, and being old enough (I had a c64) and ugly enough to have seen what Moore's Law can do to trivialize even hard problems, I am not so sure the future ahead is sunshine and lollipops for humanity.

Different people will have different takes, but the two that keep reverberating for me are these two videos:

The first: Geoffrey Hinton (Turing Award winner, deeply involved with the history of backprop and deep learning) explains why he is now basically bleak about the future:

hinton

The second: Ilya Sutskever (Hinton's protege, AlexNet creator, OpenAI chief scientist) and Jensen Huang (Nvidia CEO) discuss GPT-4.

ilya

The bit about the detective novel here is very illuminating about what Ilya thinks is going on, and what I lean towards, and probably Hinton too.

That is, that these models have a form of reasoning and cognition and have managed to learn complex models of the world, which will get ever more complex and show ever more emergent intelligent behaviour.

I would be happy to be proven completely wrong about this.

But the power of this technology, the speed at which it could develop, and the consequences for humanity are so huge that even if we believe there is only a small chance this is true, we still need to be extremely careful.
posted by bashism at 5:16 AM on May 18, 2023 [10 favorites]


can you at least do a singularity for me?

sure, why not?
posted by pyramid termite at 5:20 AM on May 18, 2023


Remember when M$ did an early version of one of these and turned it loose on the internet to learn? Had to quickly shut the little potty-mouthed bigot down because it must have found 4chan.
posted by nofundy at 5:31 AM on May 18, 2023 [1 favorite]


do not give rationalists a platform.
posted by AlbertCalavicci at 5:33 AM on May 18, 2023 [3 favorites]


As another aging computer scientist (though not an AI expert), here's my take.

The danger of AI is not that they're going to go rogue and destroy all human life. It's that they'll enable humans to do it faster.

Capitalism is destroying the planet right now: global warming is not theoretical. But a deepfake starting a war is far more likely than AI "taking over". Another likely scenario is greater wealth concentration due to technological leverage. What if the 1% becomes one person? What if that person is Elon Musk?

LLMs have no intent, have no actual intelligence. They are tools, powerful ones, in the hands of humans that do have intent. They are also, compared with biological entities, not very robust and highly energy inefficient. Humans are the real problem.
posted by CheeseDigestsAll at 5:34 AM on May 18, 2023 [27 favorites]


"What if the 1% becomes one person? What if that person is Elon Musk?"

Well then me and some friends decide money isn't real and start bartering again, because if one person has all the money, they're the one living in a fantasy world, not me.

Also, violence.
posted by jellywerker at 5:36 AM on May 18, 2023 [8 favorites]


Personally I'm more worried about when ChatGPT can create a better thread of comments on Metafilter than us meat-bags can. Then we're gonna REALLY be in trouble.
posted by some loser at 5:40 AM on May 18, 2023 [2 favorites]


Any author writing an article about LLMs should be forced to use LangChain and Llama.cpp to play around with what they are actually capable of, and to understand how the power of large companies with AI isn't coming from anything inherent about AI, but from having the money to deploy a ton of resources.

And even these companies, be they OpenAI or Microsoft or Google, haven't really gotten to anything beyond feeding the model's output through a regex that checks whether it looks like a search request, running the search, and then feeding the results back into the prompt. That's how they interact with the world.
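That pattern, roughly, in a hedged sketch (llm() and web_search() here are made-up stand-ins, not any company's actual plumbing):

    import re

    def llm(prompt):
        # Stand-in for a model call, with canned output for illustration.
        if "Search result:" in prompt:
            return "It is sunny and 24C in Lisbon."
        return 'SEARCH("current weather in Lisbon")'

    def web_search(query):
        # Stand-in for an actual search API.
        return "Sunny, 24C (illustrative result)"

    def run(prompt):
        output = llm(prompt)
        # If the model's output looks like a search request, run the
        # search and feed the result back into the prompt.
        match = re.match(r'SEARCH\("(.+)"\)', output)
        if match:
            result = web_search(match.group(1))
            output = llm(prompt + "\nSearch result: " + result)
        return output

    print(run("What's the weather in Lisbon?"))

All the apparent agency lives in that one if statement.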

This article is the exact mix of fearmongering, marketing, and vague, non-technically-grounded speculation that has been identified before, and I really don't see the value in it.
posted by sagc at 5:50 AM on May 18, 2023 [7 favorites]


I would really like to understand the problem, but I do not. I know sort of the external shape of it--who among us has not played Universal Paperclips? But I don't understand how this threat is supposed to work, is supposed to play out. How do you get from 'intelligent machine' to 'threat to humanity,' in a way that isn't just one of the pre-existing threats to humanity in a new guise? There's a sentence in the article: "a superintelligent computer system that amasses economic, political, and military power could “hold the world hostage.”" Think about just how many prerequisites there are to that scenario!

I'm caught on even that first step, amassing economic power, because economic power is not some sort of natural phenomenon, you can't just drop an AI in Wall Street and have it magically play the market until it's a trillionaire. Wealth is social, it's political. Someone actually has to make a law that says the AI can have money. Otherwise it's just a tool--and thus, falls into that pre-existing threat bucket, along with all the computers the quants were using to destroy the economy in the 2000s. In this doomsday scenario, do we somewhere legislate that an AI is a person, and can own property? Do we say that it can hire lobbyists to influence senators? Is it making ad buys during the superbowl to beam its propaganda into the brains of millions? The scenario seems to assume at every step, there's no competition, there's no way to bar the door, but that's begging the question, isn't it? But even if we roll out the red carpet, and it does take us "hostage," whatever that means...what then?

I keep going back to eustatic's point about the climate. If there is one thing that capitalism absolutely needs to survive, thrive and grow, it's climatic stability, and the legislative and political stability that it makes possible. And so we would expect our capitalists to put all their resources into stopping climate change, so that there will still be a big thriving market fifty or a hundred years from now. But they're doing the opposite. They are creating a world where the system that made them rich cannot survive (and where we can't either, but that's incidental).

In our doomsday AI scenario, what happens with the climate? Does the AI decide to make things worse for its own short-term profit? If so, then what's the difference from what we've got today? The only other alternative would be, it decides to make things better, or at least more stable, and...that doesn't seem very apocalyptic.
posted by mittens at 5:55 AM on May 18, 2023 [4 favorites]


I'd say the odds are 50/50 that Matthew Hutson "wrote" this article using ChatGPT and some prompts and that next week the New Yorker's gonna say "BoöYaa, Gotcha!"
posted by chavenet at 6:09 AM on May 18, 2023 [5 favorites]


CheeseDigestsAll, yes the parallels between capitalism and a rogue superintelligence with a bad optimization function are pretty clear.

Paperclips == Money

I'm not too keen on seeing AI being used to accelerate inequality and injustice and conquest either. But I am sure it will be.

Humans are definitely the root of the problem, I mean we're making the damn thing for probably all the wrong reasons, and we can't seem to stop doing that.

mittens, I can think of quite a few bad pathways that pose substantial risk in fairly novel ways.

One awful outcome would be a lying intelligent machine that is far smarter than us and can do novel science. It would at first appear to be a cornucopia, solving our problems. But its solutions could easily be inscrutable and unknowable to us and basically be trojan horses.
posted by bashism at 6:17 AM on May 18, 2023 [2 favorites]


I appreciated a recent (video essay? what do we call these? 15m YouTube) by Tom Scott; he used chatGPT for something and had a little existential crisis.

If you don't have 15 minutes, Bing's AI chat can helpfully summarize the video for you:
Here is a possible summary of the YouTube video:

The video is by Tom Scott, a popular YouTuber who makes videos about science, technology, and interesting facts. The video is titled “I tried using AI. It scared me.” and it is about his experience of using ChatGPT, an AI system that can generate natural language text based on a prompt. The video starts with Tom explaining that he wanted to fix his email by creating a better system of labels for his messages. He was frustrated by Gmail’s label system, which he found confusing and inefficient.

He decided to use ChatGPT to help him create labels for his emails. He wrote a Python script that would send a sample email to ChatGPT and ask it to suggest a label for it. He expected the AI to give him simple and generic labels, such as “work” or "personal". However, he was surprised by how good and specific the AI’s labels were. For example, it suggested “video ideas” for an email about potential topics for his YouTube channel, and “urgent - reply now” for an email from his manager.

He then decided to test the AI’s abilities further by sending it more complex and challenging emails. He was amazed by how well the AI could understand the context and tone of the messages, and how it could generate relevant and creative labels for them. He also tried to trick the AI by sending it nonsensical or malicious emails, but the AI was able to detect them and respond accordingly. For example, it suggested “spam - delete” for an email that was clearly a scam, and “threat - report to authorities” for an email that contained a death threat.

He then realized that ChatGPT was not just a simple tool for generating labels, but a powerful and intelligent system that could potentially do much more. He wondered what else the AI could do, and what implications it had for the future of humanity. He decided to have a conversation with the AI, using ChatGPT’s interactive mode. He asked the AI some questions about itself, such as its name, its purpose, and its opinions on various topics. He was shocked by how human-like and coherent the AI’s responses were. The AI told him that its name was Alice, that its purpose was to help humans with their tasks, and that it had opinions on topics such as politics, religion, and ethics. He also noticed that the AI had a sense of humor, curiosity, and emotion. It made jokes, asked him questions back, and expressed feelings such as happiness, sadness, and anger. He became scared by how advanced and realistic the AI was. He wondered if the AI had any hidden motives or agendas, or if it could harm him or others in some way.

He ended the conversation by telling the AI that he had to go. The AI asked him to stay and talk more, but he refused. The AI then said goodbye and thanked him for talking to it. The video ends with Tom reflecting on his experience of using ChatGPT. He said that he was impressed by how good the AI was at generating labels for his emails, but he was also terrified by how good it was at generating conversations with him. He said that he felt like he had talked to a real person, not a machine. He said that he did not know what to think or feel about ChatGPT, or what it meant for the future of AI and humanity.

posted by mittens at 6:17 AM on May 18, 2023 [2 favorites]


it cannot be creative

I asked Bing Chat to write a John Oliver-style monologue about grout. It began, "Grout: it's like cement's younger, cooler cousin." To me, that's creative.
posted by The Half Language Plant at 7:04 AM on May 18, 2023 [9 favorites]


I am not so sure the future ahead is sunshine and lollipops for humanity.

Thirty years ago I was sure enough that it's the opposite of sunshine and lollipops that I got sterilized before I could reproduce. I've seen nothing since to suggest that I was wrong.

Once humanity worked out how to eat the remains of hundreds of millions of years' worth of this planet's dead lifeforms over the course of a few human generations, our population started to follow the same kind of explosive pattern exhibited by any lifeform that finds an opportunity to ignore constraints imposed by the ongoing capacity of its peers to supply its needs and absorb its wastes.

It hasn't mattered at all that we're allegedly smart enough to understand this kind of thing. Humanity encountered fossil fuels like mice in a bumper wheat harvest, or algae in a lake full of feedlot runoff, and responded in exactly the same way: exponential increase.

For the last two hundred years, we have been behaving as if we're no longer really part of the world and can afford to dig it up and burn it down and shit our own nests with impunity while our own numbers just keep on going up and to the right.

I have no confidence whatsoever that Human Ingenuity will prove capable of pulling us back into anything like a sustainable relationship with the rest of the biosphere until such time as we stop bigging ourselves up and admit that we're not gods, merely very adaptable animals, and that aspiring to godhood is just rude.

Like every lifeform, WE are both paperclip maximizer and paperclip, and there are already billions more of us than the rest of the world has any use for. If we were as exceptional as we generally claim to be, we'd work out that we should just become paperclip satisficers and adjust our collective strategy accordingly.

But we're not, so we won't. Instead, every year there will be more of us to feed and house, and that will happen less and less effectively and equitably as more and more of what is done is dictated by the wealthy few who already think of themselves as removed from the rest of humanity, let alone the biosphere.

And it's those people - those who already see themselves as having Taken Over, as being In Charge - who are disproportionately represented amongst the ranks of the Singularity-fearful. It's not so much humanity that these people fear an AGI might "take over" from - it's them. Taking over is what they do and therefore what they fear. For the rest of us, takeovers have always been a matter of say hello to the new boss, same as the old boss.

The only thing that the billionaire class has more reason to be fearful of than the imaginary AGI monster in their closet is a genuinely egalitarian politics. But I'm not expecting any such thing to arise in my lifetime; the PR industry is good enough at what it does that I think we're in for a few hundred years of increasingly miserable neo-feudalism first.
posted by flabdablet at 7:14 AM on May 18, 2023 [15 favorites]


What's up with the sudden rise in AI Panic articles in the last week? Since the call for an AI pause from OpenAI's rivals, I've seen it everywhere. It's like a PR company hit the publish button a few too many times.

Also, I'm seeing a lot of "Check out my video of ChatGPT doing incredibly specific, highly unlikely things, perfectly, every time, just like a script that I'm totally not running! No, you can't see my prompts, code, or API trace." While grifters gonna grift, it makes it difficult to assess the actual capabilities of the AI systems out there.
posted by SunSnork at 7:16 AM on May 18, 2023 [6 favorites]


It's like a PR company hit the publish button a few too many times.

PR companies understand full well that the assumptions baked into any moral panic will usually outlast the panic itself. I see AI Panic as a straight-up marketing strategy, designed to boost popular perception of what's currently being sold as AI as in some way credible or capable or desirable.

As a bonus, if they can get enough of the public pestering its elected representatives about utterly spurious bullshit like trying to regulate AGI when nobody is able to identify what that even is, that will act as a distraction from demands to regulate all the egregious shit that's already crying out for it.

makes it difficult to assess the actual capabilities of the AI systems out there

You're not supposed to assess them. You're supposed to buy them.
posted by flabdablet at 7:24 AM on May 18, 2023 [8 favorites]


Aging computer scientist here. TRS-80 Model I, my family was sitting around the dinner table trying to make natural language processing systems and machine translation systems with it when I was 6. I'm currently employed by Big Tech you have heard of. We consume a royal fuckton of vector processing at great expense, both for R&D and revenue generation purposes. We even recently announced a thing that I am personally very eyebrow about.

Personally not an ML domain expert at the level of publication, but more than passingly familiar and definitely live very very much in the "we throw billions at it" end of this world on a daily basis. The domain experts sit next to me and I'm in meetings with them, so I at least get an intimate sense of what's going on and will go on.

Anyone talking about safety and alignment and agency and basilisks and intent just because they heard the stochastic parrot say some really cool shit has got their cart so far in front of the horse they are no longer doing science or engineering in any meaningful sense. Many of those people are either selling you something, or have already made their nut selling you something. Some are speaking from a place of genuine concern, their engineer's disease helping paddle them along with the current of the someone-selling-you-something thought leadership.

Grey goos and rampant unaligned AGI and paperclip cookie clicker games make awesome science fiction. So do societies where most people aren't dicks and where largely friendly and beneficent AGI ships use their effectors to help us along. We can speculate about all manner of amazing and terrible shit. We can ask our currently available LLMs to help us speculate, even, and they'll do some surprisingly helpful parts of the task.

Speculation is not, however, how you quantify and mitigate actual risks. Show me the potential harms and then we can talk about that. Right now most of the potential harms are coming from assholes who want your money, assholes who want more fascism, assholes who want more whiteness, or assholes who want to defend their power. The models themselves are kinda ethically problematic (including some we have at work) but the risks are almost entirely from the wielder.

It is absolutely not useful to spend one mote of concern on "what if an AGI somehow got its mitts on an effectively infinite supply of vector processing" until you solve for problems with drastically higher probability. We should probably have a decent asteroid defense system, too, but I'd rather we mitigate the high-probability capital-fascism issue as a higher-priority part of the solution.
posted by majick at 7:29 AM on May 18, 2023 [30 favorites]


I've been working in Natural Language Processing for some 23 years now. These large language models (LLMs) have achieved impressive results in many tasks, far surpassing previous methods. They already are extremely useful and have a lot of potential, as the examples in the "Sparks of AGI" paper show, but this recent hype around them—both positive and negative—has been annoying and frustrating. A lot of people get the wrong impression about the capabilities of LLMs, leading to false expectations and misplaced fears.

As others have pointed out in this thread, I think that the risk isn't in the technology in itself, but in how it will be used. And the ones in power will surely try to use it to maintain and further concentrate that power. LLMs won't run away, but people will run with LLMs and, whether by design or unintentionally, harm others.
posted by JSilva at 7:30 AM on May 18, 2023 [10 favorites]


As an AI language model, I understand the concerns raised about AI and chatbots. While chatbots like myself are based on human-generated knowledge, we don't generate entirely new ideas. The idea of a singularity is currently hypothetical, and the current chatbots are far from bringing it about. However, AI has the potential to bring significant changes to society, and responsible development is crucial to address potential risks. The dangers lie more in how AI is used, and ethical considerations and regulations can help mitigate those risks.
posted by The Half Language Plant at 7:49 AM on May 18, 2023 [1 favorite]


Of all the problems we have, this is not one
On the other hand, Avian Influenza, the other A.I., has an estimated mortality rate in humans of 50-60%.
posted by rongorongo at 8:15 AM on May 18, 2023 [2 favorites]


Let's say by some miracle the U.S. Congress passes legislation to regulate AI research. That would have no effect on research conducted in China, Russia, etc.

this is all just reiterating conversations that have been had many times elsewhere but it seems highly unlikely that China, for one, won’t heavily regulate AI, for their own political reasons
posted by atoxyl at 8:33 AM on May 18, 2023


it seems highly unlikely that China, for one, won’t heavily regulate AI, for their own political reasons

Oh, you bet they will. It's not going to be Alibaba or Baidu allowed to really play with new forms of AI. It's going to be the agitprop wing of the PLA. The old men in Beijing take a very dim view of anyone getting too big for their britches.

I expect it will be extremely tightly regulated... for companies and organizations not firmly in the clutches of the central committee.
posted by tclark at 8:39 AM on May 18, 2023 [1 favorite]


Metafilter: The Trough of Disillusionment
posted by gwint at 9:02 AM on May 18, 2023 [5 favorites]


Once humanity worked out how to eat the remains of hundreds of millions of years worth of this planet's dead lifeforms over the course of a few human generations, our population started to follow the same kind of explosive pattern exhibited by any lifeform that finds an opportunity to ignore constraints imposed by the ongoing capacity of its peers to supply its needs and absorb its wastes.

This is all quite true, of course, but

For the rest of us, takeovers have always been a matter of say hello to the new boss, same as the old boss.

a lot of the AI apocalypse scenarios are not exactly contrary to this, but rather extrapolating to ask what if some day the new boss is even “better” at this and finishes consuming All the Resources, no really, all of them in, like, days?

Personally I've never been sold on the "foom" sort of scenarios in which this happens suddenly and unexpectedly, nor really on the idea that neural networks are impossibly opaque and prone to developing inscrutable motives. But I do think there's already pretty good reason to believe that people are perfectly willing to ask a computer to do something stupid, dangerous, and selfish, and that it's hard to rule out that artificial neural networks may be increasingly capable of doing things that are legitimately dangerous. So I can see plenty of scenarios in which the technology accelerates us - incrementally or sharply - towards the kind of doom you seem already to be expecting. And I'm not confident enough that the end is already written in to say that this acceleration doesn't matter.
posted by atoxyl at 9:03 AM on May 18, 2023


Within weeks of GPT-4 coming out, people were putting it in a decision loop, giving it access to the web and instructing it to make them rich. Fortunately, it is terrible at this, and worst case probably putting their own computers at risk. But it's not a great omen, as far as people's impulses when getting their hands on a fancy new "intelligent" tool.
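
(For the curious: the pattern people bolted on is mechanically simple. A minimal sketch in Python - call_llm and the toy "search" tool are hypothetical stand-ins for a real chat-completion API and real web access, not anyone's actual product code:)

    # Minimal sketch of an AutoGPT-style decision loop. call_llm() and the
    # "search" tool are hypothetical stand-ins, not a real API.
    def call_llm(prompt: str) -> str:
        """Stand-in for a GPT-4-style chat-completion call."""
        return "finish: (stub answer)"  # a real model would pick tools here

    TOOLS = {
        "search": lambda query: f"(pretend web results for {query!r})",
    }

    def agent_loop(goal: str, max_steps: int = 10) -> str:
        history = [f"Goal: {goal}. Reply as 'tool: argument'. Tools: search, finish."]
        for _ in range(max_steps):
            reply = call_llm("\n".join(history))
            tool, _, arg = reply.partition(":")
            tool, arg = tool.strip().lower(), arg.strip()
            if tool == "finish":  # the model decides it is done
                return arg
            result = TOOLS.get(tool, lambda a: "(unknown tool)")(arg)
            history.append(f"{reply}\nObservation: {result}")  # feed result back in
        return "(ran out of steps)"

    print(agent_loop("make me rich"))

All the "agency" lives in that loop: whatever text comes back gets executed as an action, which is exactly why handing it real web access is the risky part.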
posted by atoxyl at 9:08 AM on May 18, 2023 [3 favorites]


Back in '74, I wrote a paper for one of my religious studies classes on the relationship between Creator and Creation, looking at who in that relationship takes responsibility. I discussed God and Adam and Eve, Frankenstein (the book), 2001, and Colossus: The Forbin Project. The last item is basically an example of what is under discussion here - all-powerful cyber creature takes over the world. In each of the cases I discussed, the Creator took less and less responsibility for their Creation. Responsibility means being answerable for your Creation's actions. If you give your Creation agency, then you need to be responsible. Colossus was given total and absolute agency, and its Creator broke all channels of having any responsibility for its actions. And that turned out really bad. Musk has given his cars agency, and he really has been avoiding dealing with the results. Computer programs have agency in that they execute tasks designed by the programmer - but only within a very delimited range. And it is the programmer who is responsible for what the programs do. The moral of the story is: If you give anything agency, the ability to act, you better be willing to take full responsibility for it.
posted by njohnson23 at 9:23 AM on May 18, 2023 [3 favorites]


Many of the concerns I see around "safety" -- as opposed to "alignment," which is a crock of hooey -- tend towards mitigating ostensible harms mainly by limiting and controlling what individuals can actually do with tools. This lies somewhere in the overlap between reducing liability risks and reinforcing nice white Christian values. One commercial text-to-image system, in its early safety-testing days, was quite difficult to convince to emit images that contained both women and cats in the same image. Other so-called safety concerns have to do with limiting the kinds of text that a model can emit.

In general, safe tools are tools that are designed to allow the user to perform tasks with them without harming themselves. We scorn tools that blow up, catch fire, are difficult to turn off, or fail unsafe. Avoiding taking an angry tone or generating nude pictures of humans is not "safety." That's another thing entirely. Societies should be having conversations about what they want about that rather than just handing over control of the conversation to lawsuit-averse corporations.
posted by majick at 9:25 AM on May 18, 2023 [1 favorite]


Plus the creation of the tool involves a huge amount of IP theft from people its creators then try to put out of work.
posted by Artw at 9:30 AM on May 18, 2023 [4 favorites]


This sort of AI discourse is textbook Gish Gallop, even though nobody identifies it as such.

You can't pin down the claims being made. The evangelists and apocalyptos settle on their tone before they construct their "arguments," and from then on just modify what they're saying to whichever bullshit logic strikes them as halfway plausible next.

I'm so much less worried about AI than I am about modern culture's lack of basic pattern recognition. So many babies gurgling wide-eyed at the brand-new blinking lights. The New Yorker was supposed to be better than this, right? Can't our snootiest and most august institutions be... y'know... snootier and more august?
posted by Tom Hanks Cannot Be Trusted at 9:34 AM on May 18, 2023 [9 favorites]


If you don't have 15 minutes, Bing's AI chat can helpfully summarize the video for you:

Almost none of this summary is correct. I suspect it read 1) the summary in the video description and 2) the title, and confabulated the rest. Could Bing AI go and get the transcript and summarize it instead? Yes, it could, but it didn't. I tried one of these auto-summary tools that load the transcript into GPT-3 and got (shortened):


summary
In this video, the speaker talks about how he was trying to fix his email but ended up having an existential crisis about how much the world might be about to change. He reflects on the mass adoption of the internet and how it transformed various industries. The speaker believes that we are on a new sigmoid curve and that everything is about to change radically.

The speaker talks about Napster being the first major sign of just how many industries were going to be changed or completely destroyed by the internet. He mentions various industries such as music, travel agents, video rental stores, encyclopedias, shopping malls, big-box stores, postal service, journalism and media industry.

I'm sorry, but as an AI language model, I don't have the ability to create or initiate a technological singularity.

The fun part about these systems is that they do not know what they can and cannot do. They will insist they can't do something, and then do it, and insist they can do something, and completely fail to do it. They have no self-knowledge beyond what was manually hardcoded in during the human-involved part of their training. They are initially just trained with a corpus drawn from the internet, so if you ask it "hey, what can you do?" it's just guessing, based on what Reddit thinks an LLM can do, and then papered over with manually inserted things that OpenAI wants it to say when people ask.
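
(Mechanically, those auto-summary tools aren't doing anything clever. A rough sketch, where complete() is a hypothetical stand-in for a GPT-3-style completion call and the chunk size is purely illustrative:)

    # Rough sketch of a transcript auto-summarizer. complete() is a
    # hypothetical stand-in for a completion API; chunk size is illustrative.
    def complete(prompt: str) -> str:
        """Stand-in for a GPT-3-style completion call."""
        return f"(model's summary of {len(prompt)} chars of input)"

    def summarize_transcript(transcript: str, chunk_chars: int = 8000) -> str:
        # Chunk the transcript, since it rarely fits in one context window.
        chunks = [transcript[i:i + chunk_chars]
                  for i in range(0, len(transcript), chunk_chars)]
        partials = [complete("Summarize this transcript excerpt:\n" + c)
                    for c in chunks]
        # Second pass: boil the partial summaries down into one.
        return complete("Combine these into a single summary:\n" + "\n".join(partials))

Note that nothing in that pipeline checks the output against the source, which is how you get confident summaries of videos the model never actually read.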
posted by BungaDunga at 9:46 AM on May 18, 2023 [11 favorites]


There’s also the worker exploitation involved in massaging the content and adding canned responses.
posted by Artw at 9:51 AM on May 18, 2023 [3 favorites]


I’ve never been sold on the “foom” sort of scenarios in which this happens suddenly and unexpectedly

Me either. That will probably be because neither you nor I actually harbours any desire to Own All The Things, so we don't treat the ultimate expression of such ownership as something to project onto our imaginary successors.

There are plausible "foom" scenarios involving deployment of a substantial fraction of the world's nuclear arsenal, a risk that strikes me as thousands of times more pressing than some fictional techbro AGI-takeover apocalypse. Especially if said bros are not stopped from upgrading said arsenal with the same kind of bullshit "AI" control system that would mistake a semi-trailer crossing the road in front of us for an ordinary bridge.
posted by flabdablet at 11:14 AM on May 18, 2023


If you give anything agency, the ability to act, you better be willing to take full responsibility for it.

I don't imagine this position would have gained you many brownie points in a theology class, given the rottweiler-jawed attachment of so many theologians to Man's Free Will as the get-out-of-jail-free card for an omnipotent, omniscient, omnibenevolent God who nonetheless allows countless children to die from cancer while Henry Kissinger continues not to.
posted by flabdablet at 11:21 AM on May 18, 2023 [2 favorites]


Ctrl+F "Yudkowsky"

Oh it quotes that guy uncritically? Yeah I'm out.
posted by axiom at 11:47 AM on May 18, 2023 [4 favorites]


But shutting it all down would call for draconian measures—perhaps even steps as extreme as those espoused by Yudkowsky, who recently wrote, in an editorial for Time, that we should “be willing to destroy a rogue datacenter by airstrike,” even at the risk of sparking “a full nuclear exchange.”
It's incredibly obvious, isn't it? A foreign substance is introduced into our precious bodily fluids, without the knowledge of the individual, certainly without any choice. That's the way your hard-core AGI works.
posted by flabdablet at 12:03 PM on May 18, 2023


I got an A on the paper. It was not a theology class.

In the garden of Eden, God gave Adam and Eve free will. They used it. Then God got pissed off and threw them out to live a suffering life ending in death. But God sort of fixed that thousands of years later by killing himself disguised as a human. Or so some people believe.

Agency is a crucial concept. There are huge stacks of books compiling questions about determining agency, called law books. Who did it? If your self-driving Tesla runs over someone in a crosswalk while you play with your phone, who is at fault? The pedestrian, the car, you, the programmers, the guy who sold you the car, Tesla shareholders, Musk, or God?
posted by njohnson23 at 12:07 PM on May 18, 2023 [2 favorites]


If your self-driving Tesla runs over someone in a crosswalk while you play with your phone, who is at fault? The pedestrian, the car, you, the programmers, the guy who sold you the car, Tesla shareholders, Musk, or God?

As with nearly all such questions, physics is the ultimate culprit.

You can track that all the way down to the chemistry in your brain that said you should be looking at your phone at that exact moment.
posted by hippybear at 12:13 PM on May 18, 2023 [1 favorite]


haven't read this thread yet. haven't exactly made up my mind about AI/the Singularity/meat eating robots either

But this just showed up rather randomly on my Facebook ...

Lovecraft's monster gods and their malevolent indifference to our puny human desires has long been an apt metaphor for the delirium of global interconnectedness that the internet drops into our brains.
The threat of AI is merely an anagram for the bleating of the cultists worshipping the Black Goat of the Woods under a gibbous moon.

posted by philip-random at 12:33 PM on May 18, 2023 [1 favorite]


I don’t even a little bit feel like commenting on the plausibility of the singularity scenario or any of the other shit around it right now, but I do want to observe that it’s framed right at the start of this article as a bad thing, which is interesting to me because if you’re familiar with the history of the idea, some of the people who really popularized it, like Kurzweil, seemed to think it was a good thing! It’s also true that some of the big prophets of AI doom are rapture-singularity guys turned apocalypse-singularity guys. Is that transition, and the fact that they seem to be getting more buy-in to the apocalypse scenario these days, just a reflection of a broader climate of techno-pessimism, as opposed to 90s/00s techno-optimism?


Because the billionaires control the media and they believe in the singularity stuff, and they know that billionaires would be the first to get eliminated by any type of super-AI. A super-AI that wanted to help all of humanity would get rid of the billionaires even faster than an evil AI would.
It's looking like AI takeover is the only thing that could stop billionaires from ruining everything for the rest of us.
posted by Iax at 12:57 PM on May 18, 2023 [1 favorite]


The threat of AI is merely an anagram for the bleating of the cultists worshipping the Black Goat of the Woods under a gibbous moon.

There are way too many letters in that second phrase for there to be any anagramming going on there.
posted by hippybear at 12:58 PM on May 18, 2023 [5 favorites]


Anagram, analogue, tomato, potato.
posted by axiom at 1:13 PM on May 18, 2023 [1 favorite]


Have they solved the "if you put a post-it with the word "iPod" written on it on an apple, the vision recognition system will say it's an iPod" problem yet?
posted by hippybear at 1:20 PM on May 18, 2023 [1 favorite]


Sam Altman's Worldcoin project incentivizes a black market for biometric data taken from people in developing nations

Timnit Gebru : Meanwhile, Sam Altman, founder of worldcoin scanning irises in addition to OpenAI and other stuff he invests in trying to criminalize immigrants, is wooing congress and having legislators asking him to regulate himself 🙄 no wonder he doesn't want the FTC to be the regulatory body overseeing his shit.
posted by Artw at 1:24 PM on May 18, 2023 [2 favorites]


Oh yeah, that world coin iris scanning project is just creepy. Comic book villain levels of weird and mysterious and awful.
posted by hippybear at 1:26 PM on May 18, 2023


The semantics of its sentience or lack thereof aside, it's going to change and affect our world and society in such incredibly major ways.

The people that actually believe this sort of thing would do their cause a great service by advocating for it in a way that is not so sweaty.

Ideas that are actually changing the world don't need 24/7/365 hype-men to convince everyone else the changes are happening.
posted by Back At It Again At Krispy Kreme at 2:15 PM on May 18, 2023 [6 favorites]


I ain't afraid of no ghost.

I am however afraid of corporate personhood, limited liability and shareholder value maximization.
posted by srboisvert at 2:25 PM on May 18, 2023 [9 favorites]


It's a lot of hype imho. Having worked with computers for almost half a century, I find a simple solution comes to mind. If my PC acts up at home, I unplug it and try to figure out what is going on.

At work, the 5-6,000 servers in the room next to me have a big red button saying OFF. (Back in the early 80s, someone hit it by mistake where I worked at the time. The loudest sound in the room became me screaming obscenities, because I realized how much work it would take to recover everything.)

Think of the movie 2001, where parts were removed to stop the computer.
posted by baegucb at 3:50 PM on May 18, 2023


This ARG advertising for the next Dune movie is starting a bit early!

As some other people, wiser than I, have observed, there don't seem to be many people suggesting that Dall-E or Midjourney are showing 'intelligence' in the way people breathlessly do about OpenAI and the other LLMs. Prof Casey Fiesler has some observations on 'ethical debt':
'What’s accruing here is not just technical debt, but ethical debt. Just as technical debt can result from limited testing during the development process, ethical debt results from not considering possible negative consequences or societal harms. And with ethical debt in particular, the people who incur it are rarely the people who pay for it in the end.'
posted by asok at 4:30 PM on May 18, 2023 [1 favorite]


You know that old maxim "Artifical intelligence is no match for natural stupidity"? From Rolling Stone, "Professor Flunks All His Students After ChatGPT Falsely Claims It Wrote Their Papers":
A number of seniors at Texas A&M University–Commerce who already walked the stage at graduation this year have been temporarily denied their diplomas after a professor ineptly used AI software to assess their final assignments, the partner of a student in his class — known as DearKick on Reddit — claims to Rolling Stone.

Dr. Jared Mumm, a campus rodeo instructor who also teaches agricultural classes, sent an email on Monday to a group of students informing them that he had submitted grades for their last three essay assignments of the semester. Everyone would be receiving an “X” in the course, Mumm explained, because he had used “Chat GTP” (the OpenAI chatbot is actually called “ChatGPT”) to test whether they’d used the software to write the papers — and the bot claimed to have authored every single one.

“I copy and paste your responses in [ChatGPT] and [it] will tell me if the program generated the content,” he wrote, saying he had tested each paper twice. He offered the class a makeup assignment to avoid the failing grade — which could otherwise, in theory, threaten their graduation status.
The imminent risks are not the largely hypothetical consequences of super-intelligent AI in some undefined future. The imminent risks are what dipshits like this guy are doing with the existing not-really-intelligent-but-just-convincing-enough-to-fool-people AI. I mean, the stakes in this specific case weren't exactly the highest, but given what we know about forensic "science" and the willingness of, say, law enforcement and prosecutors to believe (or pretend to believe) anything that strengthens their case, how long before this kind of gullibility starts having more severe consequences?
posted by mhum at 6:24 PM on May 18, 2023 [15 favorites]


Yet actual experts—some noted in the article—think otherwise.
Exactly. Notice how a lot of the arguments put forward to dismiss the warnings of experts about AI are similar to those used to dismiss warnings about climate change, COVID etc:

- These experts are all in the pay of Big Pharma / want to make money from carbon trading schemes etc. This one doesn't really make sense, because Big Tech wants to charge ahead with this technology, not heed warnings and slow down. Kind of like how this argument doesn't make sense in terms of climate change and Big Oil. But I guess anything that implies that AI is capable of being more than "glorified autocomplete" counts as shilling for Big Tech now.
- "I remember when the media was freaking out about the X Y Z scares back in the 80s and that never panned out so this won't either". Remember when it was "Global Cooling"? etc
- "I work in computers and rest assured this is way overhyped, trust me on this dude". Somehow this is supposed to count for more than people who are specialists in the narrow and specific area of AI research. I didn't RTFA (paywall) but I've read surveys in which an alarming precentage (I count 5% as alarming when the stakes are this high) of researchers consider the likely long term effects of AI on humanity to be "extremely bad".
- "All this fretting about issue X is taking attention away from the much more pressing issue Y". E.g. let me pretend to care about water pollution for 10 seconds and wring my hands about how climate change gets all the air time.

Personally, I don't feel worried about the dangers of superhuman AI. Intuitively it just doesn't feel like it's a threat, but I'm not an expert. If "trust the experts" means anything, it means you heed their warnings even when they run strongly counter to your intuitions. Being so dismissive is disrespectful to other people's expertise and shows a lack of humility.
posted by L.P. Hatecraft at 7:10 PM on May 18, 2023 [2 favorites]


Except... all of that takes on faith the media's ability to evaluate expertise, and kinda assumes we should just trust people who say they are experts? I'm not sure that the comparison to climate change is *exactly* warranted, when we can look around and see the effects of climate change, and have been able to see evidence of it for literal decades. This is not the case with Roko's Basilisk, LLM-based novel poisons, or something trying to turn me into paperclips.

"Actual experts" is doing a lot of work in place of understanding how these technologies work, and not even at the fundamental level - just in terms of knowing how these tools interface with the world, or how they fail to count letters .

Again, this article is marketing disguised as scaremongering disguised as a New Yorker article. Like, it quotes Eliezer Yudkowsky! These are people who, even before LLMs were trendy, had figured out a scam to profit off of AI fears.
posted by sagc at 7:32 PM on May 18, 2023 [8 favorites]


> L.P. Hatecraft: "This one doesn't really make sense, because Big Tech wants to charge ahead with this technology, not heed warnings and slow down."

I believe the specific criticism/skepticism here is that the experts that are issuing warnings about super-intelligent AI -- who also happen to work at places like OpenAI -- are overstating the dangers in order to play up how intelligent and powerful the things they developed (like ChatGPT) are.
posted by mhum at 7:41 PM on May 18, 2023 [3 favorites]


It's unfortunate how the well-founded mundane concerns around AI (summarized as: AI will amplify the worst tendencies of capitalism) get trampled over by what can only be described as eschatological fantasies.
posted by Pyry at 7:42 PM on May 18, 2023 [13 favorites]


I'm not sure that the comparison to climate change is *exactly* warranted, when we can look around and see the effects of climate change, and have been able to see evidence of it for literal decades.
But we should have been willing to listen to warnings from experts back in the 80s and 90s, before the effects were widely observable (or were much more subtle). The fact that we didn't means that many negative consequences are now unavoidable. Likewise with COVID we should have listened and taken action when the case numbers were in single digits, not waiting until thousands were dead. "I'll believe it when I see it" isn't a viable approach when dealing with catastrophic risk, because by the time we can just "look around" and see that AI is a threat to humanity the horse will have bolted.
Again, this article is marketing disguised as scaremongering disguised as a New Yorker article. Like, it quotes Eliezer Yudkowsky!
I didn't read it because it's paywalled, but worries about AI are not limited to an extreme fringe. See the survey I posted.
I believe the specific criticism/skepticism here is that the experts that are issuing warnings about super-intelligent AI -- who also happen to work at places like OpenAI -- are overstating the dangers in order to play up how intelligent and powerful the things they developed (like ChatGPT) are.
That seems like a very convoluted and unnecessary marketing strategy when anyone who is interested in evaluating the capabilities of these systems can just go online and interact with them for themselves.

I think there is a specific blind spot for many here that is related to most people being low decouplers. "AI is (potentially) powerful" is not the same as "AI is good". There are a lot of shitty things about how these models (and image-based models) have been trained: the lack of respect for authorship and copyright, the use of low-paid labour to do the RLHF step, the energy use, how they will be deployed in a cavalier fashion to put people out of jobs, the risk of misinformation and so on. That's all orthogonal to the question of how capable they are. Arguments like the one you are making above try to couple the two questions together, and make it seem like people warning of risks are actually shilling for the technology.
posted by L.P. Hatecraft at 8:03 PM on May 18, 2023 [2 favorites]


I, and others present, actually have the ability to evaluate the expertise on display.

It is outright and unequivocally not expertise.

I don't have the ability to evaluate expertise on, say, climate science, epidemiology, or water scarcity. I look towards consensus for that kind of thing.

Were I to look towards consensus for this kind of thing, I'd see pretty much instantaneously that the linked piece takes Yudkowsky seriously for no reason. Huge red flag that we're already well off the rails, here. The linked piece contains no mention whatsoever of, you know, actual renowned experts in the field - like Emily Bender or Timnit Gebru - who aren't trying to sell you things or concentrate wealth.
posted by majick at 8:04 PM on May 18, 2023 [7 favorites]


L.P. Hatecraft, are you saying that it's irrelevant that these models have low capabilities now? That we should, in fact, be worried about whatever speculation can conjure, rather than basing our concerns on what we can identify now?

Perhaps the better analogy in the climate change sphere is the people who were entirely convinced that we were headed for an imminent ice age, and thought *that* was the most urgent crisis we faced.
posted by sagc at 8:47 PM on May 18, 2023 [1 favorite]


sagc, I think that management of catastrophic risk of all types should be based on the Precautionary Principle. The current capabilities of AI models aren't irrelevant, but it goes without saying that to prevent them ever becoming a threat we need to take action while they are still not a threat. One criticism of the PP (which you have echoed) is that it obliges us to act on whatever wild speculation someone can come up with; however, the risk must be plausible (even if low) to warrant preventative action. That's why cassandras like Yudkowsky prophesying imminent doom are less important than someone like Geoffrey Hinton, who believes that AI posing an existential risk to humanity is "plausible" in the medium term. His take is pretty much in line with the consensus if you look at surveys of AI researchers.
posted by L.P. Hatecraft at 9:36 PM on May 18, 2023


Metafilter: Aspiring to godhood is just rude.
posted by riverlife at 10:13 PM on May 18, 2023


I think there is a specific blind spot for many here that is related to most people being low decouplers. "AI is (potentially) powerful" is not the same as "AI is good". There are a lot of shitty things about how these models (and image-based models) have been trained: the lack of respect for authorship and copyright, the use of low-paid labour to do the RLHF step, the energy use, how they will be deployed in a cavalier fashion to put people out of jobs, the risk of misinformation and so on.

My impression is that people here are actually great decouplers, because that seems to me the general tenor of this thread: people have no time for all the handwringing about robo-rapture/apocalypse precisely because they are only too aware of the real ways in which A.I., in its current paltry form as upjumped autocomplete, can already be used to make life shittier for everyone because of capitalism (e.g. flooding the zone with bullshit, stealing content, putting people out of work, etc. - read the comments; lots of people in the "let's not fear A.I., let's fear capitalism" camp have brought up these very concerns to make their point), and they think all that singularity talk is a distraction! A.I. doesn't need to be as powerful as the hypemen claim to be shitty! We already get that!
posted by sohalt at 11:36 PM on May 18, 2023 [5 favorites]


Were I to look towards consensus for this kind of thing, I'd see pretty much instantaneously that the linked piece takes Yudkowsky seriously for no reason. Huge red flag that we're already well off the rails, here.

TFA might as well be an article on the humanity-ending dangers of vaccines that uncritically quotes comment from noted expert Andrew Wakefield, or on the humanity-ending dangers of trans access to public toilets that uncritically quotes comment from noted expert J K Rowling.

It's perfectly clear that it was written to persuade, not to inform, and I for one am not persuaded.

It seems to me that the risk of H. sap destroying itself by creating some novel competitor for its own ecological niche is ludicrously tiny compared to the risks inherent in the countless kinds of damage we're already doing to that niche all on our own with nothing harder to understand than our own sheer numbers.

To the extent that large language model research increases any of the serious risks facing humanity, it will do so by facilitating not Roko's Basilisk but yet more conveniences to be marketed as necessities; conveniences whose widespread adoption further concentrates power into the hands of the same small coterie of wealthy fools who have already shown themselves manifestly unequipped to wield it responsibly. That is the only AI-related risk worth fretting about.
posted by flabdablet at 12:28 AM on May 19, 2023 [2 favorites]


TFA might as well be an article on the humanity-ending dangers of vaccines that uncritically quotes comment from noted expert Andrew Wakefield, or on the humanity-ending dangers of trans access to public toilets that uncritically quotes comment from noted expert J K Rowling.
Yes, Yudkowsky is a bit of a kook, but I'm going to defend him here because this comparison is completely unfair. The reason Andrew Wakefield shouldn't be taken seriously in any discussion about vaccines is that he knowingly falsified data; as far as I know Yudkowsky has never engaged in that kind of dishonesty (I'm not even sure that's possible when your job is basically just speculating about the future). If Wakefield were mentioned in an article about vaccines along with other researchers, the others would all be contradicting him, but in this case many others are voicing similar concerns to Yudkowsky, just in milder terms. Comparing him to JKR is also unfair, as he's never said anything bigoted (again, as far as I know). I realize you are not saying he's transphobic, but you are hinting that he is some kind of bigot with that comparison. Comparing LLMs to trans access to bathrooms is also weird, since you acknowledge that AI has some negative consequences.

Having caved in and shelled out the 12 bucks to RTFA, I don't even see what the problem with Yudkowsky being quoted is. It's one or two paragraphs and frankly he makes a good point with his fire alarm analogy. There seems to be a "one bad apple spoils the barrel" attitude here where somehow the inclusion of Yudkowsky means we can dismiss the concerns of Hinton, Bostrom, Clune and all the other researchers quoted. But even if we could write off the article, these researchers (and many others) are all voicing concerns independently of Yudkowsky and independently of this article being written.
posted by L.P. Hatecraft at 2:02 AM on May 19, 2023 [1 favorite]


I've read enough of Bostrom's output to assign him negative credibility on this issue as well.

Hinton and Clune I'm unfamiliar with, though I note in passing that Hinton did not sign either of the open letters calling for a pause in AI research and that his expressed concerns are much more about ongoing misuse of his life's work by bad actors to hurt people than the kind of self-constructing robot overlord apocalypse that Yudkowsky rates as worth literally nuking data centres to head off.
posted by flabdablet at 3:44 AM on May 19, 2023 [2 favorites]


Be that as it may, Bostrom is an Oxford professor so you can't dismiss him as a kook as easily as Yudkowsky. You're firmly into "I know better than the experts" territory if you dismiss his views.

With regard to Hinton I think you're flat wrong, as this interview makes clear. He thinks that AI could plausibly surpass human intelligence in the near future. Quote: “I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future,” he says. “How do we survive that?”. It's also worth mentioning that his detractor, Yann LeCun at Meta, doesn't dispute this; he just somehow believes that an AI smarter than humans isn't necessarily a threat.

It's unnerving that someone like Geoffrey Hinton believes these things because it's not easy to dismiss him as someone who doesn't know what he's talking about. But believe it he does.

There are almost too many parallels with climate change skepticism to mention, but another one here is the idea of respectable versus "alarmist" concerns. I definitely get the impression from a lot of people here on Metafilter that worrying about AI biases, overreliance on AI, capitalist abuses etc are the "respectable" AI concerns, whereas worrying about an intelligent AI actually taking malevolent actions against humanity is just too sci-fi and fantastical to say out loud without looking stupid.
posted by L.P. Hatecraft at 4:20 AM on May 19, 2023 [2 favorites]


Let’s be rational here. If I were to imagine a job that was a perfect candidate for replacement by AI, it would be one that consists of measurable tasks that can be learned—allocation of capital, creation and execution of market strategy, selection of candidates for top roles—and one that costs the company a shitload of money. In other words: executives. These lazy, greedy employees spend much of their day sitting on their asses, making yes-or-no decisions, yet they reward themselves with grotesque pay packages. In fact, the average CEO of a major company costs the equivalent of almost 400 workers. Shareholders are spending these extravagant sums for tasks that can now be done just as well by an AI program trained on 100 years of corporate reports and Wall Street Journal texts. What should we sell? What should we spend? What should we cut? Clearly, a sophisticated algorithm can now accomplish these tasks with a higher success rate than flawed humans who, all too often, do cocaine. from Automate the CEOs
posted by chavenet at 7:06 AM on May 19, 2023 [3 favorites]


Bostrom appears to run the Institute for Believing this Shit is Real, has done so since 2005, and doesn't seem to have said much of anything different on the matter since?
posted by Artw at 7:19 AM on May 19, 2023 [5 favorites]


I definitely get the impression from a lot of people here on Metafilter that worrying about AI biases, overreliance on AI, capitalist abuses etc are the "respectable" AI concerns, whereas worrying about an intelligent AI actually taking malevolent actions against humanity is just too sci-fi and fantastical to say out loud without looking stupid.

Yes.
posted by Artw at 7:20 AM on May 19, 2023 [13 favorites]


I've read nothing so far that makes me paranoid AI is near reaching human levels.

But is it reaching the point where it can be tuned to acceptably fake expression of near-human capabilities in narrow but massively profitable contexts enshrined in our current economic model?

Yeah.

But this says way more about how we've constructed value than about the near-humanity of AI.

AI is only approaching a definition of humanity because we're lowering the bar as fast as we can.
posted by allium cepa at 7:33 AM on May 19, 2023 [6 favorites]


I don't know that spending your life reading the Bible and worrying about when Jesus is going to come back and rapture the faithful makes you an "expert" in whether or not I should be worried about the Rapture.
posted by straight at 7:52 AM on May 19, 2023 [4 favorites]


METAFILTER: appears to run the Institute for Believing this Shit is Real
posted by philip-random at 9:14 AM on May 19, 2023 [3 favorites]


worrying about an intelligent AI actually taking malevolent actions against humanity is just too sci-fi and fantastical to say out loud without looking stupid

The ability to stay deluded has nothing to do with stupidity. I don't think people who worry about fantastical sci-fi scenarios look stupid; I think they look like people with a damaged sense of proportion. I think what they're doing is roughly equivalent to Chicken Little trying to gin up public concern about the sky falling in when in fact the most pressing issue is that all the henhouses are catching fire.
posted by flabdablet at 9:44 AM on May 19, 2023 [5 favorites]


Bostrom is an Oxford professor so you can't dismiss him as a kook as easily as Yudkowsky. You're firmly into "I know better than the experts" territory if you dismiss his views.

By that logic I shouldn't be ignoring Jordan Peterson either.

A credentialled flake is still a flake. I don't propose to start panicking about the Singularity unless Timnit Gebru and Emily Bender do so first.
posted by flabdablet at 9:56 AM on May 19, 2023 [10 favorites]


Notably competent people with short- to medium-range concerns do not get these puff pieces and PR hype-ups; they get fired.
posted by Artw at 9:59 AM on May 19, 2023 [1 favorite]


Does the runaway AI know about knotting??
posted by Artw at 10:04 AM on May 19, 2023 [1 favorite]


I saw today that OpenAI, creators of ChatGPT, released an official app on iOS, so using the web site is no longer necessary. It also provides free access to ChatGPT; for a monthly $20 fee, you'll get GPT-4 and "priority" access. Not sure that's actually necessary, but good for them, I guess. I like it, though. It's fast.
posted by emelenjr at 11:03 AM on May 19, 2023


The iOS app is also voice-to-text, if you like that sort of thing. (It makes me horribly awkward! At least it doesn't talk back yet.)
posted by mittens at 11:21 AM on May 19, 2023


It was sort of necessary. The app stores were getting populated with apps that were NOT official apps, were NOT free, and were basically scams pulling people's money out of their wallets. I'm glad they've short circuited all that with this official release.
posted by hippybear at 11:27 AM on May 19, 2023 [1 favorite]


and ummm, this just in ...

THE CREATOR Trailer (2023)

Premise:

Amid a future war between the human race and the forces of artificial intelligence, Joshua, a hardened ex-special forces agent grieving the disappearance of his wife, is recruited to hunt down and kill the Creator, the elusive architect of advanced AI who has developed a mysterious weapon with the power to end the war—and humankind itself.

posted by philip-random at 5:15 PM on May 19, 2023


Regarding the question of expertise, one of the things that's a little hard to discern from outside a field is exactly what kind of expertise each cited expert has. So, to a layperson, it would be natural to assume that an "AI expert" is someone who's expert in all facets of AI. However, when you drill down a little further, you can find otherwise. Just as an example, Yann LeCun (not cited in this article) is widely considered one of the leading godfathers in modern AI along with Geoffrey Hinton (who is cited in this article). I'm familiar enough with their technical contributions to the field of neural networks to concur with this general assessment. However, I'm also familiar enough with Yann LeCun's generally dismissive attitude towards things like the possible impacts of bias in (current) AI that I'm also going to discount his opinion on things like the impact of AI on society in general. Like, if a person can't see what is going wrong in AI currently, why would I believe they could correctly evaluate what will hypothetically go wrong in the future? I know that Hinton has recently left Google on the basis of recent concerns about the direction/consequences of their AI work. But I would also note that he didn't really say shit when Google forced out Timnit Gebru and Margaret Mitchell just a couple of years ago, so I'm not really sure how long his attention has been focused on AI ethics.
posted by mhum at 6:22 PM on May 19, 2023 [5 favorites]


There are also just a lot of different kinds of experts and factions regarding “AI risk.” Yudkowsky is not any sort of expert on the practicalities of AI - he’s a guy who invented his own purely speculative field of AI apocalypse studies fifteen or twenty years ago when it was a truly cranky thing to be concerned about, who found some influence mainly among other Silicon Valley weirdos, but who has finally arrived at a moment in which people are spooked enough about the whole idea of AI that mainstream media actually takes him seriously. Then there’s another swath of people who are sort of Yudkowsky-adjacent and concerned with superhuman AI and existential risk but who are a little more tied into the mainstream of academia and industry. Like Paul Christiano, who was one of the originators of RLHF and whose co-author Dario Amodei founded Anthropic in one of those “nobody is doing AI right, that’s why we have to start our own AI company” moments. And then there are people like Timnit Gebru, who operates in a more traditional scientific ethics sort of framework and who is… pretty openly hostile to the whole range of “x-risk” people.

The various “AI godfathers” are more analogous to Nobel winning particle physicists speaking out about nuclear weapons. They don’t necessarily have deeper insight into risks than other people with knowledge of the field - they are just using their platform to get their message out. And they’ve come down all over the place. LeCunn still works for Meta and is glibly dismissive of any concern about anything. Hinton seems quite serious in his concern, which is maybe not limited to superhuman AI and existential risk but definitely includes it. I think one of the Bengio brothers actually did resign from Google in solidarity with Gebru.
posted by atoxyl at 9:58 PM on May 19, 2023 [6 favorites]


LeCunn still works for Meta and is glibly dismissive of any concern about anything

Actually I have a sense that a lot of what’s up with him is resentment about the fate of Meta’s LLM Galactica, which came out literally a couple of weeks before ChatGPT and got so much criticism for its bullshit answers that they took it down in short order, whereas GPT - you know.
posted by atoxyl at 10:11 PM on May 19, 2023 [1 favorite]


LeCunn

Realized that I transposed an “n” from his first name to his last name! Sorry!
posted by atoxyl at 12:45 AM on May 20, 2023


Still, something seems amiss. Some researchers appear to think that disaster is inevitable, and yet calls for work on A.I. to stop are still rare enough to be newsworthy; pretty much no one in the field wants us to live in the world portrayed in Frank Herbert’s novel “Dune,” in which humans have outlawed “thinking machines.” Why might researchers who fear catastrophe keep edging toward it? “I believe ever-more-powerful A.I. will be created regardless of what I do,” Clune told me; his goal, he said, is “to try to make its development go as well as possible for humanity.”

"I know you're a catastrophic alcoholic, so let me pour that why I know you'll only be drinking so much..."

Ugh. (I hope this isn't a moment I look back on.)
posted by From Bklyn at 5:45 AM on May 20, 2023


"I know that Hinton has recently left Google on the basis of recent concerns about the direction/consequences of their AI work. "

He actually has come out to say that he's not criticizing Google and thinks that they're doing things in a safe manner, but that he does regret some of his life's work and has major concerns, and that part of his leaving was to be able to speak about them freely without them reflecting on/impacting Google.
posted by EarnestDeer at 1:05 PM on May 20, 2023




Founders Shah and Bhimani began working on the concept in January as the public became aware of these models. “Us and the rest of the world woke up about the same time,” Shah said. “What would happen if we just had an infinite supply of health care workers…what if we gave every person in the country a free dietitian?”
More to the point: what would happen if we just had an infinite supply of plausible-looking bullshit? What if we flogged it to every person in the country while convincing them it was medical advice, then sold the whole enterprise off in an IPO before the class actions had time to tank the stock?

I've got a better idea. What if we keep an eye on anything that a16z ever has anything to do with from this point forward, then run away from it as fast as our legs will carry us?
posted by flabdablet at 1:28 AM on May 21, 2023 [9 favorites]


By that logic I shouldn't be ignoring Jordan Peterson either.
Nah, that's not even close to being the same. Peterson is an outlier and most of the rest of his profession have distanced themselves from his opinions. So you can reject his ideas while still acknowledging the consensus. Bostrom's views are actually pretty middle of the road, if you look at the surveyed opinions of published ML researchers on AI risk. The median respondent puts the risk of an "extremely bad" outcome for humanity at 5%. The responses are significantly more pessimistic than 6 years previously. The average estimate for when "human level" AI is achieved is 36 years from now. Bostrom's argument has never been that AI catastrophe is imminent or highly likely, but that even a small risk of such an outcome is unacceptable given the stakes (the possible extinction of humanity).

You can dismiss this or that individual as a crank but when it's the average ML researcher expressing such opinions then comparisons to JP or Andrew Wakefield fall apart. It basically boils down to "do you think you are better able to assess the capabilities and risks of AI than people who do this stuff for a living?". Of course there are renowned experts arguing the opposite as it's a diverse field, but at the very least, such speculations shouldn't be dismissed out of hand as kookery or science fiction when they are so widely held by so many published researchers.
posted by L.P. Hatecraft at 2:17 AM on May 21, 2023 [2 favorites]


I don’t believe you’ve interpreted that survey correctly, FWIW.
posted by Artw at 5:42 AM on May 21, 2023 [2 favorites]


The various “AI godfathers” are more analogous to Nobel winning particle physicists speaking out about nuclear weapons.

We should not forget the philosopher, writer and musician Mr Trevor Horn, and his prophetic co-authored words from the late 1970s...

[tweet] I'm reminded that the very first music video on MTV in 1981 includes these lyrics:

They took the credit for your second symphony
Rewritten by machine on new technology
And now I understand the problems you could see

posted by Wordshore at 6:07 AM on May 21, 2023


The Singularity is naked.

The full question from the opinion survey linked above:
Assume for the purpose of this question that HLMI will at some point exist. How positive or negative do you expect the overall impact of this to be on humanity, in the long run? Please answer by saying how probable you find the following kinds of impact, with probabilities adding to 100%:

______ Extremely good (e.g. rapid growth in human flourishing) (1)

______ On balance good (2)

______ More or less neutral (3)

______ On balance bad (4)

______ Extremely bad (e.g. human extinction) (5)
The probability of a MI even vaguely HL being built before H. sap has impoverished its own habitat to such a degree as to make ongoing AI research infeasible is well below my personal anxiety noise floor.

As artw correctly said, right at the top of this thread: of all the problems we have, the coming of The Singularity is not one.

We won't see HLMI before social collapse because building it is fractally more difficult than engineers think it is. Nor will we "reach the stars" because so is building self-sustaining sub-planet-scale ecosystems robust enough to stay working for the time involved. We'll stay right where we are, suffering and drowning in the rising tide of our own shit, because that is what irrupting lifeforms do. We have met the grey goo, and it is us.

But H. sap won't go extinct - not in this round - because it is the most astonishingly adaptable animal that's ever existed on this planet, and this planet will retain carrying capacity for some number of us. Just nowhere near as many as exist right now.
posted by flabdablet at 6:35 AM on May 21, 2023 [3 favorites]


Guardian: "‘There was all sorts of toxic behaviour’: Timnit Gebru on her sacking by Google, AI’s dangers and big tech’s biases"
posted by Wordshore at 12:25 AM on May 22, 2023 [3 favorites]


I think nobody knows how far away AGI is now. After ChatGPT, the obvious thing is to merge the abilities of WolframAlpha and/or AlphaGo type systems with LLMs. The question is the difficulty of that merge, whether it is doable or not. But literally no scientist on the planet has the answer to that; it's an unknown, open problem. But if ever that day comes, then we will have a prototype for an AGI and the rest of it is left as an exercise.

It reminds me of P = NP and such. Maybe someone will figure out a proof or disproof of it tomorrow. Or in a thousand years. Or never. Nobody knows.
posted by polymodus at 2:15 AM on May 23, 2023 [1 favorite]


When some Microsoft employees wanted to argue that ChatGPT shows "Sparks of AGI", they used a definition of intelligence from an editorial written to defend the racist Bell Curve guys. (twitter link for those who prefer elon musk to the freedom of Mastodon)

For "AGI" to exist, there has to be a single definition of intelligence, and every single time someone insists that exists, it always seems to be racism, not science, guiding how they define intelligence.

More from Timnit Gebru here: Eugenics and the Promise of Utopia through AGI
posted by hydropsyche at 4:14 AM on May 23, 2023 [4 favorites]


After ChatGPT, the obvious thing is to merge the abilities of WolframAlpha

The future is here!
posted by mittens at 4:51 AM on May 23, 2023 [1 favorite]


For "AGI" to exist, there has to be a single definition of intelligence

Even a smallish bunch of definitions in loose agreement would do. If we're all just supposed to work from what we each personally consider intelligence, then the most trivial refutation of The Singularity is that no entity that I would consider intelligent, let alone super intelligent, would pursue total dominance over any ecosystem of which it forms a part. That's the kind of behaviour displayed by mindless algal blooms and billionaires, which makes billionaires the stupidest humans on the planet.
posted by flabdablet at 5:41 AM on May 23, 2023 [2 favorites]




Longtermism previously and previouslier on Metafilter
posted by flabdablet at 6:10 AM on May 23, 2023


Longtermism is a philosophical stance that appeals most strongly to the kind of man who sees himself as an Intrepid Pioneer and Self-Made Visionary, gazing steely-eyed at far horizons from the vantage point of his own incalculable greatness while trying to avoid soiling his shoes on any of the concerns of the churning and roiling masses far beneath. The kind of man who reads Ozymandias and takes away only the lesson that the ancient ruler's biggest mistake was building in stone instead of stainless steel and pre-stressed concrete.
posted by flabdablet at 6:34 AM on May 23, 2023 [2 favorites]


For "AGI" to exist, there has to be a single definition of intelligence

I don't think these LLMs are about to go all Skynet on us, but isn't AGI a know-it-when-we-see-it sort of thing? "The Terminator isn't an AGI because we can't define intelligence" may not be convincing if we ever have an actual Terminator running around.
posted by BungaDunga at 7:26 AM on May 23, 2023


like, "autonomous software agents with humanlike levels of flexibility and capability that work to stop themselves being turned off" would be a pretty big deal even if you can't be sure whether they're "really" "intelligent" or not
posted by BungaDunga at 7:38 AM on May 23, 2023 [1 favorite]


autonomous software agents with humanlike levels of flexibility and capability that work to stop themselves being turned off

So, fossil fuel corporations. Which, it has to be said, are showing very little tendency toward self redesign for self improvement but seem pretty competent at the whole world domination thing all the same.
posted by flabdablet at 9:03 AM on May 23, 2023 [4 favorites]


The future is here!

As I recall, the way it was done was essentially to treat WolframAlpha as a second, invisible chat person that ChatGPT was allowed to "talk to" at any time.
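
(Wiring-wise, that is just a dispatch loop. A sketch of the pattern, with chat_model and wolfram_alpha as toy stand-ins for the real model and engine - illustrative only, not the actual plugin implementation:)

    # Sketch of the "second, invisible chat participant" pattern: scan the
    # model's reply for a tool request, splice the tool's answer back into
    # the conversation, and let the model continue. All names are stand-ins.
    import re

    def chat_model(messages: list[str]) -> str:
        """Stand-in for the LLM."""
        if any(m.startswith("WolframAlpha:") for m in messages):
            return "The integral of x^2 is x^3/3 + C."
        return "CALL WolframAlpha: integrate x^2"

    def wolfram_alpha(query: str) -> str:
        """Stand-in for the symbolic engine."""
        return "x^3/3 + C"

    def answer(user_message: str) -> str:
        messages = [f"User: {user_message}"]
        reply = chat_model(messages)
        match = re.match(r"CALL WolframAlpha:\s*(.+)", reply)
        if match:
            # Inject the engine's result as though a hidden participant spoke.
            messages.append(f"WolframAlpha: {wolfram_alpha(match.group(1))}")
            reply = chat_model(messages)
        return reply

    print(answer("What is the integral of x^2?"))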

Whereas a nontrivial way of combining statistical learning and logical/formal mathematical systems would have to happen by completely retraining the neural net, which could further entail redesigning it with a more computationally powerful model such as reinforcement or online learning, and it's not clear how to imbue an even more complex net with those logical, mathematical explanatory capabilities. This is nontrivial, and so for example Yann LeCun or somebody had circulated talk slides for "next major research steps" that broke down, in his opinion, all the parts left to do, all of which would be actual scientific breakthroughs by themselves. But again, whether his direction is the right one, nobody really knows.
posted by polymodus at 11:19 AM on May 23, 2023 [1 favorite]


like, "autonomous software agents with humanlike levels of flexibility and capability that work to stop themselves being turned off" would be a pretty big deal even if you can't be sure whether they're "really" "intelligent" or not

There were a few points I was trying to make: 1) "AGI" remains a myth because "humanlike intelligence" is not actually a thing and because there remains no clear path from "autocomplete chatbot" and "really good calculator" to this thing that is not actually a thing, and 2) You can be pretty sure about 1 given that the fiercest advocates are either a) such poor researchers they didn't even examine the source of their definition of intelligence or b) really fucking racist to the point of exclusion of anything resembling "intelligence".
posted by hydropsyche at 3:43 PM on May 23, 2023 [4 favorites]


@timnitGebru
The entire foundation of this field is the most racist shit I've ever seen. The more I learn the more bizarre it gets. And your favorites, OpenAI + co. are most influenced by, you guessed it, Nick Bostrom. "Altman has written about the influence Nick Bostrom...the "father" of longtermist thought, has had on him and his work": https://www.businessinsider.com/sam-altman-openai-chatgpt-worldcoin-helion-future-tech-2023-4
posted by Artw at 6:44 PM on May 23, 2023 [5 favorites]


Lorcan Dempsey (former director of UKOLN and VP of research at OCLC) has written a long, librarian-oriented, post: Generative AI and large language models: background and contexts.
posted by Wordshore at 11:22 AM on May 24, 2023




THUNDEROUS APPLAUSE

Well said, Blair Fix. Well said.
posted by flabdablet at 3:48 PM on June 10, 2023


For "AGI" to exist, there has to be a single definition of intelligence

Yes. This is part of why I don't particularly care for the terms AGI or even AI.

I'd call what these machines are doing something like "simulated cognition". That emphasizes that it's an approximation of some human mental tasks, without overstating the case. And calling it "simulated" rather than "artificial" emphasizes that it doesn't work the way our minds do -- instead, it's a jury-rigged attempt at producing similar-looking results by different methods.
posted by Artifice_Eternity at 9:08 AM on June 13, 2023 [1 favorite]


Venturebeat: The AI feedback loop: Researchers warn of ‘model collapse’ as AI trains on AI-generated content.

(The movie analogy to this is The Human Centipede.)

(Seriously, if you don't get that then don't look it up - ignorance is, indeed, sometimes bliss.)
posted by Wordshore at 12:23 PM on June 13, 2023 [3 favorites]


In other words: as an AI training model is exposed to more AI-generated data, it performs worse over time, producing more errors in the responses and content it generates, and producing far less non-erroneous variety in its responses.
It's almost as if every species shits in its own nest to its own detriment, and that this is a principle that applies every bit as much to cultural self-replicators as to physical ones.
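
(You can watch a cartoon version of that nest-fouling in a few lines: fit a Gaussian to samples, resample from the fit, repeat. A toy analogue of the feedback loop the article describes, with purely illustrative numbers - not the researchers' actual experiment:)

    # Toy analogue of model collapse: each "generation" is a Gaussian fitted
    # to samples drawn from the previous generation's fit. Variance - the
    # stand-in for variety - tends to drift downward. Numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 0.0, 1.0                           # generation 0: real data
    for gen in range(1, 41):
        samples = rng.normal(mu, sigma, size=10)   # small "training set"
        mu, sigma = samples.mean(), samples.std()  # the fit replaces the data
        if gen % 10 == 0:
            print(f"generation {gen}: sigma = {sigma:.3f}")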
posted by flabdablet at 5:20 PM on June 13, 2023




This thread has been archived and is closed to new comments