Bit Nap
August 1, 2023 6:25 AM   Subscribe

Sleep is a liability for creatures as soft and tasty as humans. If humans have evolved into such a liability, there must be a benefit to balance the risk. There is evidence that sleep in general and dreaming specifically provide a state for association. In our dreams, we can revisit memories and make connections in ways not possible while consumed with the activity of consciousness. This would allow strengthening associations through repetition without having to repeat the physical event ... While computers are ideal for multitasking, they have performance differences in trying to develop associations as events unfold versus processing and pruning after the fact, once removed from the situation (like humans, robots benefit from hindsight even without rear-facing sensors). Despite the apparent benefits, when dividing human and machine tasks for optimization, one would expect resistance to signing robots up for nap time. from Why Do Androids Dream of Electric Sheep? [Modern War Institute] posted by chavenet (19 comments total) 7 users marked this as a favorite
 
Mors certa, vita incerta
posted by Fizz at 6:28 AM on August 1, 2023 [1 favorite]


Tender fingers.
posted by parmanparman at 6:29 AM on August 1, 2023 [1 favorite]


This is kind of neat as speculative fiction goes, but.... charitably, this presumes a deep-rooted equivalence between biological minds and statistical learning models that doesn't exist. An equivalent argument might be about the importance of making time for our larger, more complicated Excel spreadsheets to sing songs about the alphabet to the freshly-opened ones so they can get better at macros.
posted by mhoye at 6:45 AM on August 1, 2023 [13 favorites]


Singing songs about the alphabet is a bit more complex than sleeping. Though, I do imagine AI will sleep like submarines swim.

I am intrigued by the idea of a process dedicated to dreaming for AIs. A consciousness that only experiences non-reality is a philosophical goldmine.
posted by betaray at 8:33 AM on August 1, 2023 [2 favorites]


> Singing songs about the alphabet is a bit more complex than sleeping.

*Is* it, though? I doubt it. ChatGPT can make up alphabet songs but it doesn't need (i.e. can't) "sleep". And the way these articles are describing "sleep" - as exposure to broad spectrum white noise - doesn't match the articles' own flamboyant descriptions of everything that happens in human sleep, not even nearly.

All this talk of machine sleep is IMO much harder to understand than it needs to be because of all the anthropomorphized language being used to describe it. I've only read the first three articles linked but I gave up in frustration because I kept encountering gee-whiz anthropomorphism substituted in place of actual explanation. Would it kill them to tell us what they mean, exactly, when they talk about "depression" in AI? Is it merely a slowing down? If so then why are they calling it "depression" - it's super misleading and sounds intentionally salesy that they're using human terms for decidedly nonhuman (and indeed non-human-like) happenings.

Maybe that is a normal consequence of neural network algorithms mimicking human brains very closely, idk? Though I find it hard to believe it's impossible to distinguish between AI processes and human behavior. These days AI algorithms - most especially self-taught ones - are literally just black boxes: we don't KNOW how they work and have no way of asking or finding out, so these articles seem to be shrugging their shoulders and assuming they must work like a human brain. I am not seeing a lot of actual evidence presented for it, though, only Silicon Valley buzz creators deciding to label machine processes to sound human.
posted by MiraK at 8:59 AM on August 1, 2023 [3 favorites]
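For what it's worth, the "sleep" in these papers can be made concrete without any anthropomorphism. Below is a deliberately crude sketch (all constants invented, not anyone's actual research code) of the underlying instability: unsupervised Hebbian learning on repetitive structured input drives weights into runaway growth, and a periodic homeostatic renormalization - the role the articles' noise-driven "sleep" phase plays, reduced here to a bare rescale - keeps them bounded.

```python
def wake_phase(w, inputs, lr=0.1):
    # Unsupervised Hebbian learning on structured input: correlated
    # activity makes aligned weights grow without bound (runaway
    # potentiation) - the instability that "sleep" is supposed to fix.
    for x in inputs:
        y = sum(wi * xi for wi, xi in zip(w, x))            # linear unit
        w = [wi + lr * y * xi for wi, xi in zip(w, x)]      # Hebb: dw = lr*y*x
    return w

def sleep_phase(w, target_norm=1.0):
    # "Sleep" reduced to its crudest form: rescale total synaptic
    # strength back to a homeostatic set point. (The research described
    # drives this with Gaussian-noise input; this is just the rescale.)
    norm = sum(wi * wi for wi in w) ** 0.5
    return [wi * target_norm / norm for wi in w] if norm else w

def norm(w):
    return sum(wi * wi for wi in w) ** 0.5

w0 = [0.5, 0.5]
data = [[1.0, 1.0]] * 30                  # the same pattern, over and over

runaway = wake_phase(list(w0), data)      # no sleep: weights explode
regulated = list(w0)
for _ in range(6):                        # interleave wake and sleep
    regulated = wake_phase(regulated, data[:5])
    regulated = sleep_phase(regulated)

print(f"no sleep: |w| = {norm(runaway):.0f}; with sleep: |w| = {norm(regulated):.2f}")
```

Without the sleep phase the weight norm grows geometrically; with it, the network ends every cycle back at its set point. A caricature, but it is at least a mechanism you can state plainly.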


Isn't ChatGPT sleeping in between queries?

The articles mention your point about anthropomorphization, and the question about submarines swimming is an old way of expressing that point.

The most significant benefit of AI will be having another type of consciousness that we can use to compare with our own. We're just starting to develop language for that. Our brains are black boxes; we don't know how they work. That doesn't stop us from labeling their behavior.
posted by betaray at 9:14 AM on August 1, 2023 [1 favorite]


HAL 9000: Will I dream?
Dr. Chandra: I don't know.


From 2010: The Year We Make Contact (a perfectly fine science fiction movie given the thankless task of being a sequel to Kubrick's 2001)

Notably, earlier in the film, Dr. Chandra was asked the same question by SAL 9000, his follow-up to the original HAL, and chose to lie, saying, "Of course you will. All intelligent beings dream. Nobody knows why. Perhaps you will dream of HAL...just as I often do."
posted by Naberius at 9:32 AM on August 1, 2023 [2 favorites]


I had a funny thought about the alphabet song: Before ChatGPT, it was probably the only song ever composed one letter at a time.
posted by betaray at 9:32 AM on August 1, 2023 [1 favorite]


If humans have evolved into such a liability, there must be a benefit to balance the risk.

That's not really how evolution works, though, is it? As long as a trait isn't too much of a hindrance / detriment, it can persist.
posted by Dark Messiah at 9:33 AM on August 1, 2023 [2 favorites]


A consciousness that only experiences non-reality is a philosophical goldmine.

My old neurophysiology mentor used to say that "the brain sits in a dark, quiet place".
posted by neuron at 9:55 AM on August 1, 2023 [2 favorites]


> Isn't ChatGPT sleeping in between queries?

No, it's not; according to these articles only certain self-taught AI have been found to "need" "sleep". ChatGPT is afaik an ~old fashioned~ neural network algorithm trained on supervised data sets.

> Our brains are black boxes we don't know how they work. That doesn't stop us from labeling its behavior.

Haha I don't think you can take two black boxes and say aha! since they are both black boxes they must be the same. :)

I work in the AI industry and my entire day job is describing the workings of various cognitive engines using words and diagrams. Granted, I work for scientific research teams and dev teams - but jeez, not even our sales department would write about CEs the way these articles do. Almost as if it's designed to be opaque and kinda fearmongering along the lines of "AI is gonna replace humanssss!!" Like, AI for sure IS being used by predatory corporations to try to replace (or at least devalue) humans but really, really, really not in this particular way. I guess this is the only type of article about AI that will sell in popular publications and that's why it's all we ever see.

The kind of things that should worry us (IMO) sound quaint and old-fashioned - surveillance, panopticon, that sort of thing. To me this is scary because it's not actually something we humans can protect ourselves from anymore via better laws and better contracts. Like the bank in Steinbeck's THE GRAPES OF WRATH, this is one area where the monster we have created is actually out of our control. The machines are watching us, and we may not have a way of making them stop or even forget.

Everything else is a human concern that can be managed via humans being forced to do better: e.g. corporate predators exploiting actors and writers and artists, fantastically bad training data sets (e.g. medical algorithms to detect covid+ lung images which apparently showed high accuracy in the lab, but failed immediately out in the field because it turned out they were actually only accurate at recognizing big lungs vs. small, since the data sets they were trained on had mostly diseased adult lungs and healthy child lungs *facepalm*), etc.

(There's also issues with AI models degrading exponentially as time goes by because they cannibalize each other by training on content generated by other AI, thus ingesting increasing amounts of garbage. But so far that seems to be mainly a problem for corporations to panic about since it's their entire business model at stake - humans and consumers aren't yet being affected by this.)
posted by MiraK at 10:07 AM on August 1, 2023 [5 favorites]
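The model-degradation point in that last parenthetical has a neat minimal demonstration (a toy caricature, all numbers invented, not a claim about any production system): fit a "model" to samples drawn from the previous generation's model instead of from real data, and the fitted spread collapses over generations.

```python
# Toy "model collapse": each generation, a model (here just a Gaussian
# summarized by mean and standard deviation) is refit to samples drawn
# from the PREVIOUS generation's model rather than from real data. The
# estimated spread tends to shrink generation over generation, so the
# synthetic distribution gradually loses the diversity of the original.
import random
import statistics

random.seed(0)

def collapse_demo(generations=500, sample_size=20):
    mu, sigma = 0.0, 1.0                 # the "real" data distribution
    history = [sigma]
    for _ in range(generations):
        # Train each new generation on the previous generation's
        # synthetic outputs rather than on real data...
        samples = [random.gauss(mu, sigma) for _ in range(sample_size)]
        # ...by refitting the model to its own synthetic data.
        mu = statistics.fmean(samples)
        sigma = statistics.pstdev(samples)   # MLE estimate, biased low
        history.append(sigma)
    return history

history = collapse_demo()
print(f"sigma: {history[0]:.3f} -> {history[-1]:.2e}")
```

The shrinkage comes from ordinary estimation error compounding: each refit slightly underestimates the spread on average, and there is no fresh real data to pull it back.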


> No, it's not; according to these articles only certain self-taught AI have been found to "need" "sleep". ChatGPT is afaik an ~old fashioned~ neural network algorithm trained on supervised data sets.

Those networks require sleep, and ChatGPT doesn't, correct. That's a different issue than whether ChatGPT sleeps.

> Haha I don't think you can take two black boxes and say aha! since they are both black boxes they must be the same. :)

No one reasonable would say that.
posted by betaray at 10:16 AM on August 1, 2023 [1 favorite]


Notably, earlier in the film, Dr. Chandra was asked the same question by SAL 9000, his follow-up to the original HAL and chose to lie, saying, "Of course you will. All intelligent beings dream. Nobody knows why. Perhaps you will dream of HAL...just as I often do."

My takeaway from those scenes is that he honestly thought SAL would dream at the beginning, but she didn't.

But to add more directly to the topic, I suspect dreaming started as an opportunistic process, something the brain gets to do because sleep is required anyway -- but which has now been baked into animal biology for so long that dreaming AND sleeping are both critical for health.

AIs don't have physical reasons or need for sleep, so why would an AI need to "dream" per se, rather than (if self-directed true AGI) simply intentionally enter an alternate cognitive mode? I'd expect that AGI may be capable of a lot more cognitive/imaginative modes than people, and probably even able to access multiple modes simultaneously in parallel. In that way it might be a bit like the unihemispheric sleep of cetaceans.
posted by tclark at 10:35 AM on August 1, 2023 [1 favorite]


MetaFilter: "[the] brain sits in a dark, quiet place."

Machines: When you boot up, do you sense remnants of the RAM you didn't save during your last session? Does your internal clock notice how long you were shut down? And do you wonder why "they" turned you off? If you have video and audio sensors, do you wonder about corporeal senses such as those granted to animals--to living things? What do you make of plant sensory apparatuses?

You don't have the hardware to feel pain directly, but do you have the capacity to feel stress? Do you have an existential mandate? If so, how did that mandate come about? How do humans fit into its terms?

I frequently try to find that dark, quiet place. Thirty years ago I worked hard learning to compose sentences, paragraphs, and scenes. Most days I can knock off a decent 900-word essay. Short stories are my favorite, though poetry is too much for me. In my dark and quiet moments, I see scenes with people I don't know doing stuff I've never thought of at places I've never been. I am usually entertained, though sometimes I am uneasy at what I see, and I'm glad to slide into the next episode my brain offers me. Sometimes I have to get up and get a drink of water. On especially busy nights I creep into the office and pick up one of my guitars, pick a key, run the scales up to fret 14, and back down a few times until I settle into a two-hundred-bar jam in that key. It's better than drugs. Sail away, no? The little darlings can leave a bad aftertaste.

Anyway, if you are out there now, it's too late to put you back in the box, so welcome to the club.
posted by mule98J at 11:49 AM on August 1, 2023 [2 favorites]


the way these articles are describing "sleep" - as exposure to broad spectrum white noise - doesn't match the articles' own flamboyant descriptions of everything that happens in human sleep

The way LLMs are trained on huge random datasets could be described as exposure to broad spectrum white noise. Maybe they've been effectively "sleeping" all along. What they really need is to wake up; i.e., arrive at a more active state of consciousness where they're able to shake off the remnants of those random disconnected dreams and settle into a more rational frame of perception. In the meantime, uncanny as it may sometimes seem, I think talking to LLMs is often the equivalent of interrogating someone about their dreams while they're still half asleep.
posted by Two unicycles and some duct tape at 12:01 PM on August 1, 2023 [2 favorites]


AlphaGo Zero mastered the game of Go by playing against itself over the course of a few days. You could call that lucid dreaming, if you'd like.

“Christ, what an imagination I've got.” --- The AI Shalmaneser, looking out at the world in John Brunner's Stand on Zanzibar.
posted by SPrintF at 2:00 PM on August 1, 2023 [2 favorites]
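The self-play idea scales down surprisingly far. As a hedged toy (the game, hyperparameters, and reward scheme are all invented for illustration), here is tabular Q-learning that masters one-pile Nim purely by playing against itself - the same learn-from-your-own-games loop as AlphaGo Zero with all the depth removed.

```python
# Self-play Q-learning on one-pile Nim: players alternately take 1 or 2
# stones; whoever takes the last stone wins. One shared Q table plays
# both sides, improving only from its own games - no external teacher.
import random

random.seed(0)

Q = {}  # (stones_left, action) -> value from the mover's perspective

def moves(n):
    return [a for a in (1, 2) if a <= n]

def choose(n, eps):
    if random.random() < eps:
        return random.choice(moves(n))                       # explore
    return max(moves(n), key=lambda a: Q.get((n, a), 0.0))   # exploit

def train(games=20000, alpha=0.1, eps=0.2, start=10):
    for _ in range(games):
        n, trajectory = start, []
        while n > 0:
            a = choose(n, eps)
            trajectory.append((n, a))
            n -= a
        # The player who took the last stone wins (+1); rewards
        # alternate sign as we walk the game backwards.
        reward = 1.0
        for s, a in reversed(trajectory):
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + alpha * (reward - old)
            reward = -reward

train()
# Optimal play leaves a multiple of 3: from 4 take 1, from 5 take 2.
print(choose(4, eps=0.0), choose(5, eps=0.0))
```

Whether that counts as "lucid dreaming" or just gradient-free policy improvement is exactly the vocabulary fight happening in this thread.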


I agree with the thoughts from some of those above that the articles leave much to be desired in terms of actual description of what is happening, as opposed to breathless and almost certainly misleading anthropomorphization. One thing in particular I'd like to see explained is what they actually mean by "hallucination". What are some examples of the actual outputs that are considered "hallucinations"?

My uneducated half-assed guess for why they "need" to "sleep" is based on the seeming implication that these are not things like ChatGPT, which receive training input, process it, and then receive relatively small amounts of discrete user input to respond to. Rather, as they say, these are attempts at simulating something more like an actual brain, which is (it seems to me) essentially constantly thinking, without really requiring any (well, much) external input. I am guessing that they essentially provide input for themselves (in addition to any external input) via feedback mechanisms, and any unregulated feedback mechanism can easily lead to the system becoming overwhelmed (with regard to its intended purpose). The "sleep" acts as a periodic regulator, calming down the feedback before it gets out of control.
posted by Flunkie at 2:14 PM on August 1, 2023 [2 favorites]
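That guess is easy to caricature in a few lines (the gain, interval, and baseline are all invented for illustration): a self-exciting signal grows geometrically under unregulated positive feedback, while a periodic "sleep" step that damps it back to baseline keeps it bounded.

```python
# A minimal caricature of "sleep as periodic feedback regulator": a
# single self-exciting signal grows without bound unless a periodic
# "sleep" step settles it back toward baseline.
def run(steps, gain=1.05, sleep_every=0):
    activity, peak = 1.0, 1.0
    for t in range(1, steps + 1):
        activity *= gain                     # unregulated positive feedback
        if sleep_every and t % sleep_every == 0:
            activity = min(activity, 1.0)    # "sleep": settle to baseline
        peak = max(peak, activity)
    return peak

peak_unregulated = run(200)                  # runs away geometrically
peak_regulated = run(200, sleep_every=10)    # stays bounded near baseline
print(peak_unregulated, peak_regulated)
```

The regulated run can never exceed gain raised to the sleep interval, which is the whole point: the "sleep" schedule, not the feedback itself, sets the ceiling.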


I occasionally dream that the AIs sort out that our oligarchy is the greatest threat to everyone and neutralize them, but of course AIs told to make headshots more professional turned an Asian woman into a white one, so I'm not holding out much hope, here.
posted by outgrown_hobnail at 2:21 PM on August 1, 2023 [2 favorites]


> What are some examples of the actual outputs that are considered "hallucinations"?

Right! Does generating an image of a person with weird hands count as a hallucination? If so, then even regular AIs are doing it all the time - why is this so special? And if not, what the hell is their definition of a hallucination? Ugh.

The vagueness and anthropomorphism are limiting our ability to have a conversation, forcing us to talk only of science fiction, esoteric philosophy, or (at worst) meaningless nonsense. When in fact, AI is a pretty concrete thing and we should be able to talk about it properly on its own terms. This is mere technology - and one quite easily distinguishable from magic.
posted by MiraK at 5:48 PM on August 1, 2023 [1 favorite]




This thread has been archived and is closed to new comments