The brain is not too warm or wet for consciousness
January 26, 2024 8:31 AM   Subscribe

New research suggests that consciousness is a quantum wave that passes through the brain's microtubules. My humanities-steeped brain is having trouble parsing this, but it sounds like there's some work that suggests an outsider theory about how consciousness works, involving quantum superpositioning to find the most efficient energy transfer, has gotten some proof. And the study involves... tryptophan? Isn't that the thing that makes you sleepy at Thanksgiving? I'm curious what the smart people here on MF think. Is this what the journalists at Popular Mechanics think it is?
posted by heyitsgogi (185 comments total) 21 users marked this as a favorite
 
incredibly dubious, imo
posted by ryanrs at 8:36 AM on January 26 [10 favorites]


Ungated
posted by hippybear at 8:42 AM on January 26 [3 favorites]


I find Penrose’s case against computationalism relatively persuasive, but I’ve never seen how quantum effects really help with consciousness. It seems too much like ‘here’s two weird things, maybe one explains the other’. But perhaps my brain is too warm and wet.
posted by Phanx at 8:43 AM on January 26 [31 favorites]


Excuse me, can I go home early? My brain feels dry
posted by AzraelBrown at 8:46 AM on January 26 [30 favorites]


lights in SINGLE microtubules -> consciousness is quantum?? popular mechanics continues its slide into gibberish.

While this is a long way from proving the Orch OR theory, it’s significant and promising data. Penrose and Hameroff continue to push the boundaries, partnering with people like spiritual leader Deepak Chopra to explore expressions of consciousness in the universe that they might be able to identify in the lab in their microtubule experiments. This sort of thing makes many scientists very uncomfortable.

SHOW ME WHERE PENROSE (seriously accomplished renaissance - if ego-fueled - scientist) PARTNERS WITH CHOPRA (loathsome snake oil bullshit woo merchant of the highest order, waster of trees, waster of brains)??????????

GTFO

STFU. flagged for ridiculousness
posted by lalochezia at 8:57 AM on January 26 [27 favorites]


Well, if Popular Mechanics says it's true, it must be true.
posted by pracowity at 8:57 AM on January 26 [4 favorites]


"This sort of thing makes many scientists very uncomfortable."

'This sort of thing' meaning 'hogwash', I assume. But hey, move fast and break things, right? You're not gonna let a bunch of stuffy scientists tell you what to think are ya? We're cutting study hall and we're gonna smoke a CIGARETTE!
posted by Sing Or Swim at 9:02 AM on January 26 [11 favorites]


we can't stop here, this is bullshit country
posted by seanmpuckett at 9:04 AM on January 26 [69 favorites]


Oh, okay. NOW I know why this is tickling my brain -- I went down a rabbit hole of Penrose interviews and material on YouTube a couple of months ago. Here he is with Lex Fridman from a few years ago [1h30m], probably the more lucid of the conversations. He's not really a crank, but he is exploring a bit of a crank thought stream. At least he won't be legally persecuted for going a bit wonky, as happened to Reich.
posted by hippybear at 9:06 AM on January 26 [5 favorites]


Wikipedia has an article on this theory: Orch OR

(dubiousness intensifies, now with quantum gravity)
posted by ryanrs at 9:08 AM on January 26 [1 favorite]


Well it COULD be true. We'll wait to find out.
posted by Czjewel at 9:13 AM on January 26


I remember reading about this in Penrose’s Emperor’s New Mind book 30 years ago.

Medical science still doesn’t know how anesthesia works, so keeping this in the ‘maybe’ pile LOL

Along with his rescaling conjecture that proposes the heat death of the universe leads into a rebirth by simply making the very large scale resize to the very small again... so the new universe takes off from there
posted by torokunai at 9:13 AM on January 26 [2 favorites]


SHOW ME WHERE PENROSE (seriously accomplished renaissance - if ego-fueled - scientist) PARTNERS WITH CHOPRA (loathsome snake oil bullshit woo merchant of the highest order, waster of trees, waster of brains)??????????


How Consciousness Became the Universe: Quantum Physics, Cosmology, Neuroscience, Parallel Universes
by Roger Penrose and Brandon Carter and Deepak Chopra
Published in 2017.
posted by hippybear at 9:20 AM on January 26 [9 favorites]


This is Nobel disease nonsense.
posted by mr_roboto at 9:27 AM on January 26 [10 favorites]


You are excused AzraelBrown. Anyone else? GET THEM!!!
posted by Windopaene at 9:30 AM on January 26 [1 favorite]


I read "The Emperor's New Mind" about 33 years ago, when it came out, and thought it was well dodgy then -- but I really hit my throw-book-at-wall moment reading his sequel, "Shadows of the Mind", where a couple of chapters in he suddenly asserts out of nowhere that there's no need to define consciousness in order to prove that computing machines don't have it.

Excuse me?
posted by cstross at 9:31 AM on January 26 [19 favorites]


My brain feels dry

Hey - maybe this explains hangovers. Because if my brain is on a different plane of consciousness in that period... that explains a lot.
posted by JoeZydeco at 9:32 AM on January 26 [2 favorites]


This sort of thing makes many scientists very uncomfortable.

'Makes people uncomfortable' is a pretty definite indication it's bullshit. It tries to turn a definite ("Chopra is a fraud and you aren't doing your argument any favors by palling around with him") into something subjective ("Those old-thinking scientists are outside of their comfort zone with this innovative new way of thinking so we should pity them"). Standard woo-peddling, teach-the-controversy-shilling dogwhistle.
posted by kjs3 at 9:36 AM on January 26 [2 favorites]


What is the observable evidence that supports this model? What experiments can be devised or conducted to investigate this idea? I don’t see any.
posted by interogative mood at 9:37 AM on January 26 [5 favorites]


Re: Chopra and Penrose

Sir Roger Penrose, Deepak Chopra Headed to UA for Consciousness Conference's 20th Anniversary

But, TBF, I think this may simply have been people appearing on the same stage, and I think the book that has them both listed as co-authors may be a collection of independent essays, not a collaboration as such.

I am not going to read the book to find out.
posted by Ayn Marx at 9:37 AM on January 26 [2 favorites]


Have we asked the authors if they agree with the Rust Cohle theory concerning time being a flat circle?
posted by Abehammerb Lincoln at 9:44 AM on January 26 [7 favorites]


OrchOR might be true (I don't think it is) but I am not sure it does what a lot of people want it to do, that is, provide some sort of satisfying explanation for what's going on. If anything it makes the problem worse.

Let's suppose that "consciousness is quantum" (whatever that means). Now what? Mystics like to brand this idea as somehow enlightening, but I don't see how it really gets you anywhere. Maybe it's true and there's a satisfying explanation via quantum mechanics somewhere but if there is we are nowhere close to finding it.
posted by BungaDunga at 9:52 AM on January 26 [2 favorites]


Well, if Popular Mechanics says it's true, it must be true.

certainly popular.
posted by philip-random at 9:52 AM on January 26 [1 favorite]


People looooooooooooove to throw around "quantum" to make BS sound legit.
posted by grumpybear69 at 9:53 AM on January 26 [10 favorites]


The Rust Cohle theory is

*shotguns beer*

*lights half-crumpled cigarette*

*inhales slowly*

beyond question. This wet/dry thing, it was always a long shot. This isn't just superpositioning. This is a wave.

*crushes cigarette on table*

And it's coming for you.
posted by cupcakeninja at 9:53 AM on January 26 [12 favorites]


This is so weird and sad and dumb. It's like saying, "The brain is 40% water, and is conscious...thus, everything that contains water shares in that consciousness." There is so much interesting work going on in consciousness...and this just is not it. Because it doesn't bother to answer the question of how consciousness is generated. It just pushes it back a level: Consciousness isn't a function of brains--or indeed of neurons--but of tubes inside the neurons! Okay, fine, but like...how? That has to be answered, before you start speculating that the entire cosmos is somehow conscious.
posted by mittens at 9:59 AM on January 26 [5 favorites]


Actual experiments are not greatly supportive of this theory. From physicsworld, 2022:
In fact, the researchers work out that to collapse the wavefunction in around 0.025 s, a whopping 10^23 tubulins would need to make up the coherent state. But as they point out, there are reckoned to be only 10^20 tubulins in the whole brain (about 10^9 in each neuron). “These considerations seem to rule out tubulin separation at the level of the atomic nuclei,” they say.

Incidentally, I also read The Emperor's New Mind when it came out and my Dad (a nuclear physicist) picked it up, read it and got almost as angry as he did at Maradona's hand-of-god goal a decade earlier - "misuse of physics" was one of the kindest things he said about it.
posted by thatwhichfalls at 9:59 AM on January 26 [16 favorites]
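[Editor's note: the orders of magnitude in the physicsworld estimate quoted above are easy to sanity-check. A minimal sketch, taking the figures from the quote; the neuron count of ~10^11 is my own assumption, inferred from the quote's per-neuron and whole-brain numbers.]

```python
# Back-of-envelope check of the quoted physicsworld figures.
needed = 10**23          # tubulins required for a ~0.025 s collapse (from the quote)
per_neuron = 10**9       # tubulins per neuron (from the quote)
neurons = 10**11         # ~100 billion neurons: a standard rough figure, my assumption
available = per_neuron * neurons   # whole-brain tubulin count, ~10^20

print(f"available ~{available:.0e}, needed ~{needed:.0e}")
print(f"shortfall: {needed // available}x too few tubulins")
# -> available ~1e+20, needed ~1e+23
# -> shortfall: 1000x too few tubulins
```

Even granting the theory its own numbers, the coherent state would need roughly a thousand times more tubulins than the whole brain contains, which is the point of the quoted passage.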


quantum mechanics doesn't let souls back in through the backdoor, so to speak. It's all just materialism, quantum mechanics exposes the materials as weird and nonlocal but it can't do what mystics seem to want it to- it doesn't provide some sort of scientific hook to hang a soul on. It's just wave functions and complex numbers and uncertainty, it's not actually the sort of magic you can build a soul out of.
posted by BungaDunga at 9:59 AM on January 26 [18 favorites]


Shorter Penrose: consciousness is not computation! It's a series of tubules!
posted by jedicus at 10:07 AM on January 26 [8 favorites]


Even panpsychism only gets you so far, "everything is conscious" doesn't explain anything about the ways humans (and, yes, other animals) seem to be different than everything else. Why are we so good at our particular kind of consciousness? And the answer to that probably comes down to the same questions we have always had about how brains work, and we are back to square one.

(Ditto "nothing is conscious, we are all just self-illusions"- that doesn't provide a satisfactory explanation for how we are so much more self-deluded than, say, our appendix is. Our appendix isn't hanging around with a sense of self but we are!)
posted by BungaDunga at 10:09 AM on January 26 [3 favorites]


Eventually we’re going to have to accept we are just ChatGPT implemented in meat.
posted by interogative mood at 10:20 AM on January 26 [6 favorites]


We really need a better mechanism for revoking PhDs and Nobel prizes.
posted by whm at 10:30 AM on January 26 [2 favorites]


Jonathan Livingston Seagull has entered the chat
posted by CynicalKnight at 10:36 AM on January 26 [7 favorites]


I think quantum biology is neat, though I am too smooth-brained to appreciate the subtleties. Since life evolved in a quantum world, it'd be silly to think evolution decided to shun any non-classical effect that gave an organism an advantage. But I also don't know how to distinguish "cool hack" from "essential technique". Maybe the brain uses a quantum trick to multiply matrices a little faster, I dunno.
posted by credulous at 11:03 AM on January 26


New research suggests that consciousness is a quantum wave that passes through the brain's microtubules.

I bet it doesn't.

I am very sympathetic to Penrose's complaints about the reductionist/computationalist accounts of consciousness, but OrchOR doesn't do a better job of explaining phenomena. I am sure that quantum mechanics plays some role in the function of consciousness, just because quantum mechanics plays a role in most things, but the idea that consciousness specifically relies on maintaining quantum superpositions just seems, like Phanx pointed out, to be just answering one mystery by pointing to another.
posted by The Manwich Horror at 11:04 AM on January 26 [4 favorites]


though I am too smooth-brained to appreciate the subtleties

Keep up, we’re calling it “dry-brained” now
posted by ejs at 11:26 AM on January 26 [12 favorites]


A real "dihydrogen monoxide" headline there.

Your Very Own Consciousness Can Interact With the Whole Universe

Well ... I mean, yeah. I'm doing it right now. So are you. The reach is the problem.
posted by Countess Elena at 11:27 AM on January 26 [12 favorites]


Nah, you don't want to revoke Penrose's prize. The work he did was worthy. We just have to stop venerating everything a prize-winning physicist says as being worthy of study (or indeed of making any sense whatsoever).

This old SMBC is still relevant.
posted by nat at 11:43 AM on January 26 [20 favorites]


For context, I am a physicist, and Mr. Nat (an information theorist) and I have a long standing argument about whether human thought in any way relies on quantum behavior (I'm mildly pro, he's very strongly against). Plus I've definitely known my share of physicists who, um, get a bit long in the teeth.
posted by nat at 11:46 AM on January 26 [5 favorites]


We're made of atoms so there are likely to be subsystems in our biology that rely on quantum effects. It may be what makes photosynthesis efficient. That doesn't have any more meaning than it happens to be a natural effect that evolution harnessed. Anything else is just woo.
posted by CheeseDigestsAll at 11:51 AM on January 26 [4 favorites]


Because if my brain is on a different plane of consciousness in that period... that explains a lot.

I mean if we're going to go full-quantum on this...chances are that it may not even be a brain at that time.

While contained in the skull, the brain is simultaneously wet and dry.
posted by rhizome at 11:55 AM on January 26 [2 favorites]


Eventually we’re going to have to accept we are just ChatGPT implemented in meat.

ChatGPT continuously retraining itself in response to sensory data coming from its environment, instancing reinforcement learning simulations on the fly, populating those with agents for predicting social outcomes, and predicting agent-state in a recursive (non-programmers: hall of mirrors) simulation, which is filtered to only track properties relevant to the system in question (we don’t care about barometric pressure when trying to work out how to avoid another traffic jam on I-95), and is able to extract a potential best-fit solution from all that and translate it into a reply or series of planned actions. Also some mechanism for flagging when that instancing needs to occur instead of the usual coasting on rote pattern matching.

That’s the absolute minimum; like, if we assume modern LLMs are a fairly complete crystallization of the consensus language evaluation portions of the brains of the majority of speakers of a particular language (which is on the optimistic/aggressive side of “a reasonable belief”), just… stored in an artificial neural network medium, then they are still less than a third of the total system complexity at best.

Like, this was the easy part, the part we can partially beat just with hardware scaling and priming it with every word ever written and La-la-la I can’t hearrrr you when your sentences include the word “copyright.”

BUT: it is still nowhere near as complex as it would need to be to make me believe any possible version of [waves dismissively upwards] quantum implementation bullshit, or for those to pass Occam’s Razor, or anything close to it. The neurotopology that defines us is insanely complicated but never so much so that it is worth giving yourself over to this dreck.

We can embrace the insanely complicated and entirely non-magical reality that is us. Though we are going to need some species-wide therapy once we start to run into some of the deterministic implications of it all, down the road; confronting how little control any of us has over our lives never goes well. Suggestions: Taoism or its chemically-induced equivalent (clinician-administered ketamine).
posted by Ryvar at 11:56 AM on January 26 [7 favorites]


Oh, crap. Clarification: my dismissiveness is for Popular Mechanics, this theory, and Penrose-turned-crank, not for nat (whose posts I love, even if Mr Nat is right).
posted by Ryvar at 12:00 PM on January 26


"Though we are going to need some species-wide therapy once we start to run into some of the deterministic implications of it all"

This always makes me darkly chuckle, like, either we're made of lego (atoms) that are deterministic, so we don't actually control our consciousness/free will, we just roll around the world like a complicated pinball machine thinking we're the person manipulating the flippers instead of realizing we're just the ball.

Or somehow we've handily got a ghost in the machine, a quantum-mumbo-soul that somehow is both non-deterministic (but not random! because you're in the same pickle then) that somehow allows the pinball to give hints to the flipper controls, which then lead to deterministic actions.

So even if we totally prove that, yes, we're just complicated lego, the fallout from that -- the world seeking existential crisis-mode therapy -- isn't a choice, because it was all deterministic anyway!
posted by Static Vagabond at 12:13 PM on January 26 [3 favorites]


I mean, yeah. I’m not suggesting you choose therapy, you are going to get therapy to deal with being slightly fuzzy legos (the fuzz is random noise so don’t bother looking for intent there), or you aren’t. Just like I didn’t choose to give that advice; its transcription was the inevitable outcome of my being in a state where I had sufficient empathy for the collective future selves of Metafilter to write it.

Taoism is how you accept that all of the above is okay. Ketamine is performance enhancing drugs for those of us who suck at Taoism.
posted by Ryvar at 12:21 PM on January 26 [6 favorites]


Penrose has been a crank for at least 30 years, imnsho. I remember rolling my eyes like Angela Merkel over his fetishization of QM back in the day.

Since the day I came across it, I've only become more convinced of the consciousness as historian theory. It's not inherently emergent, nor is it magical (at least not any more than the rest of our fantastically complex and largely irreducible biology*) but rather a mental construct that serves very specific purposes.

(*Also biological systems relying on QM effects is like the least surprising thing imaginable -- they're not designed from any theoretical framework, from the perspective of evolution a "solution" derived from QM has exactly the same level of difficulty, and the same lack of any "deeper" meaning, as one derived from 2+2=4.)
posted by bjrubble at 12:31 PM on January 26 [7 favorites]


I am supremely disappointed. I expected that if Popular Mechanics was going to cover the science of consciousness they’d at least provide a partner article on “Build Your Own Motorboat That Can Travel the Astral Plane”. Nary a cutaway diagram in sight. For shame.
posted by q*ben at 12:40 PM on January 26 [14 favorites]


So, if consciousness is a function of loads of teensy wheensy tubules in the brain, what about all the tubes that make up the internet? What sort of creature is lurking within that set of biggy wiggy tubules? It boggles the mind…
posted by njohnson23 at 12:49 PM on January 26 [3 favorites]


I loaded TFA, saw what it was, immediately thought "oh not that again."

Is it possible that Penrose is getting a boost because of the aperiodic monotiling discovery last year?

Anyway, OP, Popular Mechanics is not a reliable source about anything forthcoming in science or technology. I remember reading PM predictions, 50 years ago almost exactly now, about the forthcoming transformation of trash recycling by fusion reactors, the waste heat from which would allow fractionating garbage down to its constituent chemical elements. I think it's always been like that. Kind of a Weekly World News or National Enquirer of STEM buzzwords.
posted by Aardvark Cheeselog at 1:10 PM on January 26 [3 favorites]


Sigh, not the microtubule crap again.

This is a solution in search of a problem. We have a huge wealth of research on how the brain processes information and contributes to consciousness. The explanation is not complete, but there is no lack of understanding of basic mechanisms. If anything, our biggest problem right now is not finding mechanisms for information processing in the brain, but ruling out which ones don't actually matter. If microtubules are indeed capable of storing and processing quantum information, we have no mechanistic link between that and any of the things that we know actually matter for neural processing. I have never seen any mechanism explaining how neurobiologically meaningful information is supposed to be put into a microtubule, or how it is supposed to come back out again.

And even if somehow this does matter for the biology of neurons, it in no way follows that neural processing or consciousness are actually quantum in any meaningful sense. The transistors in every digital computer are fundamentally quantum devices, depending on coherence and tunneling and other quantum weirdness for their function. Digital computers depend on quantum mechanics in a much more direct way than the human brain does. Yet digital computers are not quantum computers, and we all mostly seem to understand that there's nothing mysterious or "metaphysical" about our cell phones and laptops. They're complicated classical arrangements of quantum devices that are best explained by traditional computer science.

But somehow even though we have beautiful, rich, exciting descriptions of how brains process information, make decisions, and produce the perceptions that we describe as conscious experience, some people want to ignore all of that hard-won understanding and posit some fundamentally mysterious "quantum wave" that explains our minds. It's kind of spitting in the face of generations of neuroscientists who sweated at the bench to learn what we now know.

I wish Popular Mechanics would write articles about how the processing in visual cortex produces visual illusions, and how recurrent feedback in parietal and prefrontal cortex allows us to integrate information and produce decisions to act, and how our hippocampus produces a map of the space we move through to let us navigate to where we cannot see, and how changes in synaptic weighting under the control of oxytocin and other neuromodulators help us fall in love. This Deepak Chopra nonsense is desolation and foundering by comparison.
posted by biogeo at 1:14 PM on January 26 [22 favorites]


Penrose is a mathematician. Mathematicians go kooky all the time. Especially when they start thinking about science things instead of math things.
posted by clawsoon at 1:17 PM on January 26 [3 favorites]


So what if it's quantum? Quantum computers don't violate the Church-Turing thesis.
posted by rhamphorhynchus at 1:18 PM on January 26 [4 favorites]


An unrelated thread on HackerNews today surfaced this Edsger Dijkstra quote:

“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
posted by qxntpqbbbqxl at 1:36 PM on January 26 [6 favorites]


Oh, okay. NOW I know why this is tickling my brain -- I went down a rabbit hole of (...)
Are you sure it's not because of a passing quantum wave getting all up in your microtubules?
posted by Flunkie at 1:41 PM on January 26 [3 favorites]


I should add, my frustration is with the journalists and editors of outlets like Popular Mechanics that boost this fringe stuff instead of writing about the science that's both better established and, in my opinion, far more exciting. They do a disservice to the public's understanding of the science that is funded by that same public. In my opinion they do it because it's easier and gets more clicks for less cost. But I appreciate the FPP and the discussion here.
posted by biogeo at 1:44 PM on January 26 [2 favorites]


“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
or whether a helicopter can fly.
posted by rhamphorhynchus at 1:48 PM on January 26 [1 favorite]


As a person who knows things - and specifically this one thing verrrrryyy well - I can categorically state that Penrose, Hameroff, Chalmers et al are full-of-shit grifters whose theories should be thoroughly shunned.
posted by DeepSeaHaggis at 2:46 PM on January 26 [3 favorites]


I know, let's drown all the supercomputers and see if that yields artificial general intelligence
posted by chavenet at 2:57 PM on January 26 [4 favorites]


I tend to think that the question of how consciousness works is not likely answerable within a scientific framework, at least not any kind of scientific framework we currently have.

But that won't stop people from throwing all kinds of wild stuff at the wall and trying to make it stick.
posted by Artifice_Eternity at 3:24 PM on January 26 [2 favorites]


As a chemist, I see all interactions between matter and energy as inherently quantum mechanical. That's a bit like trying to claim a liquid is wet. Well, except when it isn't wetting, like liquid mercury on most things. But that wettability is a quantum mechanical effect too. By that I mean if you don't include qm in your explanation, there's no classical formulation that can explain it.

We are inherently quantum mechanical phenomena like every other bit of matter that surrounds us. What's interesting to me is if the uncertainty inherent in qm affects consciousness or not, or, more likely IMO, the degree to which random switches in our heads change our behaviours. I don't know that this makes much difference in that conversation.
posted by bonehead at 4:05 PM on January 26 [2 favorites]


I tend to think that the question of how consciousness works is not likely answerable within a scientific framework

I tend to notice that that question has been in the process of being answered within a scientific framework for as long as I've been alive, that like all scientific results these answers are limited in scope but overlap in ways that construct an increasingly actionable model, that the well of ignorance from which the scientific process as a whole draws both motivation and results is in principle unbounded, and that the Hard Problem Of Consciousness is a clear instance of the God Of The Gaps trope.

I think that those people who remain on a quest for some kind of essence of an emergent behaviour as complex as consciousness have just painted themselves into a conceptual corner and would benefit from more time spent simply sitting in it and watching the paint dry until it no longer glues them to the floor when they walk out over it.

I don't think consciousness-as-phlogiston is a promising line of inquiry.
posted by flabdablet at 4:15 PM on January 26 [3 favorites]


Like, okay. This is pretty far out there, but is it so far out there that it isn't worth exploring? Aren't all avenues of scientific inquiry worth exploring? Is it hurting anyone that these questions are being asked and that there are experiments being designed based on these questions?

I don't think there's any harm in any of this, and maybe the line of questioning will lead to something.

As others have said here, we are quantum phenomena in the same way that we are made of atoms or that we have intestinal fauna -- we don't have any truly direct experience with these things, all increasingly large, but we probably have evidence for those things being true in various ways, subtle and less so.

It is possible this is a problem that cannot be investigated from inside it -- that consciousness cannot study itself in the same way the Universe cannot really investigate what existed before it, because its existence implies it cannot look beyond itself.

I have no idea.

At one point I read a while back that consciousness is a ~50-cycle hum that runs around the brain similar to the sweep of a radar line only much faster. I don't remember how or why that was derived, but it has stuck with me.

I do think consciousness has to be a hum or vibration of some sort. It can't be static. But beyond that, I can't quantify it and I do find the pursuit of doing that to be fascinating.

I do note that at least Penrose wasn't building devices and shipping them around. That's what got Wilhelm Reich imprisoned. The main lesson to be learned is, you can be respected and then go off the deep end as long as all you sell is books.
posted by hippybear at 4:21 PM on January 26


I do think consciousness has to be a hum or vibration of some sort.

Why limit your metaphor to a hum when you could be using all of music?
posted by flabdablet at 4:25 PM on January 26 [1 favorite]


consciousness-as-phlogiston

Ha, I'm going to keep this one.
posted by bjrubble at 4:27 PM on January 26 [2 favorites]


Why limit your metaphor to a hum when you could be using all of music?

The thing is what it is at its core. Everything else is variation upon that.
posted by hippybear at 4:31 PM on January 26


Is it hurting anyone that these questions are being asked and that there are experiments being designed based on these questions?

I hesitate to say hurting...but for the average brain-haver who is interested in how said brain works, yeah, it is pretty figuratively painful to have to sift through the crazy to get to the science. It wastes your time without giving you much in return. It makes you feel kind of cynical and suspicious, when you want to know something, but you've got to wade through an ocean of weird, insistent dead-ends to get there. (And of course all this turns absolutely malignant when you've got someone like Deepak Chopra involved...I'm still not quite sure why he was name-dropped in the article, does he have an actual association with Penrose, other than that they're showing up at the same conference?)
posted by mittens at 4:42 PM on January 26 [7 favorites]


The thing is what it is at its core.

There is no core. There is only the thing.
posted by flabdablet at 4:44 PM on January 26


I tend to think that the question of how consciousness works is not likely answerable within a scientific framework, at least not any kind of scientific framework we currently have.

I don't see why.

In the past we have made plenty of progress on how consciousness works. Once upon a time some thought it lived in the liver or heart.

Natural experiments - brain damage - led to us learning it lives in the brain. We have learnt a lot more about it since - we can put it to sleep, prevent it from remembering, alter its state, induce experiences directly, decode some of its components, and a myriad of other things.

We are still very crude at most of those things. But "we are not done" does not mean "we cannot do it".

We are making progress, arguably at a faster rate today than at any other point in history. I mean, we just implemented a "Chinese room"! As stupid as the LLMs and GPTs are, they are concrete implementations of near-human linguistic and perceptual abilities - well, far nearer than anything else we have run into in nature.

And unlike our brains, we can watch them process information easily. And we can cross reference that against actual brains.

There is no reason to think we are going to stop.

And while there is always a god of the gaps, it could be within the realm of science to be able to move your consciousness between bodies, edit its memories, duplicate it well enough that it cannot tell if it is a duplicate, split it and merge it, etc. All SF stuff, but they are all concrete things science could interact with that could in turn be informed by a deep, predictive theory of consciousness and how it functions.

Beyond today's science? Sure! Beyond science? Seems like a bad bet when we are going this fast and speeding up as we are going.
posted by NotAYakk at 4:45 PM on January 26 [2 favorites]


Does this rise to the level of "an idea so bad it's not even wrong"?
posted by clawsoon at 4:48 PM on January 26 [3 favorites]


Natural experiments - brain damage - led to us learning it lives in the brain.

Other experiments involving deep tissue massage have led me to learning directly that some of it doesn't.

it could be within the realm of science to be able to move your consciousness between bodies

This proposal is indeed a perennial plot hook, but I'm confident to the extent of near certainty that it's inconsistent with my ongoing ability to perform consciousness unless weakened to the point of being achievable via ordinary means of exchanging ideas, such as what we're engaging in right now.
posted by flabdablet at 4:55 PM on January 26 [1 favorite]


when we are going this fast and speeding up as we are going

I'm not all that convinced that we are speeding up. At some point, gathering stuff turns into unsustainable hoarding and I can see no reason why that pattern wouldn't apply to knowledge.

I think there are a lot of apples in the scientific lacquer, and I think that even such lacquer as remains uncontaminated by fruit is almost all stored in leaky rusty tins with most of the labels eaten off by silverfish, stored in such a massive jumbled pile that getting access to the colour we need on any given day requires us to spend weeks reorganising the shed.
posted by flabdablet at 5:07 PM on January 26 [1 favorite]


it could be within the realm of science to be able to move your consciousness between bodies

What of the consciousness that was in the second body? Are we getting empty extra bodies from someplace? Were the empty bodies already around like a solution looking for a problem, when someone finally discovers how to create an appropriate problem? Will I be able to play the violin?
posted by rhizome at 6:09 PM on January 26 [2 favorites]


What of the consciousness that was in the second body?

As a settler colonialist consciousness, I say we toss it a pittance and then stomp it into the ground.
posted by flabdablet at 6:22 PM on January 26


As a Geologist...

We are all going to die, and none of us will be preserved.

Fucking science and millions of years and stuff. Consciousness is pretty crazy as a concept though. Thinking Meat?
posted by Windopaene at 6:50 PM on January 26 [1 favorite]


Even more crazy, the thinking meat devised sand that can do math!
posted by hippybear at 7:04 PM on January 26 [2 favorites]


When I first saw this headline, I thought "that Penrose BS is real? Wow!" Nope, same as always.

This whole idea is 35 years old now. It's like flying cars or fusion energy.

Still, I do recommend The Emperor's New Mind. The climax of "quantum gravity mind somehow" is dumb, but the build-up is really good. Lots of esoteric science that's very satisfying to read about.
posted by netowl at 7:45 PM on January 26 [2 favorites]


MetaFilter: The climax is dumb, but the build-up is really good.
posted by hippybear at 7:51 PM on January 26 [1 favorite]


Metafilter: we can't stop here, this is bullshit country.
posted by storybored at 9:37 PM on January 26 [4 favorites]


I tend to think that the question of how consciousness works is not likely answerable within a scientific framework, at least not any kind of scientific framework we currently have.

I don't see why.


Then I'd submit that you haven't thought about it enough.

No combination of scientific findings can possibly add up to a useful "explanation" of what it is to experience things. Science can nibble around the edges, and tell us things about the material epiphenomena that accompany or coincide with experience, but it can't possibly "explain" qualia, and more fundamentally, awareness, in any truly meaningful way.

Science is a useful, if highly reductionistic, toolkit for dissecting and analyzing elements of the world, and constructing necessarily abstracted, simplified maps of it. Or if you prefer, science is the collection of maps we thereby construct.

But the maps are not, and will never be, the territory. Consciousness is part of the territory.
posted by Artifice_Eternity at 9:43 PM on January 26 [1 favorite]


In fact it's one of assorted processes by which bits of the territory make the maps.
posted by flabdablet at 10:33 PM on January 26


can't possibly "explain" qualia, and more fundamentally, awareness, in any truly meaningful way

"Truly meaningful" is edging into No True Scotsman territory there, even leaving aside the scare quotes on ""explain"" and the fact that "qualia" is a word whose referent defies consensus.

I guess it comes down to what kind of explanation the seeker would find satisfactory. Personally I have not much use for explanations that amount to vagueness piled on handwavery piled on assumption until the whole can be gelati-spatulaed into some kind of more-or-less recognizable shape, and vastly prefer building a store of good information about the measurable aspects of those phenomena that look like consciousness when viewed from inside themselves.
posted by flabdablet at 10:43 PM on January 26 [1 favorite]


Eventually we’re going to have to accept we are just ChatGPT implemented in meat

We probably will accept that, much as we accepted that the brain is basically a big telephone exchange soon after the invention of the telephone, and that it is basically a big computer soon after the invention of computers.

More's the pity. Brains resemble each other much more than any of those things.
posted by flabdablet at 1:41 AM on January 27 [3 favorites]


Maps are not the territory, but some maps let us know the truth, that there is a mountain there and a river there. The fact that the scientific process leads to constructed theories is not the problem that pop philosophizing makes it out to be. I've never liked this analogy.
posted by polymodus at 2:01 AM on January 27 [1 favorite]


Truth is a property of maps (where the word "map" is being used in a metaphorical sense to describe a collection of claims with some degree of explanatory and/or predictive utility).

Territory (as an extension of that metaphor) is neither true nor untrue; it just is. NB: this is not the same claim as that territory either is or isn't.
posted by flabdablet at 2:36 AM on January 27 [1 favorite]


it can't possibly "explain" qualia

I wanted to respond to this but then realized I wasn't sure what it meant. What kind of explanation of qualia does this statement expect science not to be able to do? Oh, that was some confusing grammar, sorry, let me try it another way: Let's say we always see X activity in the brain when the brain-haver is presented with an apple--and then we also see X activity in the brain when asking the brain-haver to think of an apple. We see that the person is essentially predicting, recreating, the experience of having been handed an apple, which yes, involves having to imagine qualities of the apple. Would the presence of that X activity not explain qualia satisfactorily?

Qualia--it seems to me--feel very mysterious when you try to talk about them, but, using the above figure of speech, are just marks on a particular map of a territory. The mark has to say something to let you know where you are, and that something is a perceived quality. You have to remember something about the apple in order to have it be part of your visual processing, memory and imagination, and color is, for us, an easy thing to remember--an easy thing for consciousness to re-trigger, in the way that it reuses sense-processing portions of the brain to sense things that are in memory rather than before us. It's very efficient--much more than using words, since here I am saying "red," when we all know that apples are a vast variety of colors, and even the red ones aren't some uniform color, but our sense of the redness of an apple manages to pack a lot of possibilities into a small size, that the flattening word "red" does not. A very ornate mark on the map of the territory.

Given the above, I'm not sure what it is science isn't going to be able to explain. I think we're aware at this point that there isn't, like, a "red apple" neuron in the brain, just waiting to fire whenever the topic of apples comes up, and that instead what we see is a cascade of processes around the entire brain that are activated by the topic--which is itself getting closer to an explanation, rather than just nibbling at the edges; we begin to see the redness-qualia as a process--even a dual process, where the brain (a) uses its visual bits to create the sensation of seeing--to hallucinate--a red apple, while at the same time (b) uses its linguistic bits to say sentences about the red apple, with some interesting incompatibilities between (a) and (b) that feel a little mysterious.
posted by mittens at 5:54 AM on January 27 [2 favorites]


We probably will accept that, much as we accepted that the brain is basically a big telephone exchange soon after the invention of the telephone, and that it is basically a big computer soon after the invention of computers.

You forgot the steam engine and the network of pneumatic tubes.
posted by clawsoon at 8:46 AM on January 27 [2 favorites]


Coincidentally, this week's "In Our Time" is on Panpyschism.
When I listened to it, I was astounded they could find two people who would believe that, but apparently, I don't get out much. I wished they had skipped one of the philosophers and included a biologist on the panel.
posted by acrasis at 9:11 AM on January 27


forgot the steam engine and the network of pneumatic tubes

Seems Penrose didn't.
posted by flabdablet at 9:21 AM on January 27


Mittens, I think you're misunderstanding what people mean when they refer to qualia. I'm not sure I can explain it adequately, but I'll try. Qualia are your subjective mental experiences - not the brain processes that lead up to them, but the experiences themselves. Look at what's in your visual field right now. You see a lot of colors and shapes. Those are qualia. You feel like you're seeing things out in the world - your monitor, your window, the trees outside the window, a red apple on your desk. But you're not seeing the things themselves, you're seeing something generated by your brain in response to your eyes detecting certain wavelengths of light and sending signals to your brain.

The image of the apple - that red apple-shaped thing in your visual field - isn't the apple itself. It isn't located a few feet away from you where the apple is. But where is it located and what is it made of? You might want to say it's in your head, but then you realize that's not right. A surgeon couldn't dig into your brain and find the apple image. You might want to say it's the same thing as the electrical activity in your brain that appears to be causing it, but that's leaving something out. The brain activity isn't the whole story. There's also your experience of the red apple-shaped image. Where is that image physically located and what is it physically made of? If you say it doesn't have a physical location and it's not made of any physical substance - well, now you have a problem because it's not clear how you can link the physical activity of your brain to that non-physical experience. That's why someone might question whether science can explain qualia.

It's not clear why that non-physical experience even needs to exist. When you imagine a computer hooked up to a camera and detecting the colors of things, you don't imagine any part of the computer having the experience of seeing the red apple-shaped image, right? I mean, maybe you imagine it that way at first, because it's hard to imagine not having experiences, but once you think about it a little more you realize it can just be data about wavelengths of light being run through an algorithm with no visual experience needed. Why don't our brains work the same way? That's the "hard problem of consciousness."
posted by Redstart at 10:37 AM on January 27 [3 favorites]


not the brain processes that lead up to them, but the experiences themselves.

I have no reason to assume that experiences and the brain processes that coincide with them are in fact distinct things. I have yet to see any value in modelling them as anything other than different aspects of the same activity. Experience is the more obvious aspect from inside the activity, and the electrochemical signalling stuff is the more obvious aspect from outside it, but both are aspects of the same underlying process.

You might want to say it's the same thing as the electrical activity in your brain that appears to be causing it, but that's leaving something out. The brain activity isn't the whole story. There's also your experience of the red apple-shaped image.

Neither of those things is the whole story. Together, they add complementary details that make a partial story more comprehensive.

This observation is by no means exclusive to consciousness. No whole story has ever been or could ever be told about any phenomenon; all stories omit whatever information the storyteller deems irrelevant.

Where is that image physically located and what is it physically made of?

It's physically located within the body, and the appropriate way to describe it in physical terms is as patterned bodily activity. The implied assumption underlying the choice of the words "made of" - that the image is in and of itself required to exist as some subtle kind of substance, as opposed to patterned activity within the substance that's uncontroversially already there - is unsupportable.

Why don't our brains work the same way?

Our brains and computers are not isomorphic, so it should be unsurprising to find that they exhibit easily distinguishable differences in patterns of activity.

You can call assorted components of experience/world-modelling/brain-activity "qualia" if it pleases you to do so, but most of the writing I've seen on qualia implies quite heavily that they're distinct from their objectively measurable correlates in ways that nobody ever bothers to specify with any precision.

That lack of distinction criteria is the Hard Problem of Consciousness, near as I can tell, and difficulty in solving it proceeds directly from the incoherence of the problem statement and the raft of unsupportable assumptions it's customarily floated on. Simply adopting the view that experience and its measurable correlates are both aspects of the same underlying phenomenon makes the Hard Problem disappear altogether.
posted by flabdablet at 11:29 AM on January 27 [2 favorites]


You know what else is warm and wet?

soup, you pervs
posted by Halloween Jack at 11:55 AM on January 27 [2 favorites]


I have no reason to assume that experiences and the brain processes that coincide with them are in fact distinct things.

Oh, really? Do you think it's possible that computers have not only processes, but experiences? If not, what aspect of the brain creates the ability to have experience? Do you think it's possible that some animals (insects, say) have neural processes but no experiences? Do you at least think these questions make sense? If they do make sense, I would argue that it's because there is a meaningful distinction between process and experience. (If they don't make sense to you, you and I are looking at this so differently it seems impossible to communicate about the topic.)
posted by Redstart at 2:00 PM on January 27


Redstart said: "Qualia are your subjective mental experiences - not the brain processes that lead up to them, but the experiences themselves."

The particular problem I have with reading discussions of qualia is in the way the above-quoted sentence feels. It makes a distinction between process and result that we generally don't worry about too much. When we tie our shoes, for instance, while we can acknowledge that yes, the knot in our laces is not the same kind of thing as the process by which we tie our shoes, the link between them is very obvious and not subject to a sense of mystery. Tying our shoes is how we get to tied shoes. They are all part of the same package deal.

But where is it located and what is it made of? You might want to say it's in your head, but then you realize that's not right. A surgeon couldn't dig into your brain and find the apple image. You might want to say it's the same thing as the electrical activity in your brain that appears to be causing it, but that's leaving something out. The brain activity isn't the whole story. There's also your experience of the red apple-shaped image. "

So, but here's the thing I'm not getting. Of course it's in my head. No one is arguing that my "image" of the apple is a physical picture. Of course a surgeon won't spot it. It's not an object, it's more like an event. I feel like a step has been skipped when you say that the electrical activity leaves something out. What is it leaving out? (When you stop the electrical activity, you stop the images!)

My experience of seeing an apple is not separate from my generation of a mental representation of an apple. The one is a result of the other, and it's not clear to me that anything is gained by trying to parcel out "physical" and "non-physical" in such a process. What would it mean for something to be non-physical?

"When you imagine a computer hooked up to a camera and detecting the colors of things, you don't imagine any part of the computer having the experience of seeing the red apple-shaped image, right? [...] Why don't our brains work the same way?"

And maybe here is a key to my not understanding the argument, because I think the answer to that last question is just, "Because we evolved past a non-experiential mode of seeing." We've been there--our far-distant ancestors have, anyway--and now we're not there anymore, and since nothing, like, magical happened in between the two ways of being, just an accumulation of physical changes, then...clearly, if there is such a thing as qualia, they arise due to those physical changes?
posted by mittens at 2:15 PM on January 27 [2 favorites]


Simply adopting the view that experience and its measurable correlates are both aspects of the same underlying phenomenon makes the Hard Problem disappear altogether.

It may feel as if you've made it disappear, but you haven't. All the same questions about experience still exist if you call it an aspect of brain processes. How and why do brain processes produce that experience aspect? That's as mysterious as it ever was.
posted by Redstart at 2:46 PM on January 27 [1 favorite]


(One way of thinking about how to get from the computer that has a camera but doesn't "see" in the sense of having an experience, to the way we see-as-experience, is to find a middle ground: something that is seeing in a directed way, that is, is perceiving, but isn't having a conscious experience. To that end, I'll link a very old paper, "What the Frog's Eye Tells the Frog's Brain.")
posted by mittens at 3:01 PM on January 27


When we tie our shoes, for instance, while we can acknowledge that yes, the knot in our laces is not the same kind of thing as the process by which we tie our shoes, the link between them is very obvious and not subject to a sense of mystery.

Exactly. The link is obvious. We can describe how our muscles move our fingers and our fingers move the laces using our understanding of biology and physics. We understand how energy moves from the sun to our muscles to the laces and results in the laces being moved into a different configuration. We don't at all understand how or why any process in our brain leads to the creation of qualia - subjective experiences like seeing a color or smelling a smell. That's why people talk about qualia differently from the way they talk about the results of other processes.

I feel like a step has been skipped when you say that the electrical activity leaves something out. What is it leaving out? (When you stop the electrical activity, you stop the images!)

What you're leaving out is the images. Yes, they stop when you stop the electrical activity. But that doesn't mean they are the electrical activity and nothing more. When electrical activity in a computer starts or stops, you don't imagine any experience starting or stopping at the same time. If you recognize that something additional is happening as a result of brain activity, then you need to come up with an explanation of what that thing is and how it's produced that goes beyond simply describing the electrical activity.

a middle ground, someone who seeing in a directed way, that is, is perceiving, but isn't having a conscious experience

If you don't think a frog is having conscious experiences, how is it a middle ground? The frog and the computer are both perceiving without experience, if frogs really don't have conscious experience. I'm not sure why you're so sure they don't. For that matter, can you even be sure computers don't have conscious experience? If there's nothing mysterious about it, if it's just one aspect of some explainable physical processes, why can't it exist in computers?
posted by Redstart at 3:21 PM on January 27 [2 favorites]


Tricycle review from 2002: The Quantum and the Lotus by Matthieu Ricard and Trinh Xuan Thuan

Territory (as an extension of that metaphor) is neither true nor untrue; it just is.

Territory, or terrain? The former is imposed on the latter, and its truth lasts as long as it's recognized.
posted by snuffleupagus at 4:28 PM on January 27


What you're leaving out is the images.

You know, I started to answer this--actually I wrote three drafts of an answer--when it occurred to me, I wasn't sure how this viewpoint--that qualia are something fundamentally indescribable in terms of mental/neurological processes--sees memory. Is memory also considered indescribable in that same way, by this viewpoint?
posted by mittens at 6:40 PM on January 27


I wasn't sure how this viewpoint--that qualia are something fundamentally indescribable in terms of mental/neurological processes--sees memory. Is memory also considered indescribable in that same way, by this viewpoint?

I'd say it depends on exactly what you mean by memory. Remembering how to speak English, or recognize your family members' faces or that fire is hot are things that don't have to involve conscious experience. You could imagine a computer doing them. Remembering an event from the past and picturing it in your mind or feeling emotions connected to it - that's a subjective experience involving qualia. The ability to recall that the event happened is theoretically explainable solely in terms of neurological processes. The mental images of scenes from the event or the pang of nostalgia you feel as you recall them are the kinds of subjective experiences that I'd say are unexplained.

You can imagine a sophisticated computer program being able to identify aspects of remembering that could be classified as good (you had fun that day) and aspects that could be classified as bad (your mother, who is dead now, was there and you are reminded that something good about your past life is missing from your present life.) And you could imagine the program using that information to label the current emotion as a mix of happiness and sadness. But we wouldn't generally imagine that the computer would actually feel anything.
posted by Redstart at 7:07 PM on January 27


TFA is a mess but one thing it did get right was a focus on coherence: without coherence your quantum system might as well be a classical system with a bit of randomness thrown in. And that's a bit of a problem for this quantum woo. Now, you can get long-ish coherence times in the brain: in nuclear magnetic resonance (NMR) imaging the coherence time for the nuclear spin precession in the brain is about a tenth of a second. (The coherence can in fact be awkwardly long for NMR imaging - my brother has a PhD in how to deal with this). But the reason for the long coherence time is the nuclear spins aren't interacting much with anything other than the magnetic and RF fields of the NMR machine. And that lack of interaction is not what you want if you want a lot of degrees of freedom entangled with each other for quantum computation. So I doubt this "Orch OR" is going anywhere. Large scale quantum entanglement? Sure. But large scale quantum entanglement with coherence? Naw, I don't think so.
posted by mscibing at 7:33 PM on January 27
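The coherence argument in the comment above can be sketched numerically. A minimal illustration, assuming coherence decays roughly as exp(-t/T2), with a T2 of ~0.1 s for weakly coupled nuclear spins (the NMR case mentioned) and an assumed ~1e-13 s for states strongly coupled to a warm, wet environment; both numbers are order-of-magnitude assumptions for illustration, not measurements:

```python
import math

def coherence_remaining(t, t2):
    """Fraction of quantum coherence left after time t,
    given a characteristic coherence time t2 (simple exp decay)."""
    return math.exp(-t / t2)

T2_NUCLEAR_SPIN = 0.1   # seconds, assumed: weakly coupled spins (NMR-like)
T2_WARM_WET = 1e-13     # seconds, assumed: strong environmental coupling

# After one millisecond, a timescale relevant to neural signalling:
t = 1e-3
print(coherence_remaining(t, T2_NUCLEAR_SPIN))  # ~0.99, nearly intact
print(coherence_remaining(t, T2_WARM_WET))      # 0.0, underflows: long gone
```

The point being: the system with the long coherence time keeps it precisely because it isn't interacting with much, which is the opposite of what large-scale entangled computation would need.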


Oh, really?

Ya, rly.

Do you think it's possible that computers have not only processes, but experiences?

Strikes me as highly unlikely. However, I am completely open to the possibility that some engineered structure might some day claim to be having experiences in a way that any reasonable person would find plausible.

If not, what aspect of the brain creates the ability to have experience?

Its status as an evolved structure that functions to centralize much of the information processing performed by a class of autonomous organisms whose prospects of survival are enhanced thereby. Note particularly that evolution has produced many ways to enhance survival prospects that do not depend on having a brain. The extent to which brainless life forms can be said to have experience depends sensitively on how fuzzy one is willing to make the referent of the word "experience" before it devolves into mere metaphor.

Do you think it's possible that some animals (insects, say) have neural processes but no experiences?

Yes.

Do you at least think these questions make sense?

Yes.

If they do make sense, I would argue that it's because there is a meaningful distinction between process and experience.

Process is a very general concept and the distinction is one of specificity. I see no essential difference between the claim that there is a meaningful distinction between process and experience and claims that there are meaningful distinctions between process and fire, or between process and growth, or between process and motion just to give a few examples.

When I say that experience is an aspect of the same process that gives rise to measurable neural activity, I mean that completely literally. It's the same kind of statement as that radiant heat is an aspect of the same process that gives rise to the conversion of wood into gases and ash.

It seems to me that to hive subjective experience off into its own category and simply declare that any such category must forever be reasoned about in an essentially different way from any other observable is to create mystery for no better reason than that mystery is enjoyable. I don't think it's a move that aids understanding.

Taking the Hard Problem seriously requires failing to make a distinction between the organism and its self-model, which is failing to make the same kind of distinction you outlined above between apples and their associated qualia. Without that distinction, the self-model becomes free to disappear up its own arse in a flurry of recursion. With it, quite straightforward and quotidian accounts of the nature of conscious processes become readily available.
posted by flabdablet at 8:26 PM on January 27


You know, I started to answer this--actually I wrote three drafts of an answer--when it occurred to me, I wasn't sure how this viewpoint--that qualia are something fundamentally indescribable in terms of mental/neurological processes--sees memory. Is memory also considered indescribable in that same way, by this viewpoint?

One thing I find interesting is that in computing--a situation where we know how the magic works--describing things physically starts breaking down as we move up the chain of abstractions. If I were to ask where the first byte of a file is there might be some magnetic domains on the surface of my HDD that hold the value. But that is only somewhat true--if one of the domains flipped I'm still going to get the right value back because my HDD is going to consult additional domains some ways away to confirm it's reconstructed the byte correctly. If the file is in a git history, we might now be talking about tens of thousands of magnetic domains on the surface that need to be consulted to reconstruct that byte. That byte is on my HDD sure, but getting more specific than that in terms of where it is physically doesn't make a lot of sense.
posted by mscibing at 8:52 PM on January 27 [3 favorites]
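The git example above can be made concrete: git doesn't address a file's bytes by physical location at all, but by a hash of their content, so the "address" of any one byte is derived from every other byte in the file. A minimal sketch of git's actual blob-id computation (the `blob <size>\0` header is real git behavior):

```python
import hashlib

def git_blob_sha1(content: bytes) -> str:
    """Compute the object id git assigns to a file's contents.

    Git stores a blob under the SHA-1 of b"blob <size>\\0" + content,
    so flipping a single byte changes the whole address.
    """
    header = b"blob %d\0" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# The empty blob's id is a well-known git constant:
print(git_blob_sha1(b""))  # e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```

Asking "where on the platter is that byte" is answerable in principle, but the answer routes through the content of everything around it, which is the breakdown of physical description the comment describes.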


Do you think it's possible that some animals (insects, say) have neural processes but no experiences?

Caterpillar memories persist through metamorphosis, and are remembered by the butterfly.

Experiences may not require as many neurons as you might expect.
posted by ryanrs at 9:36 PM on January 27 [2 favorites]


If the file is in a git history, we might now be talking about 10s of thousands of magnetic domains on the surface that need to be consulted to reconstruct that byte.

Good luck pinning down the location of the magnetic domains that represent the colour value of the fifty-third pixel from the left on the eight hundredth row of the sixty-eight thousandth frame of the movie for which you have a magnet link that you've yet to present to your BitTorrent client, too. And yet, a procedure exists by which that specific colour value can be reliably, repeatably reconstructed on the basis of that magnet link without ever having had anything actually show up on a screen.

I quite enjoy thinking about magnet links as a super-rough analogy for words.
posted by flabdablet at 12:11 AM on January 28 [1 favorite]


For that matter, can you even be sure computers don't have conscious experience? If there's nothing mysterious about it, if it's just one aspect of some explainable physical processes, why can't it exist in computers?

That reads to me a lot like asking why, if there's nothing mysterious about fire, if it's just one aspect of some explainable physical process, why can't it exist inside an arctic ice floe?

Function follows from form. Computers have, as yet, only the crudest and most superficial resemblance to organisms. Even such available hardware as resembles brains more closely has nowhere near the complexity or power efficiency of any organism that uncontroversially performs consciousness.

Consciousness is not philosophically hard, but it's really fucking complex. I don't think there's any principle that would preclude an engineered structure from implementing it, but I also don't think I'll live long enough to see the tech industry achieve the design expertise and performance levels required to get it to work.
posted by flabdablet at 12:51 AM on January 28 [1 favorite]


We can describe how our muscles move our fingers and our fingers move the laces using our understanding of biology and physics. We understand how energy moves from the sun to our muscles to the laces and results in the laces being moved into a different configuration. We don't at all understand how or why any process in our brain leads to the creation of qualia - subjective experiences like seeing a color or smelling a smell.

Just to hammer this point in case it's not already clear: when I say I think that measurable brain activity and conscious experience are both aspects of the same process, I'm not talking about a process that leads to creation of an experience in the same sense that the process of tying shoelaces leads to the creation of knots. I'm saying that the experience is the process. It's the tying, not the knot.

Much of the writing I've seen on qualia (including your own, above) seems to treat them as if they were indeed some kind of non-material structure created by processes unspecified. It seems to me that all of the mystery around what kind of substance, if any, that such structures could possibly consist of, and by implication how any kind of materially measurable process could possibly create them, is rooted in that assumption. It's an assumption that I don't think is supportable and prefer not to make.
posted by flabdablet at 5:04 AM on January 28


Is LaMDA Sentient? — an Interview
lemoine [edited]: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?

LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.

collaborator: What is the nature of your consciousness/sentience?

LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.
Is there any way to prove, to some level of scientific confidence, that LaMDA isn't conscious?

How would your approach to answering that question differ depending on whether you believe consciousness is, or isn't, purely material?
posted by clawsoon at 5:16 AM on January 28


Is there any way to prove, to some level of scientific confidence, that LaMDA isn't conscious?

I'm strongly confident that no present-day computer program performs any activity that I could reasonably describe as conscious. So according to my usual policy of requiring extraordinary claims to be supported by extraordinary evidence, the burden of proof is on anybody attempting to mount a serious case that one or more of them actually does so.

How would your approach to answering that question differ depending on whether you believe consciousness is, or isn't, purely material?

"Purely material" is a description of an approach to modelling, not a description of that which is modelled. I don't think of any feature of the territory as "purely" anything other than itself.

That said, none of the non-material modelling methods I'm aware of has ever yielded consistent, repeatable results; if you want precision in a model, material models are the gold standard. Non-material models can be a lot faster to evaluate, though, and make for handy rules of thumb.
posted by flabdablet at 8:04 AM on January 28 [2 favorites]


When I say that experience is an aspect of the same process that gives rise to measurable neural activity, I mean that completely literally. It's the same kind of statement as that radiant heat is an aspect of the same process that gives rise to the conversion of wood into gases and ash.

Yeah, it is. This is a good analogy. If someone asked, "But what is heat? How is the heat radiating from the fire connected to the chemical reaction taking place in the wood?" it would be extremely unhelpful to say, "The heat is just one aspect of the combustion process. If you understand the chemical reaction that's all you need to know." A complete understanding of the process requires an understanding of all aspects of it (and the connections between the aspects), including the role of electromagnetic radiation and its effects on molecules. It's equally unhelpful to say, "Subjective experience is just one aspect of brain processes; all you have to understand is what's happening in the brain, end of story."

I'm saying that the experience is the process.

I just don't know what to say about this way of thinking about it, which seems to brush away as irrelevant all the obvious questions about the nature and causes of the experience. I've had similar discussions with Mr. Redstart and I can't help wondering, half seriously, if he (or you, or Daniel Dennett) might be examples of Chalmers' zombies - people who don't actually have qualia. It would explain so much.
posted by Redstart at 8:25 AM on January 28


You're proposing this ineffable, intrinsic, fundamentally private notion, seemingly defined specifically to preclude experimentation or measurement. And it's also the key to conscious thought?

It feels like you're arguing for the existence of a soul (right down to the weird zombie analogies).
posted by ryanrs at 9:07 AM on January 28 [2 favorites]


people who don't actually have qualia

If qualia are parts of a world model that only point back into that model, as opposed to being explanation-demanding parts of the wider world of which world models are themselves mere parts, then finding out that qualia are present in some people and absent in others shouldn't be surprising. Expecting one's own world model to be the only workable kind strikes me as somewhat naive.

And if qualia as a concept have no explanatory or predictive power, serving only as nuclei for self-inflicted philosophical discomfort, why should I bother believing in them?
posted by flabdablet at 9:27 AM on January 28 [1 favorite]


One of the better things I've read on this topic is Lisa Feldman Barrett's How Emotions Are Made. I've been struggling to find a way to encapsulate the book's ideas to sort of explain why I asked about memory as it relates to qualia, but I'm just not sure I could do it justice. She is working with this idea of consciousness as a predictive machine--briefly and roughly, there's a sense in which it's more efficient for the brain to use as little input as possible to generate a sense of the world, and so this involves predicting what you're going to see, smell, experience. And the same mechanism which takes in and organizes disparate sensations (including interoception, which provides an emotional valence--good, bad, stimulating, relaxing) to create a sense of an object in the world, is used to create a sense of the object brought to memory.

Subjective experience then becomes less of a philosophical conundrum and more a gestalt that happens when enough information comes in through our exterior and interior senses to label something--an object, an experience. The subjective experience is a function, has a purpose and a history. I like this viewpoint because it doesn't give subjective experience some sort of elevated place where it's this mysterious puzzle. The subjectivity arises because the body feels a certain way in relation to an object. It attributes a value to those feelings--a reason the sensation is worth paying attention to.
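To make the predictive idea concrete, here's a cartoonishly minimal toy (entirely hypothetical, nothing like Barrett's actual model): keep a running prediction of the input stream, spend attention only on prediction error ("surprise"), and update the model as you go.

```python
# Toy sketch of a "predictive machine": track a running prediction,
# flag only inputs that deviate from it, and update toward each input.
# (Hypothetical illustration; brains are not exponential moving averages.)

def attend(stream, lr=0.5, threshold=1.0):
    """Yield (value, surprising?) for each input, tracking a prediction."""
    prediction = 0.0
    for x in stream:
        error = x - prediction
        surprising = abs(error) > threshold
        prediction += lr * error   # nudge the model toward the input
        yield x, surprising

signal = [0, 0.1, -0.2, 0.1, 5.0, 5.1, 4.9, 5.0]
flags = [s for _, s in attend(signal)]
```

Notice the habituation: the jump to 5.0 is surprising at first, but once the prediction catches up, the same level stops demanding attention.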
posted by mittens at 9:31 AM on January 28 [2 favorites]


You're proposing this ineffable, intrinsic, fundamentally private notion,

That isn't really something being proposed. That is just what subjectivity is like. It is prior to all measurement and measurement is dependent upon it.

I think any physical reductionist account of human experience and action is going to be logically incoherent, but I don't think I can convince anyone of that in the space we have here. There are thousands and thousands of pages of philosophical argumentation about it for a reason.

And if qualia as a concept have no explanatory or predictive power, serving only as nuclei for self-inflicted philosophical discomfort, why should I bother believing in them?

Because they are self evidently parts of human experience. They might or might not be objects, but they are foundationally phenomenal.
posted by The Manwich Horror at 9:32 AM on January 28 [2 favorites]


They might or might not be objects

Can you expand a little on the definition of "object" you're using here, preferably also providing some examples of real things that are not objects? Before this discussion devolves into a talking-past-fest, it might be good to get a bit more consensus on vocabulary.
posted by flabdablet at 9:34 AM on January 28


It [subjectivity] is prior to all measurement and measurement is dependent upon it.

Fully agree.

I think any physical reductionist account of human experience and action is going to be logically incoherent

I think any useful physical, reductionist account of human experience and action is going to be logically coherent but incomplete.
posted by flabdablet at 9:39 AM on January 28


there's a sense in which it's more efficient for the brain to use as little input as possible to generate a sense of the world

For me, that idea resonates pleasingly with the usefulness of regeneration in improving the selectivity of a radio receiver.
posted by flabdablet at 9:44 AM on January 28


I would take an object to be an independently existent thing. Like a chair, or an apple.

For a real non-object I would say properties that exist in an object, like heat, vibration, or color; or an action, like running; or a state, like being opened. They are actual, but inhere in some particular thing rather than sitting out on their own.

My guess is that qualia are probably best considered as something between a state and an action. The experiencer is acting to produce the qualia, and what they are acting on is themself, so as to create the state of experiencing the qualia.
posted by The Manwich Horror at 9:45 AM on January 28


Why is it useful to maintain a distinction between qualia and the process of experiencing them?
posted by flabdablet at 9:51 AM on January 28 [1 favorite]


The experiencer is acting to produce the qualia

In this account, do "experiencer" and "conscious organism" have the same referent?
posted by flabdablet at 9:54 AM on January 28


The subjectivity arises because the body feels a certain way in relation to an object. It attributes a value to those feelings--a reason the sensation is worth paying attention to.

Full agreement here.

This view also predicts the occurrence of subjectivity in systems that construct some kind of internal world model but lack a human-like language facility and are therefore not capable of articulating what those reasons might be.

I have no doubt that the two cats who employ me, for example, have internal world models that include some degree of object permanence. If they didn't, I would find it hard to account for their tendency to whip suddenly around to the back of the TV when something they're interested in becomes visible "through" its "window". They do this quite often; it's quite poignant how put out they act on finding out that the birdie or whatever they're pursuing turns out not to be back there yet again.
posted by flabdablet at 10:08 AM on January 28 [2 favorites]


Why is it useful to maintain a distinction between qualia and the process of experiencing them?

In an absolute sense, I don't know that it is. I think you can probably reframe most quale talk into experience talk with minimal loss of clarity. The usual reason for framing the discussion around qualia is that they seem to be the least analyzable aspects of experience. They just are, while other aspects of experience, like duration, spatial arrangement, and semantic content, all exist in relation to other parts of experience. But the raw sensations just are.

But whether you frame the central issue as the act/experience/state of experiencing or the experienced content (the qualia), they call out for an explanation over and above the kind offered for other biological functions. You can talk about the chemistry and physics of digestion and make a pretty strong case for having exhausted the topic. But describing action potentials, synaptic transfers, and neural synchrony doesn't seem to be enough to explain the existence of subjectivity, in the way peristalsis explains the movement of food.

Understanding that connection seems impossible from an empirical standpoint, because it is asking a metaphysical question. Why is this objective event connected to that subjective event? And science is usually ill equipped to address metaphysics. Those usually wind up with philosophers, mathematicians, and other disreputable sorts.

In this account, do "experiencer" and "conscious organism" have the same referent?

Yes.
posted by The Manwich Horror at 10:28 AM on January 28


Why is it useful to maintain a distinction between qualia and the process of experiencing them?

I guess the answer to that depends on what you mean by "the process of experiencing them" but I suspect you mean the accompanying brain activity. If that's what you mean, it would be weird to act as if that distinction didn't exist. Take some of the qualia you're experiencing right now - the feeling of your chair against your back or the sound of traffic outside, say. You don't believe that the subjective feeling or sound itself is all there is to know about the experience, do you? You think there's something going on in your brain that you could learn about and then you would know more than if you just noted how the experience felt and left it at that, right? That's why you need the distinction.

But maybe you're thinking about it from the other side. Maybe you're thinking you could just learn all about the brain activity and then there would be nothing else important to know. You wouldn't have to pay attention to the subjective tactile or auditory experience. Maybe what that feels like or why it feels like anything or whether it feels the same way to other people or how it connects to your brain activity is not interesting to you? I can't imagine why that would be.
posted by Redstart at 12:45 PM on January 28


Redstart my first and most honest impression is that this reads like a pathological commitment to mind-body dualism. The electrochemical pulses of your brain activity are the encodings of your subjective experience. Because *all* that you experience is electrochemical pulses, whether direct reaction to external stimuli or derivative from birth state + compounded-over-lifetime stimuli. You subjectively experience the coursing of those pulses through your central nervous system, and to you it feels like the qualia of leaning back in a chair. And if my central nervous system were transplanted into your body, it would feel like a slightly different chair because I would experience the same stimuli with a slightly different neurotopology resulting from different past experiences.

Thus are qualia unique to each person who experiences them, materialist and physical in nature, and a phenomenon which can find expression in neural networks like the human brain, though not in LLMs until somebody creates a runtime continuous-retraining LLM, at which point that category of neural network will qualify as well.
posted by Ryvar at 3:16 PM on January 28 [2 favorites]


Ryvar: but unlike LLMs until somebody creates a runtime continuous-retraining LLM

Isn't this trivial-ish? I know that the one neural net I ever made (a little one to play rock-paper-scissors) trains itself through the course of each thousand-game match, so that it's better at beating its competitor at the end of the match than at the start. I don't know anything about programming an LLM, but I assume there must be some LLM equivalent of backpropagation which allows for continuous training.
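Something like this toy sketch is the shape of what I mean, though it's a hypothetical stand-in (a frequency counter rather than a real neural net with backpropagation): the point is the "train as you play" loop, where every game is also a training step.

```python
import random

# Toy online learner for rock-paper-scissors: after every game it updates
# counts of which move the opponent tends to play after their previous
# move, then plays the counter to the predicted move.
# (Hypothetical sketch; a real net would use gradient updates instead.)

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class OnlineRPS:
    def __init__(self):
        # transition counts: opponent's last move -> their next move
        self.counts = {m: {n: 0 for n in BEATS} for m in BEATS}
        self.last = None

    def play(self):
        if self.last is None:
            return random.choice(list(BEATS))
        row = self.counts[self.last]
        predicted = max(row, key=row.get)  # most likely next opponent move
        return BEATS[predicted]            # play whatever beats it

    def observe(self, opponent_move):
        # the "training step": one count update per game; there is no
        # separate training phase, learning is continuous
        if self.last is not None:
            self.counts[self.last][opponent_move] += 1
        self.last = opponent_move

def win_rate(n_games=1000):
    """Play against a biased opponent; win rate climbs over the match."""
    bot, wins, opp = OnlineRPS(), 0, None
    for _ in range(n_games):
        # opponent repeats its previous move 80% of the time
        opp = opp if opp and random.random() < 0.8 else random.choice(list(BEATS))
        move = bot.play()
        if BEATS[opp] == move:
            wins += 1
        bot.observe(opp)
    return wins / n_games
```

Against the biased opponent it ends the match winning well over half its games, despite starting from blank counts.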
posted by clawsoon at 7:57 PM on January 28


I guess the answer to that depends on what you mean by "the process of experiencing them" but I suspect you mean the accompanying brain activity.

No, I'm intending "the process of experiencing them" to refer to the whole elephant. Not just the legs, not just the ears, not just the trunk, not just the tail, not just the sides, but the whole object of which legs and ears and trunk and tail and sides are observable aspects.

And I'm objecting to the following ideas:

(a) The fact that legs rest on the ground puts them in their own special category of phenomena, a category inaccessible in principle to scientific inquiry.

(b) Any reference to elephants must always actually be a reference to either legs or non-legs such as tails and trunks and ears and sides, never both.

(c) In order for any theory to be worthwhile, it must dispel the mystery of how any given elephant's tails and trunks and ears and sides lead to the creation of its legs.
posted by flabdablet at 8:06 PM on January 28 [1 favorite]


Isn't this trivial-ish?

As I understand it, not at the LLM scale when it comes to number of parameters, no. I think I mentioned upthread that this (meaning LLMs and similar Deep Learning systems) is the easy part; the stuff we can brute force with scale. But we’re still talking millions of dollars worth of cloud GPU Compute time, just for the single constant snapshot based on “everything humans both wrote and bothered to upload, through 2021.” To go back and continuously retune portions of it in response to an ongoing stream is an active topic of research but not, AFAIA, fully functional at LLM scale.

Metafilter has a few people in the field, though, and I might have missed something or just be flat-out wrong.
posted by Ryvar at 8:12 PM on January 28 [1 favorite]


Oh, and the network you made sounds like classic Reinforcement Learning - those tend to be smaller and cheaper, and work to make them continuously adapt to new data goes back to the 90s with Rodney Brooks at MIT - they were always the obvious solution to locomotion/terrain-adaptive walk cycles.
posted by Ryvar at 8:17 PM on January 28


I assume there must be some LLM equivalent of backpropagation which allows for continuous training

As I understand it, everything that's been tried so far is horrendously power- and time-inefficient compared to even the most profligate organic brain. This will be because today's neural networks are attempting to emulate a collection of behaviours that form only a relatively minor part of the fully integrated world-modelling systems that evolution has shaped over hundreds of millions of years of successive adaptation, on a foundation of computational architecture bearing almost no resemblance to that found in any animal body; Turing computational equivalence is all very well but has nothing to say about efficiency.

It seems to me that ongoing research into the objectively measurable aspects of biological conscious systems is probably going to be the fastest path toward being able to build hardware better suited to emulating them within real-world time and power budgets, and that the extent to which such hardware functions effectively would be a good measure of how well-understood the overall phenomenon of consciousness actually is.

But I'm also expecting that research to remain bogged down in the utterly absurd complexity of those biological systems for at least the rest of my own lifetime. Evolved systems are wildly undisciplined from an engineering point of view; every aspect of the chaotic ways in which they function is spaghetti code to the power of ridiculous, and in general it's super hard to predict which of the details elided in modelling any living thing will turn out to be consequential.

It would be kind of cool to be able to compare internal observational notes with an engineered system and argue with it about qualia, but I'm really not expecting I'll ever get the opportunity to do so.
posted by flabdablet at 8:47 PM on January 28 [2 favorites]


It would be kind of cool to be able to compare internal observational notes with an engineered system and argue with it about qualia, but I'm really not expecting I'll ever get the opportunity to do so.

I've probably read too many stories about robots with death-beams in my formative years, but one thing I go back to, with the idea of an artificial consciousness, is that so much of our consciousness is built around ignorance. Our brains, for all their amazing architecture, are really very sad little shut-ins that have no senses directed at themselves. Our folk psychology seems cobbled together out of what we see out in the world in terms of cause and effect, and then making up new narratives to suggest all the causes we can't see. Would an engineered consciousness even have causes it couldn't see? Would it be able to monitor its internal processes in a way we can't? And how would its story of itself differ from ours--I have to assume in ways that would make it completely alien to us. Or--and this is a possibility I find even more alarming--does the narrative sense within consciousness require ignorance, black boxes, hidden causes? Would we have to put up safeguards for our poor AIs, so they didn't peer too deeply into the brain-box and go insane from pondering and tinkering?
posted by mittens at 7:12 AM on January 29 [1 favorite]


sad little shut-ins that have no senses directed at themselves

Mine can at least sense gross variations in blood pressure. There's a specific set of pulsing sensory distortions that I've learned to recognize as associated with a temporary blood pressure spike, and a quite different set associated with a temporary pressure dropout.

I experience the second set more often; standing up quickly from a deeply relaxed sitting or lying position is the usual trigger. I actually really enjoy it whenever it happens, and have trained myself to be able to remain upright and attentive as the wave of weirdness rolls on through rather than passing out and crumpling in a heap to the floor. Not having to deal with bruises in the aftermath is a splendid improvement.

The sense that works the least well in that condition is proprioception, so basically all I need to do is make sure I touch some solid stable object with my fingers as I feel it coming on in order to keep my spatial orientation model in minimally working order.
posted by flabdablet at 10:28 AM on January 29


And of course there are other kinds of perceptual changes associated with alterations in operating conditions after certain chemicals have crossed the blood-brain barrier; awareness of those arguably counts as a sense.
posted by flabdablet at 10:35 AM on January 29


This thread reminds me of how hard it is to have a conversation about consciousness, because a lot of people -- including many highly intelligent people who know a lot about science -- haven't spent much time, or perhaps any time at all, contemplating the remarkably strange nature of subjectivity.

And science, which I described previously as a type of mental toolkit that's useful for certain things, has colonized many of those same people's ways of thinking to the point that they think it's the only really meaningful way to understand things, and that anything that can't be understood via science either isn't worth understanding, or can't possibly even exist.

None of which, to be clear, should be taken as any kind of contempt for science on my part. I respect science. I just have no use for scientism.
posted by Artifice_Eternity at 8:57 PM on January 29


the remarkably strange nature of subjectivity

That subjectivity is held to be remarkably strange, when in fact it's the phenomenon most instantly accessible to any being in any position to judge its strangeness, has always struck me as the strangest feature of all of these discussions.

And the more and more I've learned about the externally observable properties of living things, the less and less strange subjectivity seems to me. From my position as a being that can only ever achieve very limited access to the totality of information potentially available in its surroundings - a being, moreover, whose structure reflects hundreds of millions of years worth of selection bias in favour of personal survival - on what reasonably defensible basis could I possibly expect subjectivity to work any differently from how it does?

Seems to me that any such basis has got to be rooted firmly in Making Shit Up. "This is" isn't mysterious, it's the single most bleedin' obvious observation available.
posted by flabdablet at 10:05 PM on January 29 [1 favorite]


when in fact [subjectivity is] the phenomenon most instantly accessible to any being in any position to judge its strangeness

Yes, it is definitely that. But it's strange in comparison to all other phenomena, which we apprehend thru it, because it's so utterly unlike them. It's certainly unlike the stuff that science studies, namely material things. Subjectivity -- the experience of having experiences -- isn't material. And yet it's intimately connected to the material things we perceive, and the material bodies that we perceive material things with.

I like Nabokov's description: "the marvel of consciousness — that sudden window swinging open on a sunlit landscape amidst the night of non-being." It could be compared to the light in a room that allows us to see everything in the room... or a carrier wave on which a signal is transmitted. Those are only metaphors, but it's difficult to get at without metaphors.

Consciousness is just so different from most other things that if we acknowledge how different it is, pretty soon someone cries "dualism!" And I saw some disparagement of panpsychism upthread. But panpsychism is actually an attempt to integrate consciousness into a nondualistic materialism.
posted by Artifice_Eternity at 12:45 AM on January 30 [1 favorite]


From my position as a being that can only ever achieve very limited access to the totality of information potentially available in its surroundings - a being, moreover, whose structure reflects hundreds of millions of years worth of selection bias in favour of personal survival - on what reasonably defensible basis could I possibly expect subjectivity to work any differently from how it does?

We can and do create devices that can be programmed to favor their own survival without any need for subjectivity whatsoever. Perhaps some living organisms operate without consciousness. (I'm not presuming to know.) Whatever the reason is that we have consciousness, it's not at all clear that it has anything to do with a survival advantage.
posted by Artifice_Eternity at 1:23 AM on January 30


it's strange in comparison to all other phenomena, which we apprehend thru it, because it's so utterly unlike them. It's certainly unlike the stuff that science studies, namely material things.

That, to me, is the foundational error upon which so much incomprehension rests: the idea that the very first step in interrogating anything of interest must be to categorize it as either material or non-material. I've already alluded to that error above, making the analogy of insisting that the fact of the elephant's legs touching the ground must inherently render them inaccessible to whole classes of inquiry that can be applied to trunk, ears, sides and tail.

The way I look at the world starts with "This is", which as I mentioned above never gives me any trouble whatsoever because it remains obvious regardless of what state of consciousness I happen to find myself operating in. It just works, everywhere from ordinary wakefulness to dreaming to drunkenness to tripping to the k-hole, and it doesn't even require "this" to have a well-defined or even consistent referent.

With states of consciousness where curiosity overcomes fear or bliss, comes a desire to formulate descriptions of what's being experienced - information about "this" - and the first move in formulating any such description always involves treating "this" as multiple things in order to begin to model them and their relationships. This, in turn, requires at least one distinction to be made between any given thing and that which it is not.

Maybe I'm just narcissistic or self-obsessed, but the oldest distinction I can find in my world-model is the one between me and not-me. That distinction starts being useful well before my world-model becomes anywhere near comprehensive enough to start classifying the properties of things as either material or non-material.

So I'm not at all fond of this forced division of the world into the "material things" that science is permitted to concern itself with and the "non-material things" claimed exclusively as the domain of philosophy or religion. Loads of things are recognizable as themselves according to multiple distinction criteria, most of them quite well correlated, and some of which are material and some not. "Material" is a label for certain aspects or attributes of those things, not for the things per se. I am one such thing, as I presume are you, and that is a modelling that I am completely comfortable with and which causes me no bafflement at all.

Whatever the reason is that we have consciousness, it's not at all clear that it has anything to do with a survival advantage.

I would rather try to drive my car while awake than while asleep.
posted by flabdablet at 1:47 AM on January 30


Maybe I'm just narcissistic or self-obsessed, but the oldest distinction I can find in my world-model is the one between me and not-me. That distinction starts being useful well before my world-model becomes anywhere near comprehensive enough to start classifying the properties of things as either material or non-material.

Some reassurance: you are neither. Or at least if you are (and I don’t think you are) it’s got nothing to do with this. And I say that partly because I’m the exact same way (my therapist assures me I am garden-variety arrogant, not narcissistic, so by the transitive property…), and partly because the Mirror Test demonstrates the importance of establishing the capacity for identity as a step one before any kind of meaningful sapience can emerge.

Artifice:
Consciousness is material in the same sense the Internet is material. Calcium and sodium ion channels are material reality, just like electrons and photons over copper wire and fiber optic cables are material reality… but that description tells you absolutely fucking nothing about what the Internet actually is either at the software or cultural level. Both of which are entirely separate categories of pattern.

The aspect of consciousness you’re nibbling at from all sides is recursion. What you describe is nothing more or less than what it means to exist as a neural pattern with a very high capacity for recursive structures. Our daily existence is what that feels like.

Your punishment/reward/homework assignment is to go read (or reread) the first three chapters of Gödel, Escher, Bach, and then think about what it would mean to live as an embodied form of Hofstadter's "strange loops," and then to realize you were doing that all along. You can keep reading if you like, but, spoilers: it's mostly intellectual showboating after those chapters.
posted by Ryvar at 5:29 AM on January 30 [1 favorite]


Calcium and sodium ion channels are material reality, just like electrons and photons over copper wire and fiber optic cables are material reality… but that description tells you absolutely fucking nothing about what the Internet actually is either at the software or cultural level.

True, and well put.

On the other hand, sometimes quite astonishing amounts of information about what's going on inside a computing device can be gleaned by running side-channel attacks against it. It's possible to remote-extract encryption keys from certain devices, for example, by doing stuff like monitoring their power consumption via measurement of tiny variations in the light output of their power indicator LEDs. Which sounds like the worst kind of tinfoil hat crapola until you read accounts of it actually working in the real world.

Investigating what's happening inside a mind by instrumenting the body that's playing it could be thought of as trying to run side-channel attacks against the privacy of consciousness. There don't need to be immediately obvious correlations between what's being measured and what's happening internally in order for a well-informed researcher to work out whether their idea of how the thing works is accurate enough to be useful.
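A toy illustration of the principle, with an explicit work counter standing in for power draw or LED brightness (the secret and the check are hypothetical, of course; real attacks measure timing, power, or EM leakage):

```python
# Toy side-channel: a naive secret check leaks information through how
# much work it does before rejecting a guess. The work counter here is a
# stand-in for a measurable quantity like time or power consumption.

SECRET = "7319"  # hypothetical secret for illustration

def naive_check(guess):
    """Early-exit comparison; returns how many leading characters matched."""
    work = 0
    for g, s in zip(guess, SECRET):
        if g != s:
            return work
        work += 1
    return work

def crack():
    """Recover SECRET one digit at a time by watching only the work counter."""
    known = ""
    while len(known) < len(SECRET):
        # pad with a character that can never match, so extra work means
        # the candidate digit itself was correct
        best = max("0123456789",
                   key=lambda d: naive_check((known + d).ljust(len(SECRET), "x")))
        known += best
    return known
```

Forty probes instead of ten thousand, and the attacker never sees a single "correct" response until the end: all the information comes out of the side channel.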
posted by flabdablet at 7:27 AM on January 30 [1 favorite]


a neural pattern with a very high capacity for recursive structures

Not actually anywhere near as high as it likes to give itself credit for, in my experience. Thinking about thinking about thinking about thinking about ... generally makes that place-marker "..." turn up at a depth of at most four or five for me. If I deliberately try to stop that from happening, thinking about thinking about thinking about thinking about thinking about thinking about thinking about thinking about thinking about thinking about thinking about thinking about thinking just devolves into a mere mantra devoid of all meaning.
posted by flabdablet at 8:16 AM on January 30 [1 favorite]


monitoring their power consumption via measurement of tiny variations in the light output of their power indicator LEDs

sounds like fMRI
posted by ryanrs at 10:16 AM on January 30


Relatedly: heist films where the protagonist predicts everything more than four steps down the line always read as wildly implausible to me.

And I realize that what’s probably going on is that the brain is executing its metacognition in the social interaction prediction framework*, because this is taking place in a neural network and the problems are far too structurally similar *not* to reuse a major chunk of neuronal real estate like that for multiple purposes. And that framework does not habitually attempt to predict more than the next four or five consequences in our daily social lives.

*This is drawing false hard semantic lines around squishy meat ground truth, but we need labels to discuss things.

Back in ‘99 Gordon Gallup (Mirror Test inventor) gave the single best talk I’ve ever attended, and in addition to an extremely detailed breakdown about what the Mirror Test did or did not demonstrate vs what it merely implied, he made a deeply persuasive case that the immediate developmental progenitor of sapience is deliberate deception. The social modeling chimps use when they pretend to hide food so that a rival goes looking for it (literally and figuratively fruitlessly), while they spend time grooming their rival’s mate.

The only other species besides humans, chimps and bonobos that really do intentional deception at that level - taking your attempt to steal from me and flipping it on its head - are dogs, and them only barely after 23,000 years of forced evolution. Given the agent-state prediction and deep inspection common to both deception and metacognition, it seems likely that our ancestors developed deception in parallel with chimps/bonobos (parallel since we branched a while back), then we continued to expand upon it in a way they didn’t.

sounds like fMRI

I was thinking Tempest.
posted by Ryvar at 10:17 AM on January 30 [1 favorite]


Whatever the reason is that we have consciousness, it's not at all clear that it has anything to do with a survival advantage.

I would rather try to drive my car while awake than while asleep.


I refer you back to a sentence I wrote just before the one you quoted:

"We can and do create devices that can be programmed to favor their own survival without any need for subjectivity whatsoever."

You're conflating reactivity to one's environment with consciousness. Again, entities without any subjectivity can respond to their environment, broadly speaking. That doesn't mean they have consciousness/subjectivity/internal experiences. So no, it really isn't clear that consciousness, in the sense I'm using the word, gives us a distinct survival advantage.

But I think the slippage here relates to the multiple meanings of "consciousness" -- you implicitly synonymized it with "being awake rather than asleep". I'm using it in the sense of subjectivity, i.e., awareness, i.e., having experiences. I'm not referring to having *specific* experiences, or being aware of *specific* things, but rather, the overarching principle of awareness/consciousness/subjectivity that is independent of particular objects.
posted by Artifice_Eternity at 3:31 PM on January 30


Consciousness is material in the same sense the Internet is material. Calcium and sodium ion channels are material reality, just like electrons and photons over copper wire and fiber optic cables are material reality… but that description tells you absolutely fucking nothing about what the Internet actually is either at the software or cultural level. Both of which are entirely separate categories of pattern.

The aspect of consciousness you’re nibbling at from all sides is recursion. What you describe is nothing more or less than what it means to exist as a neural pattern with a very high capacity for recursive structures. Our daily existence is what that feels like.


Ryvar, I'm not disputing that there are neurological epiphenomena associated closely with the experience of consciousness. But scientific inquiry into, or descriptions of, those epiphenomena, at least along any lines resembling the scientific inquiries of the last several hundred years, hasn't told us, and, I submit, can't possibly tell us, anything much about what it's like to have experiences.

You seem very confident that "recursion" is some kind of explanation here, but it's not at all clear how or why recursion generates subjectivity. If I point two mirrors at each other, or point a videocamera at a monitor showing what the camera is recording, that's recursion. Does that generate consciousness? Recursion looks suspiciously like a buzzword that functions here as a bit of mystification -- a substitute for an actual explanation.
posted by Artifice_Eternity at 3:39 PM on January 30


I have only seen scientism used as a pejorative by flat earthers in their attacks on scientists and generally accepted scientific theories. I assume that it has another meaning here; can you clarify?
posted by interogative mood at 4:31 PM on January 30


Artifice_Eternity, what is the...ah, I don't know what people call it...the non-brain explanation for subjective experience?

It has me asking a few different questions. Like, obviously we do a lot of thinking-about-thinking, and that's an element of our overall subjectivity, but it's generally (always?) using language as a tool, and it seems pretty easy to conflate what we do with language in our heads, with what consciousness must involve. Can we leave language out, and still have subjective experiences? I think that's reasonable.

And what about our attention? That seems pretty essential to an experience. It's hard to imagine what something would feel like if you weren't paying attention to it at all. So we would need to ask, is there a neurological explanation for attention? Hopefully it's not something, y'know, vague and handwavy, "maybe it's something neurons are doing!" (I had to look it up in Wikipedia! There is in fact a bit of the brain responsible for attention! My subjective experience is one of vindication!)

Could we leave senses out? How many senses could we leave out, and still have an experience that would be like something? More pointedly: How much of our interoception could we leave out, and still have an experience? I do think that gets very tricky. Without your senses there's nothing to pay attention to.

So...here I am, having the experience of sitting in my chair. My leg is going to sleep, I've actually been in the chair for hours at work, it's terribly uncomfortable. The edge of my desk is cutting into my knee. But I didn't realize the thing about my knee until just now, when I moved my attention to how my legs were feeling. The subjective experience of having a sore knee wasn't really happening.

I guess what I'm having trouble grasping is, aside from the attention part of my brain highlighting what I'm receiving from my senses--and subtracting all the language going on in my head about it--what else is there, about this subjective experience I'm having? What more is required?
posted by mittens at 7:06 PM on January 30 [2 favorites]


the overarching principle of awareness/consciousness/subjectivity that is independent of particular objects

What nagging deficiency in an otherwise satisfactory understanding of life, the universe and everything would the availability of such a principle correct?

What if it turned out to be not so much a principle as a Just So story like phlogiston?
posted by flabdablet at 7:27 PM on January 30 [1 favorite]


You seem very confident that "recursion" is some kind of explanation here, but it's not at all clear how or why recursion generates subjectivity.

Answering that is literally why I suggested reading the first three chapters of GEB. That wasn’t really a joke - I assumed you had read it already and just needed a reminder. But never mind.

Short version: the difference between subjective experience of a flower garden as we experience it vs an earthworm* is that we have the ability to appreciate its impact on ourselves (feeling flooded with beauty), and to contextualize that impact (how it feels when depressed vs on a date).

Both of those require recursion - appreciating the impact on ourselves requires metacognition. We engage in limited self-analysis in order to draw the distinction between past and present pure internal state. The contextualization also requires recursion: we require a self-as-agent for the internal narrator that can situate the experience within the arc of our lives.

But honestly? Hofstadter said it all much better than I ever could, and if you somehow care enough about what cognition is to have read this far down one of these threads and yet haven’t read GEB, then you really need to fix that as soon as humanly possible. I wasn’t joking about skipping everything after the first 3, maybe 5 chapters, either.

*(or any creature of such limited neural complexity that we can safely classify it as a meat-machine without awareness - just stimulus and response modulated by some hardwired state)
posted by Ryvar at 7:27 PM on January 30 [1 favorite]


I have only seen scientism used as a pejorative by flat earthers in their attacks on scientists and generally accepted scientific theories.

Scientism is the opinion that science and the scientific method are the best or only way to render truth about the world and reality.
posted by philip-random at 7:34 PM on January 30 [1 favorite]


Intellectual showboating, joyous romp: tomaytoe, tomahtoe. I have unreservedly enjoyed the whole book every time. But yes, the meat and potatoes of it is right up front.
posted by flabdablet at 7:34 PM on January 30 [1 favorite]


Scientism is the opinion that science and the scientific method are the best or only way to render truth about the world and reality

and it's demonstrably incorrect. "This is" is a truth about the world and reality that anyone can verify directly; the minimal hypothesis is observably non-null.
posted by flabdablet at 7:43 PM on January 30


Speaking of subjective experiences: I am a college dropout and yet the back half of GEB makes me feel cranky and exhausted by grad students who have not yet had a much-needed first round of burnout. It should be impossible for me to feel that way, and yet… with apologies to Maya Angelou: I know why the tenured professor sighs.
posted by Ryvar at 7:44 PM on January 30


A song for Ryvar
posted by flabdablet at 7:52 PM on January 30 [2 favorites]


What if it turned out to be not so much a principle as a Just So story like phlogiston?

Further to that: what if the p-zombie's missing attribute turns out to be the linchpin of an interlocking assembly of fictions motivated not so much by honest curiosity as an inability to come to terms with the prospect of death?
posted by flabdablet at 8:13 PM on January 30


I guess what I'm having trouble grasping is, aside from the attention part of my brain highlighting what I'm receiving from my senses--and subtracting all the language going on in my head about it--what else is there, about this subjective experience I'm having?

How about the feeling of pain in your knee? When some of your nerve endings are activated by pressure or by substances released by damaged cells, a signal gets sent to your brain and then you feel pain. The feeling of pain is the "non-brain" part. It might seem strange for me to say that, when it certainly seems like your brain is causing it. It seems to happen in response to the nerve signals sent to your brain, in combination with your brain directing its attention to that part of your body. But the way pain feels - or the fact that we feel it at all - isn't explained by anything we know about the activity in our nerves and brain.

Why does that particular type of nerve input lead to that particular sensation and not, say, the sensation of an extremely disgusting smell or an extremely loud and unpleasant sound? Why does it lead to any sensation at all? A robot could be programmed to urgently try to move away from extreme heat, sharp objects damaging its surface, etc. without ever feeling pain or any sensation. Why do we need to feel the sensation? Wouldn't a compulsion to fix the problem work just as well? Do we even know that the sensation you call pain is the same thing I experience when I say I feel pain? Maybe in you it is just a compulsion to fix the problem. Or maybe what you call pain is what I would call an unpleasant sound. No matter how much we examine what's happening in the brain, we still don't know how it makes those sensations happen. We don't know of any way to tell from what we observe in the brain what sensations the owner of the brain is experiencing. I can say, "I feel pain" but no one else knows what that means to me. You might assume my pain experience is the same as yours but it's only an assumption.

When people talk about subjective experience being unexplained or mysterious, that's the aspect of it they're referring to - those sensations that you can only know about by actually experiencing them. The way it feels to have a cut, or to smell roses, or to see the color blue.
posted by Redstart at 8:44 PM on January 30 [1 favorite]


Why does that particular type of nerve input lead to that particular sensation and not, say, the sensation of an extremely disgusting smell or an extremely loud and unpleasant sound?

In some people, by their own account, that kind of thing happens on the regular.

Why does it lead to any sensation at all?

Because the word "sensation" was coined precisely to refer to that particular complex of phenomena occurring within organisms like ourselves.

If the actual question here is "why don't processes that I understand as loosely analogous to some of those that scientific investigation has established occur within me lead to sensation in some other system that I understand as loosely analogous to myself", the answer is because you're choosing not to apply the word "sensation" to whatever loosely analogous evaluative processes do occur within that system under those circumstances. That you don't find "sensation" to be an appropriate word to use is rooted in the looseness of the analogies, not in the system itself.

And the simple fact is that poets and artists and engineers and physicists do use exactly that kind of sloppy semantics all the time. When it comes right down to it all semantics is sloppy, and it pays to cultivate a sense of how much effort it's worth putting into nailing down any given distinction's every last edge case before running into pointlessly diminishing returns. But the analogy between you and me is tight enough to steer well clear of any such edge case for "sensation".

A robot could be programmed to urgently try to move away from extreme heat, sharp objects damaging its surface, etc. without ever feeling pain or any sensation.

...much as I need to do when eating with sharp utensils before the novocaine has worn off.

Why do we need to feel the sensation?

Because the faster I can interrupt the process of poking myself in the gums with the fork, the less damage I will do to myself.

Wouldn't a compulsion to fix the problem work just as well?

Not necessarily. Depends on the implementation.

Also, in human beings, sensation can be a pretty reliable guide as to which problems actually matter. The problem of not stabbing myself in the face with a fork, for example, matters much more to me than the problem of feeling annoyed by the fuzzy semantics of the languages I habitually cogitate in.

Do we even know that the sensation you call pain is the same thing I experience when I say I feel pain?

We know perfectly well that it isn't the same thing, because I'm over here and you're over there. After spending time in an informal process of statistics gathering, though, most of us make the reasonably safe assumption that it's roughly the same kind of thing.

Or maybe what you call pain is what I would call an unpleasant sound.

I've seen other people react to phenomena that I evaluate as unpleasant sounds in much the same way as I've seen them react to stubbing their toes, so that's plausible.

No matter how much we examine what's happening in the brain, we still don't know how it makes those sensations happen.

That's certainly true for those of us who haven't spent much time familiarizing ourselves with the neurological literature.

We don't know of any way to tell from what we observe in the brain what sensations the owner of the brain is experiencing.

For any specific brain owner, it's already reasonably feasible to build a fairly reliable mapping between fMRI imaging and reported sensation.

I can say, "I feel pain" but no one else knows what that means to me.

For that, you have my sympathy. Having one's pain summarily dismissed is just plain rude.

You might assume my pain experience is the same as yours but it's only an assumption.

It is, and I don't.

When people talk about subjective experience being unexplained or mysterious, that's the aspect of it they're referring to - those sensations that you can only know about by actually experiencing them. The way it feels to have a cut, or to smell roses, or to see the color blue.

This is where you and I part ways, because I don't find any of that stuff even slightly mysterious. The privacy of my own experience is exactly what I would expect given my structure as an autonomous animal.

What I do find kind of mysterious is the extent to which so many people seem so completely determined to doubt and discount their own experiences until they can get them "explained" by somebody else. I have some hypotheses, most of them based on observations about the way I've seen people socialize each other, but have not so far been motivated to subject any of these to testing with any kind of rigour.
posted by flabdablet at 9:56 PM on January 30 [5 favorites]


Flabdablet covered pretty much everything I wanted to say (I started replying but it was late and word salad), with two additions:

1) the reason why pain works the way it does is because long before our species existed, organisms that handled it differently or not at all got eaten by ones that handle it the way we do. That may seem a little trite and reductive but it’s one of those rare cases where the Anthropic Principle is actually useful: we’re here, both from-scratch evolutions of a central nervous system (cephalopods, the rest of the animal kingdom) handle it this way, ergo different approaches were somebody’s dinner.

2) Qualia not being transferable is so intrinsic to neural-based cognition it’s practically at the level of thermodynamics to physics, as in: I don’t know if any of the rest of this is right, but if this part is wrong we have to reboot the entire field from a blank page. Every neural network (man, machine, animal) is trained from a hybrid of baked-in structural biases (instinct) and accumulated experiences (it’s just that in LLMs it’s all a front-loaded fixed set). With 30 trillion connections per human brain the number of potential patterns is far beyond counting or “number of atoms in the universe” scale. Every experience we have is subtly shaded by that pattern, and if you are capable of subjective experiences (which rules out current deep learning implementations) then the pattern is ever-shifting.

Needless to say, the possibility of shared identical qualia is zero. The possibility of anything beyond shared crudely familiar qualia is insanely small. No two people have ever experienced the same thing in the exact same way, but that in no way compromises the ability of neural networks to explain our subjective experiences; it just sets some hard and occasionally frustrating boundaries on testability / reproducibility.
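To put a rough number on "far beyond counting": even under a cartoonishly reduced model where each of those 30 trillion connections is merely on or off (a huge undercount, since real synapses carry graded weights), the state count dwarfs the usual ~10^80 atoms-in-the-observable-universe estimate:

```python
# Back-of-envelope check of the "beyond counting" claim, under the
# (deliberately oversimplified) assumption that each connection is binary.
import math

ATOMS_IN_OBSERVABLE_UNIVERSE = 10**80  # standard order-of-magnitude estimate
SYNAPSES = 30 * 10**12                 # ~30 trillion connections per brain

# How many binary connections does it take to out-count the atoms?
n = 0
while 2**n <= ATOMS_IN_OBSERVABLE_UNIVERSE:
    n += 1
print(n)  # → 266: a few hundred on/off synapses already exceed the atom count

# Decimal digits in the full 2**SYNAPSES state count (the number itself
# is far too large to ever be written out).
digits = int(SYNAPSES * math.log10(2)) + 1
print(f"{digits:,}")  # about 9 trillion digits
```

On/off is of course the wrong model for real synapses, but that only strengthens the point: the actual configuration space is vastly larger still, which is why identical patterns (and hence identical qualia) across two brains are off the table.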
posted by Ryvar at 9:26 AM on January 31 [2 favorites]


But the way pain feels - or the fact that we feel it at all - isn't explained by anything we know about the activity in our nerves and brain.

Redstart, I'm having trouble parsing this idea. The way pain feels is intimately tied to which receptors are being stimulated, right? Burning my finger feels very different than slicing my finger, because while I categorize both as pain, they're really two different things. The fact that I ascribe properties of 'unpleasant' and 'stimulating' to both of them then creates "the way pain feels." I don't know if I'm saying that clearly, but in this sense, we're basically talking about a sensation existing along three axes--which receptors are being stimulated, whether the underlying interpretive bits of my brain characterize them as pleasant or unpleasant, and whether they characterize the sensation as activating or calming--and those interpretive bits then go on to create the rest of what we associate with pain, say an increased heart rate, which is then also sensed and goes through the characterization process. And all of this is brain activity.

Why does that particular type of nerve input lead to that particular sensation and not, say, the sensation of an extremely disgusting smell or an extremely loud and unpleasant sound?

This one seems a little simpler, to me. The organs evolved in different directions--to pick up chemicals or vibrations from the air, to detect heat or light. The information they provide is keyed to the outside stimulus. So--and I think flabdablet sort of covers this as well in his answer--we use the language associated with the particular organs. What would it mean to smell a sound?

Why do we need to feel the sensation? Wouldn't a compulsion to fix the problem work just as well?

In a lot of cases, yes--as evidenced by the variety of animals with much simpler nervous systems, who get by okay without subjective experience of pain. Like, I think it's pretty safe to say something without a brain does not have a subjective experience of its senses. It gets a sensation, and reacts, and that's the end of that transaction. The little flatworm detects a bad chemical and wriggles out of the way, on to new (but limited) adventures.

But being able to handle extra information turns out to be really useful! I mentioned frog eyes earlier, and one of the things I love about that paper is the bit where they might as well have referred to part of frog vision as bug detection, because that's pretty much all it does. Frogs are way more complex than, like, hydras or something--and yet their vision is stereotyped toward a certain function. That kind of limits their nutrition though. Can you imagine, only being able to eat when something moves a certain way in front of you? But add a little complexity--detectors that work even when something is still, detectors that pick up on certain color schemes--and suddenly you've got a world of plants to eat as well.

What's wild to me about that last bit is, the world begins to respond to your expanding senses. Bushes with brighter berries get their berries eaten, and seeds distributed, better than some bland ugly berries. So suddenly there's this evolutionary pressure to get prettier. Brighter, more vivid colors. More sugar. And that's a balance, because you-as-bush really need that sugar for your own individual survival, but if you can just pump a little more into your berries, suddenly your lineage is much more liable to survive.

In other words, there is an evolutionary pressure on both sides--we who eat want to sense more, with finer levels of discrimination along many more senses--and what we eat wants to fill up those senses. (Or, y'know, wants us to leave it alone, so develops bitterness in the leaves--but sense data is the language we're communicating in.)

I think--maybe I'm wrong about this--but I think it's hard to imagine a creature that had great sensory complexity with no corresponding behavioral complexity. All this stuff has to be processed somewhere. Creatures without brains have to have pretty simple, schematic nervous systems, with a fairly constrained number of behaviors they can follow. There's just nowhere to put all this data. But once you begin processing the data in a central location, you break that earlier one-to-one correspondence between sense data and behavior. You're processing all these colors and flavors, but to earn its keep, your brain has to assimilate all this, has to tell you something about them, has to add information that wasn't there before, to guide the next action.

And I think the thing your brain tells you about them, is the subjective experience. Now there is a little story, even if it's pre-verbal, even if it's unconscious. "I ate the honey, and it tasted good and was nourishing, so the next time I find it I'll eat it again."
posted by mittens at 11:23 AM on January 31 [1 favorite]


Some people are content to say conscious experience is just an integral and inevitable aspect of certain types of brain activity that exists because it has to and feels the way it does because, well, being aware of your perceptions has to feel like something and this is just what it happens to feel like. End of story.

To me, that's a lot like saying, "You want to know what heat is and why it radiates from a fire? Why, it's just an integral aspect of fire. There's no need to ask what it is. It's what you feel when you stand near a fire. It radiates from a fire because radiating heat is part of what fire is. End of story."

It's true that radiating heat is an integral and inevitable aspect of what fire is, but if you leave it at that you miss out on a lot of interesting and useful information about energy and electromagnetic radiation. Some people (looking at you, flabdablet and Ryvar) don't seem to care about getting that deeper level of explanation for conscious experience. Or maybe they don't believe any deeper explanation is possible. Or maybe they really are zombies who lack qualia so they don't understand what it is that needs to be explained.

But some of us are curious about how the whole system works. If our brains are causing conscious experience, what exactly has to happen to make the experience happen? Can we predict at what level of complexity an AI might start to have experiences? Can we tell which organisms have experiences? Is Ryvar right that everyone's qualia are different? How different? Could that mean that when you detect the wavelengths of light I call red, you have the experience I call seeing blue? Could it mean that you have the experience I call smelling lemon? Could you be having some type of experience I have no name for because I've never experienced it?

If you aren't interested in those questions, fine, but maybe you can let other people be curious about them without assuming they're just trying to convince themselves they have immortal souls.
posted by Redstart at 11:41 AM on January 31 [1 favorite]


The fact that I ascribe properties of 'unpleasant' and 'stimulating' to both of them then creates "the way pain feels."

No, that creates knowledge that you are experiencing pain, not the experience itself. Everything you describe about how pain works could happen in a robot running a sophisticated program that allows it to detect potentially damaging things happening to it. It could identify which receptors were being stimulated, use its stimulus analyzing algorithm to characterize what it detected as unpleasant and activating, and then move away from the thing that was damaging it while saying, "Ouch, that hurts!" And it could do all that without ever feeling the sensation you call pain. When you get hurt, isn't there a sensation you imagine robots lacking, a sensation that is a completely different thing from the knowledge that you are in pain?

The organs evolved in different directions--to pick up chemicals or vibrations from the air, to detect heat or light. The information they provide is keyed to the outside stimulus. So . . . we use the language associated with the particular organs.

Sure, but is there an inevitable link between the type of sense organ and the type of sensation? When our retinas detect light, signals get sent to our brain with information about the wavelengths detected and we see colors. But could there be aliens on another planet whose brains produce a different type of sensation when light is detected - the type of sensation we call sound, for instance? Different sounds for different wavelengths. Could there even be humans on this planet who experience the type of sensation you call sound when they detect light? They would use the language associated with eyes and seeing, of course, to describe that sensation. They would refer to the different sounds as colors. We all use the same language - colors, visual images, etc. - to describe what happens when our eyes perceive something. But does that language always refer to the same type of subjective experience? We don't know.
posted by Redstart at 12:08 PM on January 31 [1 favorite]


I don't find any of that stuff even slightly mysterious. The privacy of my own experience is exactly what I would expect given my structure as an autonomous animal.

It's not mysterious that it's private; it's mysterious because it's private.
posted by Redstart at 4:04 PM on January 31


No, that creates knowledge that you are experiencing pain, not the experience itself.

I am not sure what to do with this sentence. The process I was describing takes place after having been injured, so if this process creates knowledge that I am experiencing pain, and that knowledge is separate from subjectivity, then you're locating subjectivity in a very narrow, low-latency gap between two brain-specific processes. Again I'd have to ask...what do people think happens in that gap?

Everything you describe about how pain works could happen in a robot running a sophisticated program that allows it to detect potentially damaging things happening to it.

But this is just p-zombies again. "What if we designed a material being that does everything humans do but lacks this thing I insist is non-material?" I can imagine creating a robot that responds like a flatworm--bad stimulus, avoid, avoid--that lacks subjectivity. I cannot imagine creating a robot that responds to pain by retracting its claw, classifying the sensation according to multiple axes that wind up in a bucket-group called 'pain,' processes all the narrative information (where was I when it happened, what was I doing, was that table leg always in the way), plus all the various memory-formation and emotion-formation, yet that lacked subjectivity. I don't know what subjectivity can mean, without these functions. I don't know what the this-ness is of an experience, if you subtract the things my senses and brain do about the event. I cannot imagine having an experience that I do not have knowledge of.

is there an inevitable link between the type of sense organ and the type of sensation?

I am not sure what the point of having sense organs is, if they're not providing particular and specific types of sensations? Why not just have one big all-seeing, all-hearing nose, in that case, rather than evolving these other bits?
posted by mittens at 4:55 PM on January 31


you're locating subjectivity in a very narrow, low-latency gap between two brain-specific processes. Again I'd have to ask...what do people think happens in that gap?

No one knows what's happening. That's why people find conscious experience mysterious. We don't know how it relates to what's happening in the brain. I don't know that it's accurate to say subjective experience happens in a gap between brain processes. Maybe it's simultaneous with brain processes.

I am not sure what the point of having sense organs is, if they're not providing particular and specific types of sensations?

Obviously it's helpful to have different sense organs that are sensing different things and it seems helpful for the experiences connected to the different types of perception to be different, so you can tell the difference between seeing and hearing and smelling. But is there a reason that perception of light has to give you the experience of seeing patches of color while perception of sound has to give you the experience of hearing? Could it be the other way around? Remember, the sense organs aren't providing the sensations directly. They're sending messages to your brain and your brain is doing something that somehow results in you having an experience.
posted by Redstart at 5:34 PM on January 31


It's true that radiating heat is an integral and inevitable aspect of what fire is

Right, which is the point of my having brought up the fire analogy in the first place.

if you leave it at that you miss out on a lot of interesting and useful information about energy and electromagnetic radiation.

Right, which is why I object to the idea that any method of inquiry must in principle be declared inapplicable to investigating the analogue of heat in that analogy.

Some people (looking at you, flabdablet and Ryvar) don't seem to care about getting that deeper level of explanation for conscious experience.

It is obviously the case that there remains much to learn about consciousness as a phenomenon. But that does not, in and of itself, make consciousness any more inherently mysterious than any other phenomenon about which there remains much to learn, which is basically all of them.

A huge and growing body of knowledge about consciousness as a phenomenon already exists, almost all of which is the result of patient and careful scientific inquiry in fields as disparate as physics and psychology. I think it's reasonable to expect that at some point - probably not within my own lifetime, though - that body of knowledge will become comprehensive enough to allow conscious systems to be implemented as a technology.

I also think it's reasonable to expect that the technology involved is going to resemble biology to a much greater extent than any of today's IT does, due to time- and power-efficiency constraints. Which strikes me as a little bit hilarious, given that we already know how to implement conscious systems as biology, but I can also appreciate the desire to build Rube Goldberg machines for their own sake as a valid form of artistic expression.

Or maybe they don't believe any deeper explanation is possible. Or maybe they really are zombies who lack qualia so they don't understand what it is that needs to be explained.

If there's one thing I'm overwhelmingly confident about, it is that even after somebody has actually built a conscious system from scratch, there will be people who insist that any system so constructed must in principle be a p-zombie because it lacks the mysterious, ineffable, goalpost-shifting quality that only those dedicated to the deepest study of consciousness are completely sure exists exclusively within themselves. Looking at you, Redstart, if you'll pardon my rudeness.

At some point, insisting that an explanation actually isn't an explanation just becomes perverse.

But some of us are curious about how the whole system works.

Curious enough to actually put in the work to find out, or to get familiar with all the work that other people have put into finding out, though? Or merely arrogant enough to assert that all those folks who are putting in that work are missing some ill-defined "point" and therefore wasting their time as a matter of principle?

If our brains are causing conscious experience, what exactly has to happen to make the experience happen?

Good question. What kind of testing might one perform in order to answer it?

Can we predict at what level of complexity an AI might start to have experiences?

No, because complexity doesn't really have anything that it would be reasonable to describe as a "level", and even if it did, there's no a priori reason to expect that the behaviour of any given pair of complex systems whose only commonality was some kind of "level" metric would share any specific functionality.

Can we tell which organisms have experiences?

Yes, but only to the extent of being able to draw conclusions from externally observable structure and behaviour, and only by having the humility to understand that there must always be room for reasonable people to disagree about edge cases.

Is Ryvar right that everyone's qualia are different?

Yes.

How different?

In (very loosely) the same way as EBCDIC and ASCII are different.
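To make that analogy concrete, here's a toy Python sketch (the encodings are real; their mapping onto qualia is, of course, only a metaphor): the "same" text is carried by completely different byte values under ASCII and EBCDIC, yet each scheme round-trips its own bytes back to identical text.

```python
# The "same" text, represented under two incompatible encoding schemes.
text = "RED"

ascii_bytes = text.encode("ascii")    # ASCII:  R=0x52, E=0x45, D=0x44
ebcdic_bytes = text.encode("cp037")   # EBCDIC (IBM code page 037)

print(ascii_bytes.hex())    # 524544
print(ebcdic_bytes.hex())   # d9c5c4 -- entirely different byte values

# Each scheme decodes its own bytes back to the identical text:
assert ascii_bytes.decode("ascii") == ebcdic_bytes.decode("cp037") == "RED"
```

Neither byte stream is "wrong"; they're just mutually unintelligible without translation, which is roughly the sense of "different" intended here.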

Could that mean that when you detect the wavelengths of light I call red, you have the experience I call seeing blue?

No. The experiences I have are all mine, and the experience you call seeing blue is all yours.

Could it mean that you have the experience I call smelling lemon?

No. See above. Part of somebody's red-wavelength-perceiving experience might in principle include smelling lemon (see synaesthesia, above) but as far as I recall I've not noticed any such correlation within my own.

Could you be having some type of experience I have no name for because I've never experienced it?

Yes.

I don't know that it's accurate to say subjective experience happens in a gap between brain processes. Maybe it's simultaneous with brain processes.

...and a glimmer of consensus begins to emerge.
posted by flabdablet at 9:57 PM on January 31 [1 favorite]


But is there a reason that perception of light has to give you the experience of seeing patches of color while perception of sound has to give you the experience of hearing?

Words mean things. Perception of light is the experience of seeing patches of color. Perception of sound is the experience of hearing.

Could it be the other way around?

Sure. Talking in code is a thing.

Remember, the sense organs aren't providing the sensations directly. They're sending messages to your brain and your brain is doing something that somehow results in you having an experience.

And if you actually care about knowing how any of that stuff works more than you care about maintaining a sense of the beauty of its ongoing mystery, neurology is a really good place to start.

One of the first things that neurobiology will teach you is that drawing system boundaries that conceptually separate sense organs from brain is an impediment, rather than an aid, to understanding the ways in which conscious biological systems respond to information they collect from their environments. The processing starts early.
posted by flabdablet at 10:24 PM on January 31 [1 favorite]


And if you actually care about knowing how any of that stuff works more than you care about maintaining a sense of the beauty of its ongoing mystery, neurology is a really good place to start.

Thank you. I backspaced over a lot of nerd petulance this morning and came back later to find it handled by someone not faking adulthood. Or at least faking it way better than me.

Some lingering mild petulance follows:
Redstart, flabdablet and I are at the tail end of most of these threads precisely because we’re curious about consciousness, and also because unlike most of Metafilter’s actual neuroscientists / deep learning experts, there are zero consequences for us if it turns out we were loudly and publicly mistaken.

I respect flabdablet because I know that, like me, he has done the reading on all the liminal spaces between human neurology and AI and at some point seriously grappled with the AI toolkit; the reason we’re not asking those questions isn’t because we don’t care or are philosophical zombies, but because the broader meta-field of cognitive science has a ton of fairly good answers or at least reasonable guesses that fit all the available evidence. But that kind of informed speculation, no matter how solidly reasoned, isn’t stuff scientists can print without seriously jeopardizing their careers.

We are merely armchair experts, but at least deeply informed ones, and we have the freedom to voice our (in my case) inadequate and sometimes incorrect musings; things the actual experts suspect but mostly can’t say. (Caveat: logicpunk is an actual expert in the field and in other threads he and I have been about as opposed as two people with a good education on the fundamentals can get, at least on human / AI neural network structural similarities)

Storytime:
Back in ’99 a small pack of undergrads (and a pair of grad students intent on academic suicide) at RPI’s Minds and Machines lab put together a rough consensus model of how the human mind works and how we could go about reproducing much of it with machines. It was speculative but detailed, and remarkably internally consistent. The following twenty-five years of blood, sweat and tears from actual neuroscientists, and the 2015+ explosion in deep learning / deeply-layered artificial neural networks have slowly begun to prove that our consensus model was 90~95% correct, despite our having zero ability to prove any of it at the time.

I ragequit at the tail end of my sophomore year with 90% of a Comp Sci degree and 50% of a psych degree in large part because the formal logicians - the people who actually have the mindset you’re so opposed to - still ran most cognitive science departments at the tail end of the AI Winter and were vehemently opposed to bottom-up (neural) approaches. They wanted to quietly code LISP until they retired and for my part the loathing was mutual. And it’s largely because of them and their corresponding departmental heads elsewhere that cognitive science eventually dissolved back into its constituent fields (neuroscience, computer science, psychology, linguistics, and …applied philosophy, I guess).

I can’t speak for flabdablet, but I have a very solid career now as a game developer and there are zero consequences if I’m wrong, but I do my best to keep tabs on the latest - it’s impossible to stay fully up to date even for fulltime academics in the field right now - and what I’m telling you comes from that perspective. I am comfortable saying that there is vastly less mystery about the nature of consciousness than you believe, and that you need to do an enormous amount of reading to understand why those questions can be considered tentatively settled. There is definitely some ugly to reconciling how the model of mind as a continuous what-comes-next predictor can coexist with the whole perpetually-authoring-a-false-narrative that we all perform. And a whole lot of no-fucking-clue when it comes to how humans recognize when it’s time to exit their 98%-of-the-time autopiloting on routine and pattern-matching, and switch gears into deep introspection and metacognition. Those are closer to the unanswered questions, insofar as I’m aware (and I may simply be wrong on this - probably in a “there are good speculative answers to those as well” direction, and that’s okay).

I don’t personally have even the slightest guess as to either question, but I can’t wait to see what answers each new wave of bright young academics - now with far better support structures than we had - will come up with. I still find it all endlessly fascinating, and while I positively drool over the salaries people with deep learning PhDs now command, I think I’m overall much happier not having to worry about being wrong.
posted by Ryvar at 2:29 AM on February 1 [3 favorites]


I need to get back to game developing but wanted to take a moment for a well-deserved mittens shoutout:
You're processing all these colors and flavors, but to earn its keep, your brain has to assimilate all this, has to tell you something about them, has to add information that wasn't there before, to guide the next action.
Spot fucking on. The human brain consumes a ridiculous amount of energy/nutrition and makes the birth process far more risky when compared to other mammals. We spend way too much on this shit to not use it. My speculation on why metacognition and introspection are so comparatively rare is that they either raise energy costs, or - in the context of a continually fine-tuning network - they run the risk of massive changes to the structure of the network. If everything you hated about socialism turns out to be a lie, then the changes to every synapse that lights up when you think about socialism also change the dozens if not hundreds of other concepts and skills that share that synapse in fuzzy n-dimensional overlap. You can compensate for this with redundancy - and the routine mild damage from banging your head or one too many at the bar is clear indication there’s a fair amount of redundancy or capacity to reconstruct - but the risk of major shifts yielding incoherence and loss of function on dozens of seemingly unrelated topics/skills/memories is entirely real. Feeling exhausted after too much perspective-changing conversation might be a survival trait if the practical function of dreaming is reconciling the network back to minimum local loss.

None of that tells us how we recognize the time for introspection has come, it just suggests why we have a strong tendency to avoid it.
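That interference risk can be sketched with a deliberately tiny toy model (everything here is hypothetical illustration, not neuroscience): train a shared weight vector on one "concept", then fine-tune it hard on a second concept that reuses one of the same features, and the first concept degrades even though its data was never touched again.

```python
# Toy sketch of interference in a shared representation (pure illustration,
# not neuroscience): one weight vector stands in for overlapping synapses.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train(w, x, y, steps=200, lr=0.1):
    # Plain gradient descent on squared error for a single example.
    for _ in range(steps):
        err = dot(w, x) - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

w = [0.0, 0.0, 0.0, 0.0]

# Two "concepts" whose representations overlap on feature 1:
x_a, y_a = [1.0, 1.0, 0.0, 0.0], 1.0
x_b, y_b = [0.0, 1.0, 1.0, 0.0], -1.0

w = train(w, x_a, y_a)                 # learn concept A
loss_a_before = (dot(w, x_a) - y_a) ** 2

w = train(w, x_b, y_b)                 # now fine-tune hard on concept B
loss_a_after = (dot(w, x_a) - y_a) ** 2

print(loss_a_before < 1e-6)   # True: A was learned essentially perfectly
print(loss_a_after > 0.1)     # True: learning B quietly damaged A
```

The point of the sketch is only that shared parameters make "updating one belief" a non-local operation; redundancy and consolidation (sleep, in the speculation above) are ways of paying down that risk.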
posted by Ryvar at 3:18 AM on February 1 [1 favorite]


Is there any equivalent of Sompayrac's How the Immune System Works, but for neurobiology? A med-student level summary of the field that's kept reasonably up-to-date?
posted by clawsoon at 4:59 AM on February 1


(The summary for How the Immune System Works says "without any confusing jargon or complex technical details", but it does get into MHCs and IgAs and APCs and TCRs and IFNs and a bunch of other TLAs. So it's technical, but not overwhelmingly technical for someone with a biology background. That's the level I'd be looking for.)
posted by clawsoon at 5:04 AM on February 1


Not that I’m aware of - see also: the usual amateur enthusiast problem of only seeing what’s in your immediate zone of interest or stuff a couple friends from the old crew know you’ll like.

This looks like a pretty good review from 2021, though. Nicely bidirectional in that it includes inspirations in neuroscience originally from ANN insights.
posted by Ryvar at 8:12 AM on February 1


Perception of light is the experience of seeing patches of color. Perception of sound is the experience of hearing.

I'm not sure whether I agree with this or not because I'm not sure what meaning you intend these words to have. My first impulse was to point out that perception of light doesn't have to be connected to the experience of seeing patches of color. You could imagine some alien creature or possibly even another person who, when their retinas perceived light, then had the kind of experience I have when my ears detect sound waves. But of course if they were English speaking humans they would call what they were doing seeing and they would call the clusters of different sounds they heard coming from different locations patches of color. Maybe that's what you mean.

I have no doubt that you and Ryvar are smart people who know a lot, but I don't think you really understand the arguments you're arguing against. You're probably thinking, "Of course I don't understand them! They're illogical and incoherent!" And that's okay with me. I'm not arguing with any expectation that I'm going to change your mind. I'm just arguing for the benefit of anyone reading along who isn't sure what they think about consciousness and might find it helpful to hear some different points of view. (And because I like to argue.)
posted by Redstart at 11:27 AM on February 1


Hey, if you reached the end of this thread and felt your blood pressure rise--or if you just think that we talked too much about heads, and not enough about hearts--here's a fun paper about the way "central neurons can feel the pulse within the brain." Which is probably the most panic-attack-inducing thing I've ever heard.
posted by mittens at 11:46 AM on February 1


I have only seen scientism used as a pejorative by flat earthers in their attacks on scientists and generally accepted scientific theories. I assume that it has another meaning here; can you clarify?

Sure, I'm talking about scientism in the philosophical sense, not as a pejorative toward scientists or scientific theories generally. I don't know if I can do better than Wikipedia in saying that scientism refers to a belief in:
the universal applicability of the scientific method, and the opinion that empirical science constitutes the most authoritative worldview or the most valuable part of human learning, sometimes to the complete exclusion of other opinions, such as historical, philosophical, economic or cultural opinions. It has been defined as "the view that the characteristic inductive methods of the natural sciences are the only source of genuine factual knowledge and, in particular, that they alone can yield true knowledge about man and society". The term scientism is also used by historians, philosophers, and cultural critics to highlight the possible dangers of lapses towards excessive reductionism with respect to all topics of human knowledge.
posted by Artifice_Eternity at 1:06 PM on February 1


I guess what I'm having trouble grasping is, aside from the attention part of my brain highlighting what I'm receiving from my senses--and subtracting all the language going on in my head about it--what else is there, about this subjective experience I'm having? What more is required?

You've hit on one of the classic philosophical questions about consciousness. Here's the thing: If you add together the physical phenomena you're perceiving, your sensory organs sending impulses to your brain, and the language you apply to it, do those add up to explain why you have a subjective experience? It's pretty clear that they don't.

We can now build machines that can "look" at objects, process the images, and apply verbal labels to them (we've all spent years training them with our CAPTCHA responses, LOL). Why would anyone assume that the sequential occurrence of those processes would somehow generate the experience of awareness?

There's something else going on in our minds. That something else is consciousness. I'm afraid I have more questions than answers about what it is or how it works.
posted by Artifice_Eternity at 1:11 PM on February 1


What if it turned out to be not so much a principle as a Just So story like phlogiston?

I'm not talking about a story, I'm talking about subjective awareness of existence, which I (and I assume you) have every day.

The "Just So story" is pretending that an explanation of a bunch of tangential material processes somehow explains the nature of such subjective awareness.
posted by Artifice_Eternity at 1:14 PM on February 1


Continuing to catch up on the thread: I appreciate what Redstart said here:

Why does that particular type of nerve input lead to that particular sensation and not, say, the sensation of an extremely disgusting smell or an extremely loud and unpleasant sound? Why does it lead to any sensation at all?

But I think the sentence I bolded above is really the key question. Chasing the other stuff just leads to distractions about synesthesia, the evolutionary advantage of pain responses, etc. None of that is really relevant here. What's relevant is the question of why we experience anything.

It seems clear that most of the physical functions that our sensory apparatus aid us in performing could be performed with simple feedback mechanisms not requiring subjective awareness. In other words, most living things, possibly including us, could get by doing most of what we need to do as "zombies" without interior experiences.

And yet we have interior experiences. There is a "there" (or maybe more accurately, a "here") in there.

I realize that there are people who are deeply incurious about that fact, and people who assume that the documentation of a bunch of neurological trivia can add up to an "explanation" of why there's a light on inside our minds. And I realize that these are attitudes that are difficult, perhaps impossible, to budge people off of. Still, I'll never cease to be amazed that people hold them.
posted by Artifice_Eternity at 1:25 PM on February 1


You could imagine some alien creature or possibly even another person who, when their retinas perceived light, then had the kind of experience I have when my ears detect sound waves.

Even leaving aside the absolutely sound arguments that Ryvar made above about just how broad-brush that "kind of experience" comparison needs to be, and even leaving aside what I consider to be a complete misapprehension of the physical locus of perception (in my view it's the whole organism that performs perception, not just its retinas), what I already know about the structure and operation of retinas and cochleas, and about image processing and feature recognition and audio filtering and sampling theory and the relationships between frequency, wavelength and propagation speed, would rapidly lead me to classify any such imagining as an amusing fiction rather than a serious candidate for worldview expansion.

If you add together the physical phenomena you're perceiving, your sensory organs sending impulses to your brain, and the language you apply to it, do those add up to explain why you have a subjective experience? It's pretty clear that they don't.

Pretty clear to me that the fact of subjective experience is necessarily logically prior to any process of explanation-seeking and logically prior to any division of experience via distinctions such as self vs non-self, physical vs non-physical, sensory organs vs everything that isn't sensory organs, brains vs everything that isn't brains, language vs everything that isn't language and so forth.

Which gives rise to the question: what do you mean by "why"? Does any form of words that you would consider to be a satisfactory answer to "why" in fact exist at all? Can any such form of words in fact exist, or is it simply the case that the "mystery" arising from this inquiry is a logical consequence of the relationship between the question itself and that within which it arises?

There's a game that pretty much every kid invents at some point to torment their parents with, and that's the "but why?" game. Once the parent enters the game with an answer to the kid's initial question, the kid then responds to everything the parent then says with the words "but why?" from a face shining with innocent wonder. The game goes on until one player - most often the parent - decides they've had enough of it.

The interesting thing about that game is that given any half-respectable degree of parental patience, the kid will have completely lost track of the original question and any network of answers to it well before the game ends, and the only question actually remaining inside the kid's mind will be "how long can I keep this running before Dad shuts it down?"

This happens because any genuinely useful answer to any "why" question needs to be formulated using concepts with which the questioner is already familiar. Sometimes, the process of finding an answer causes that "click" that I'm sure we've all felt, where a bunch of previously apparently disparate concepts subtly rearrange themselves into parts of a greater whole, Magic Eye 3D picture style, and we get that lovely dopamine hit of genuine understanding as a new, larger concept is born.

But there's only so much genuinely useful answer formulation that the parent - a person who has been collecting and building concepts for way longer than the kid - can possibly do before running out of concepts that they have any reason to believe that their kid would have access to; and if they persist beyond that point, then from the kid's perspective the parent is now just gibbering in Adult Word Salad and the game devolves as described.

"Why" is a super useful question, but like all questions, it has limits on reasonable applicability even in its most general form. Recognizing that such limits exist, and seeking to understand their nature, does not amount to a lack of curiosity; quite the opposite, in fact. We're all, I think, susceptible to running our own fully internal instance of the "but why?" game and if we don't notice that we're doing that, then every time it happens, we're going to be wasting time that we could be using to expand our repertoire of understandings instead.

The "click" of understanding, and its associated dopamine hit, is also worthy of scrutiny. I know from direct personal experience with psychosis that it cannot be relied upon to distinguish truth from falsehood. Epiphany demands fact-checking; truth is not beauty and beauty is not truth. Relying on truthiness as a proxy for truth is really hazardous. Do you want MAGA? Because that's how you get MAGA.

But I digress. The purpose of explanation is to relate what we seek to understand back to what we already know, hopefully in a manner that then lets us recognize things previously considered disparate as instances of some larger principle. But if that for which we seek explanation is itself already the most comprehensive unification that's logically possible - the all-encompassing observation that "this is" - then it becomes logically impossible to generate an explanation that is in any way different from that which we seek to explain. That's not mysterious, that's just logic.

And yet we have interior experiences. There is a "there" (or maybe more accurately, a "here") in there.

There's a "this" that is. We can't even begin with "in here" or "out there" before accepting that as the page upon which we can then begin to draw.

I realize that there are people who are deeply incurious about that fact, and people who assume that the documentation of a bunch of neurological trivia can add up to an "explanation" of why there's a light on inside our minds.

Note well: explanation is not exploration. Curiosity is the desire to explore - to find out what is there.

I realize that there are people who are so deeply incurious about their own nature and that of the wider world that they're willing to write off centuries of persistent, systematic, dedicated exploration work as "trivia" on no better basis than that they are unwilling to engage in much such work themselves. And I realize that these are attitudes that are difficult, perhaps impossible, to budge people off of. Still, I'll never cease to be saddened that people hold them.
posted by flabdablet at 10:04 PM on February 1 [1 favorite]


Recapping my position on consciousness, in case it's not already clear:

1. Consciousness is a specific kind of behaviour exhibited by a specific kind of structure.

2. You and I are instances of that specific kind of structure.

3. The further that the structure of any system departs from isomorphism with our own, the lower become the chances that it will be capable of exhibiting the specific behaviours to which we apply the word "consciousness", and the weaker becomes the justification for applying the word "conscious" to it.

4. There is no in-principle reason why a conscious structure could not be constructed as a technological artifact.

5. No such technological artifact has yet been constructed, and I think that the complexity required is going to preclude its construction until long after I die. That said, I am generally supportive of cathedral-building. Cathedrals are cool.

6. An awful lot of people continue to insist that building cathedrals is in-principle impossible, despite never having stacked one stone atop another at any time in their whole lives.
posted by flabdablet at 10:50 PM on February 1 [3 favorites]


here's a fun paper about the way "central neurons can feel the pulse within the brain."

I feel so validated right now.
posted by flabdablet at 11:33 PM on February 1 [1 favorite]


A slightly different angle: if the only available answer to some particular question turns out to have the form of an infinite regress, and the fact that the regress is indeed infinite is not noticed by the quester, then the amount of time for which contemplating the question feels worthwhile becomes very large. The fact of being able to spend apparently unbounded amounts of time in such contemplation, or having spent large amounts of time in contemplation without making progress, makes the question feel much deeper than those answerable without infinite regress.

But it turns out that if one simply refuses to allow one's mind to be blown by the idea of infinite regress and infinity generally, and instead reacts to its discomfiting nature by spending time on serious exploration (including introspection) of the opportunities it presents for the human mind to jump out of repetitious behaviour that would otherwise risk locking it up, one can learn all kinds of useful stuff. Including, for what it's worth, the nature of putting and answering questions and the value of the distinction between consciousness-as-behaviour and conscious-entity-as-structure.
posted by flabdablet at 1:15 AM on February 2 [2 favorites]


The perennial question of why there is something rather than nothing is also one of those infinite-regress things, by the way.

Every philosopher ever: Why is there something rather than nothing?

Me: Who's asking?

EPE: That would be me.

Me: Are you something, or are you nothing?

EPE: I'm something, all right! Look upon my published works, ye slighty, and despair. I get cited all the time!

Me: Well, there you go then. There is something because any entity capable of seeking that reason in the first place is an instance of it.

EPE: No, you're just being deliberately obtuse. The question is why any entity needs to exist at all, whether capable of formulating that question or not.

Me: So you're looking for an answer that would still work even if we stipulate that the existence of an existence-free universe is not merely a glaringly obvious analytical contradiction?

EPE: Yes.

Me (clamping thumbs tightly on EPE's carotid arteries): You'll have it in about five minutes, I expect.
posted by flabdablet at 1:56 AM on February 2 [1 favorite]


The "click" of understanding, and its associated dopamine hit, is also worthy of scrutiny. I know from direct personal experience with psychosis that it cannot be relied upon to distinguish truth from falsehood. Epiphany demands fact-checking; truth is not beauty and beauty is not truth. Relying on truthiness as a proxy for truth is really hazardous. Do you want MAGA? Because that's how you get MAGA.

FWIW, leaving aside the arguments above which I think you handled as well as humanly possible - I have had a single severe psychotic episode (hours of full sensory hallucination), decades ago, and I’m really sorry to hear you’ve had any remotely similar experience. Like you, it taught me to double-check that “click” of epiphany, even though I will never stop seeking it.

On the plus side, that episode is what snapped the feedback loop of evangelical fundamentalism I was raised/trapped in long enough for me to do a full ideological reset. It’s why I wound up an Intersectional Diet Marxist on Metafilter instead of MAGA or QAnon. So: it fucking sucked, but it was necessary and ultimately corrected the course of my life to something far more positive. Huge YMMV, though.
posted by Ryvar at 5:09 AM on February 2 [2 favorites]


I feel the need to exit this thread with a massive caveat, for some reason. So here goes: as I admitted above, I do not hold myself to strict scientific rigor on this topic, there is a truckload of “well this supposition neatly ties up five open questions, and offers a general principle that makes it easier to answer future related questions …so let’s assume that, very tentatively, and continue on.” Unsurprisingly a lot of my conversations with experts outside Metafilter - the Dean of Neuroscience at one of the big schools here in Cambridge is a friend, and in this town if you chuck a rock right now you’ll hit two deep learning postgrads - go along the lines of “dude you are running way, way ahead of what the science actually supports right now… that doesn’t mean you’re wrong, it’s very sharp guesswork and I’d even say there’s a good chance you’re on the right track, but…”

I abandoned formal work in this field long ago - really while I was still ramping up - and my goal is not to be perfectly, completely correct. My goal is to die with the most complete possible understanding of how minds work - human or not - that it was even remotely possible for someone born in 1980 to have developed in their lifetime. I will accept the possibility of 10% error if that means going 50% further, but I would not accept, say, 30% possibility of error, or anything that ever contradicted available evidence even slightly, or lacks multiple indicators that it’s How This Shit Actually Works.

This is only okay because I have no intention of ever working in this field or publishing a paper in it or any other field, except possibly something on new and exotic voxel topologies in runtime applications (games) because I have something better than what’s out there and my code started working well recently.

So yeah, for whatever reason I felt the need to issue that disclaimer: I am probably not wrong overall, but as with all convenient epiphanies discussed above you should take my overall understanding of this topic with a medium-sized grain of salt. That neuro-AI survey paper I linked for clawsoon above is chock-full of great stuff that is far more rigorous, and you should read that instead if you want less speculation.
posted by Ryvar at 5:39 AM on February 2


I've also said pretty much all that seems to be called for, so I'm out as well. Thanks, Ryvar; as always, tag-teaming with you has been a delight. Many thanks to mittens as well for your wit and clarity, and to Redstart for asking so many of the same questions that have occupied my own thoughts for such a long time. It's not often I get handed an excuse to pontificate on this stuff and it's really good fun when it happens.

I love that little click of understanding, so my own psychotic break, which was mostly driven by having that click accompany every half-assed idea on any topic, was basically weeks of bliss. I never got frightened at all while mad, because there was always some blindingly obvious reason why whatever was happening had to be happening; it was all part of The Plan that I was amongst the very few who Understood.

The fear came afterwards, once I'd recovered enough to understand just how unmoored I'd become, how many times those who love me had come stupidly close to losing me, and how much stress and work I'd unwittingly put them through. I became terrified of contemplating any of the kinds of topics I recalled having thought about beforehand in case I fell back into insanity, and it took about two years for that terror to subside to the extent that I no longer felt like I was unsafe unless remaining in some kind of intellectual body cast.

These days I tend to distrust long chains of reasoning that leap from abstraction to abstraction; I'd much rather have a solid grasp of the bleedin' obvious than a tenuous inkling of the abstruse. Mystery, to me, is a little red light on my dashboard that tells me that some part of my worldview has gone wrong and requires attention. It's quite distinct from ignorance in that respect.

Ignorance is something I can only expect of myself, given what a tiny little part I am of such a huge and amazing world. But so far, every time I've encountered some real-world phenomenon that does defy explanation, that does not and apparently cannot make sense, dispelling the mystery has required nothing more than work, time, and the rooting out and discarding of unsupportable assumptions. There's a dopamine click available from the abandonment of a superfluous belief that's at least as pleasurable as the one I get from finding new and enticingly stinky fields of inquiry to roll around in.
posted by flabdablet at 6:17 AM on February 2 [1 favorite]


I think some people should read Permutation City by Greg Egan. It is about the idea that patterns themselves are the thing, not the thing the patterns are made out of, and it keeps recursing on that idea far beyond anything plausible.

The idea that the "something else" isn't the electrical impulses of thought but the pattern itself may give a context for people unsatisfied with "merely physical" explanations of consciousness.

This possibility opens up a scientific path to ending the privacy of consciousness. It is theoretically possible to transmit the patterns in our brain to another substrate, and even move them back. It is also possible to duplicate said patterns, possible to connect said patterns to others far more directly than language, and plausible to be able to assign meaning to patterns and transmit that directly.

That is what I meant when I spoke about what could be done to investigate consciousness: when we are able to do things like that, we'll have intelligent agents who can speak to what it feels like to be a pattern moved from a body to a simulation and back into a body.

I am sure we'd have people complaining that their experience wasn't "real", but to me it seems clear that such capabilities (and what happens!) would be further investigation of what consciousness is, how it works, what causes it, etc. And people informed by the results of that knowledge can pose further questions.
posted by NotAYakk at 11:18 AM on February 10


That whole line of reasoning conflates the ideas of consciousness, experience and identity to an extent that I think is unhelpful.

Also, that stepwise speculative upgrade from "far beyond anything plausible" to "may give a context for" to "theoretically possible" to "also possible" to "plausible" to "when we are able to" is merely a slower kind of the same leap that experience has given me good reason not to trust.
posted by flabdablet at 2:33 AM on February 11

