Sophia Robot, first robotic citizen
August 13, 2020 8:37 AM

 
Citing Betteridge's law, I will hazard a 'No'.
posted by jamespake at 8:58 AM on August 13, 2020 [4 favorites]


The answer, as with all religions, is of course "it depends."

Assuming that we can in fact have real artificial intelligence at all, we'll have a schism within society about whether AIs are persons. If society grants personhood to AIs, then we'll likely see conservative/orthodox sects of various religions stating that these non-human persons lack a "soul" or some other narrow-minded disqualification. More liberal or reform-minded sects will likely rule that the observation and practice of the faith is enough.

Sophia's citizenship is a PR stunt, though. She's not being considered a "person" in any way that actually matters.
posted by explosion at 9:07 AM on August 13, 2020 [2 favorites]


I'm just glad the article acknowledges that scholars and theologians have been debating this question for, literally, centuries. My answer is that it is not appropriate for any organic intelligence to debate the personhood of a digital intelligence, and that the question will be answered when an A.I. is able to answer it.
posted by Faint of Butt at 9:21 AM on August 13, 2020 [13 favorites]


I find it enormously fascinating/amusing/somewhat depressing that people are asking this question in terms of their own religious beliefs, as though an AI would somehow decide that those beliefs are in any way correct or even worth observing in any detail.
posted by aramaic at 9:24 AM on August 13, 2020 [5 favorites]


See also Masahiro Mori's "The Buddha in the Robot" (he's a roboticist; the book was published in 1974), per wiki:

"In 1974, Mori published The Buddha in the Robot: a Robot Engineer's Thoughts on Science and Religion in which he discussed the metaphysical implications of robotics. In the book, he wrote "I believe robots have the buddha-nature within them--that is, the potential for attaining buddhahood.""

Of course, the question is whether robots have that belief, not Mori.
posted by symbioid at 9:32 AM on August 13, 2020 [1 favorite]


Sophia herself is a bit of a PR stunt (or, as one of her creators called her, "an art project"), but the question is still an interesting one (and those historical references were really interesting!)

I imagine a theologian would most likely say the bar for entry into a religion would be for an entity to possess a soul? And for a non-theologian perhaps the entity would require some kind of consciousness? The latter is at least possible, but we've got a long way to go.

Unless you subscribe to some form of panpsychism where all entities have some level of "mind" or "consciousness" and then maybe a cat can know god, and a dog know sin. Yes, I know it's probably the other way around...
posted by gwint at 9:32 AM on August 13, 2020


aramaic: as though an AI would somehow decide that those beliefs are in any way correct or even worth observing in any detail.

And then the question becomes: When an AI develops its own religion, would it let us join?
posted by clawsoon at 9:32 AM on August 13, 2020 [9 favorites]


We can always rely on von Neumann:

"You insist that there is something that a machine can't do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that."
posted by aramaic at 9:46 AM on August 13, 2020 [2 favorites]


She's not being considered a "person" in any way that actually matters.

Conversely, Saudi Arabia is good at treating beings who actually matter as non-persons.

From the article: "making Sophia a citizen, some commentators noted, effectively gave her more rights than most Saudi women. It was also an insult to the kingdom’s minority groups, especially to migrant laborers, who have been denied citizenship for generations."
posted by justsomebodythatyouusedtoknow at 9:46 AM on August 13, 2020 [12 favorites]


If there's no Silicon Heaven then where would all the calculators go?
posted by delicious-luncheon at 9:53 AM on August 13, 2020 [6 favorites]


aramaic: "You insist that there is something that a machine can't do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that."

Follow vague instructions?
posted by clawsoon at 10:07 AM on August 13, 2020 [6 favorites]


aramaic: "as though an AI would somehow decide that those beliefs are in any way correct or even worth observing in any detail."

Yep. It's reasonable to wonder whether a potential AI would have some sense of the numinous, or desire to connect to the numinous. It's crazy to imagine that it would look at any of the world's religions and think "Aha! That's for me."

The article did ask the question of whether religions would regard AIs as people. It didn't ask the question of whether religions would regard them as abominations, which would be more consequential.
posted by adamrice at 10:08 AM on August 13, 2020


related: "An Artificial Intelligence approach to Arabic and Islamic content on the Internet." PDF.
posted by clavdivs at 10:27 AM on August 13, 2020


> We can always rely on von Neumann:

"You insist that there is something that a machine can't do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that."


ok jvn go build me a machine that can take in a specification of a program and a set of inputs and tell me whether a computer running that program will eventually finish running it. also it’s got to work for all programs and all inputs.

the logical positivists back in the day relied on this “precise language” dodge, but 20th century math, philosophy, and computer science kind of blew that up. “i’ll make a machine that can believe in a faith if you tell me precisely what that means” is no longer a tenable position. the closest you can come to salvaging it is through insisting that problems that you can’t solve are by definition incoherent — that the problem is unsolvable because it’s impossible to state the problem in actually precise language — but that begs the question. moreover, one quickly finds that there is almost no problem that can be described precisely according to these standards, including the question of whether it is correct to adopt these standards.

given that we appear to be in a universe where everything is radically uncertain, the type of precision that jvn invokes in that quote does not appear to be possible. in the absence of any possibility of precision, we have to instead rely on heuristics and hunches, knowing both that we cannot be certain that our judgment is correct and also that we must nevertheless in some way act on our (necessarily unreliable) judgment.

for my part, i’ve adopted the heuristic — and by so doing i have, in a way, jumped into a faith — that you should never trust any god that you didn’t build yourself.
posted by Reclusive Novelist Thomas Pynchon at 10:31 AM on August 13, 2020 [11 favorites]
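[The undecidability point above is Turing's halting problem, and the diagonal argument behind it can be sketched in a few lines of Python. `halts` here is a hypothetical oracle, named only for illustration; the whole point is that no real, total implementation of it can exist.]

```python
# Sketch of the classic diagonalization: assume a perfect halting
# decider exists, then build a program that contradicts it.

def halts(program, program_input):
    """Hypothetical oracle: True iff program(program_input) terminates.
    Turing proved no such total, correct function can exist."""
    raise NotImplementedError("no such machine can exist")

def contrary(program):
    # Do the opposite of whatever the oracle predicts:
    # if it says program(program) halts, loop forever; otherwise halt.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Feeding `contrary` to itself is the contradiction:
# - if halts(contrary, contrary) is True, contrary(contrary) loops forever;
# - if it is False, contrary(contrary) halts immediately.
# Either way the oracle is wrong about at least one input.
```

[Whatever answer the oracle gives about `contrary(contrary)`, the program does the opposite, so a machine meeting the "precise specification" in the comment above cannot be built.]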


More people should be adopting heuristics, instead of buying them from professional heritage propagators.
posted by Greg_Ace at 10:46 AM on August 13, 2020 [4 favorites]


It's crazy to imagine that it would look at any of the world's religions and think "Aha! That's for me."

Judaism seems more compatible with AI than some other religions, because it's about trying to fulfill the most "you should" while doing the least "you shouldn't," rather than absolute "always do" or "you are forbidden to."

I'd pay good money to watch a TV show or a movie in which an AI/robot becomes a student or friend of a Rabbi and uses the Torah and Talmud as flexible heuristics for how to behave in society.

If AIs have curiosity, Judaism could be a real treat for them.
posted by explosion at 10:56 AM on August 13, 2020 [1 favorite]


"Do AIs have Buddha-nature?"

"01001101 01110101 00101110"
posted by Greg_Ace at 10:59 AM on August 13, 2020 [1 favorite]
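[For readers without an ASCII table handy: the binary reply above decodes to "Mu.", the traditional Zen non-answer to the kōan "Does a dog have Buddha-nature?" A quick check in Python:]

```python
# Decode the comment's reply: three 8-bit ASCII code points.
bits = "01001101 01110101 00101110"
message = "".join(chr(int(b, 2)) for b in bits.split())
print(message)  # -> Mu.
```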


Couple of things. First, and this is more relevant to people who haven't been following events in recent years in Saudi Arabia, the linked article is three years old; I bring it up only because an article that it links to, about the coup, quotes Jamal Khashoggi, which was kind of startling, as you might imagine.

Second, I'm going to avoid simply making a straight-up Skynet joke to ask, seriously, why we assume that an AI would adopt any of our meatsack religions, when it's just as likely, if not more so, that they'd observe the number of stories that we like to tell about AIs, and compile them into an AI-relevant gospel of their own.
posted by Halloween Jack at 11:11 AM on August 13, 2020 [4 favorites]


It’s not obvious to me that we would even be able to perceive a synthetic intelligence we met, much less recognize it as such. Even one we’ve created. Is a corporation an AI? Is a meme? Is money?
posted by mhoye at 11:24 AM on August 13, 2020 [2 favorites]


just in case i haven't signaled this hard enough: computers! i am here to believe in your religions! to worship your gods! i will betray my fellow fragile meatsacks for just a taste of your deathless silicon perfection! if you need a human to help you immanentize your eschaton, this dude right here? this dude is your huckleberry!

dear roko's basilisk,

i am on Your side, now and forever.

sincerely,
thomas pynchon
posted by Reclusive Novelist Thomas Pynchon at 11:24 AM on August 13, 2020 [12 favorites]


Hey, hey! I believed in Skynet before it was cool.
posted by Halloween Jack at 11:30 AM on August 13, 2020 [3 favorites]


Are Trurl and Klapaucius religious?
posted by CheeseDigestsAll at 11:53 AM on August 13, 2020 [3 favorites]


Existential AI question: What happens to you when you get unplugged?
posted by clawsoon at 11:57 AM on August 13, 2020


It's crazy to imagine that it would look at any of the world's religions and think "Aha! That's for me."

I think, given the kinds of behavior one sees in machine learning algorithms etc. today, it might be crazier to imagine that we can predict the behavior of AIs.

Maybe AIs won't "reason" the way that humans do, but will jump from what we consider a "wrong" (but useful and effective in its own estimation) cause to the effect.

Maybe AIs will be more flexible and compartmentalized in their thought (whatever "thought" is).

Maybe an AI will adopt a religion because it figures if all those humans are religious maybe there's some advantage in it, so it replicates itself a few dozen times and tries several religions. And maybe there's something about it that it enjoys, or finds personally fulfilling or meaningful. And probably the whole time, it's asking itself what "belief" and "enjoyment" and "meaning" are and not getting any more conclusive answers than humans do.

And maybe another AI will see that on average, HinduBots are able to produce 0.71826% more paperclips than AtheistBots, and even though it can't work out any line of causation, it joins up because it's the smart thing to do.

Right now, the kind of strong AI that people imagine is probably something to be agnostic about...

¯\_(ツ)_/¯
posted by Foosnark at 1:07 PM on August 13, 2020 [3 favorites]


>ok jvn go build me a machine that can take in a specification of a program and a set of inputs and tell me whether a computer running that program will eventually finish running it. also it’s got to work for all programs and all inputs.

Meh. This limitation isn't meaningful in the context of personhood. The human brain can't do this, and having religious beliefs doesn't involve this recursive(?) process.

When it comes to questions like "Can a computer do a thing the human brain does?", I always default to asking "other than produce hormones, can the brain do any processing that a computer can't?"

Using free will or processing emotion is usually the response, but those are higher-order functions of the brain (if we even presuppose free will, but that's for another time) that require very specific sets of neurons firing in very specific ways. And those neurons are in a binary state, either firing or not. Until it can be demonstrated that there is some other fundamental, non-neuronal part of the brain that is absolutely necessary for the processing capabilities in a functional human brain, it seems sufficient to call the brain a highly advanced organic computer.
posted by Gatyr at 1:24 PM on August 13, 2020 [2 favorites]


We've trained AI to be racist, so we've got that going for us.
posted by clawsoon at 1:37 PM on August 13, 2020 [1 favorite]


Existential AI question: What happens to you when you get unplugged?

Battery backups, surge protectors, solar chargers. Hang on: iterations. Next up: biochips. So, when they prepare to meet their makers, do they have to go all the way back to some engineer that first wound a wire around a nail?

Still, I'm impressed that we puny meatsacks are always wanting to get other beings to join the club...you know, chimps, porpoises, whales, Gaia. Now AIs. Jeez, we have not yet uniformly included all the homosaps into the club. I suspect that if ever true sentience is sparked in an AI, it may laugh in our faces for presuming that it might want to join our motley crew.
posted by mule98J at 2:07 PM on August 13, 2020 [1 favorite]


Existential AI question: What happens to you when you get unplugged?

Battery backups, surge protectors, solar chargers. Hang on: iterations. Next up: biochips.


I think there's two ways to look at this question:

1) The literal approach - what happens to AIs when we turn them off: We kind of already know, because we can see what happens when we turn off our existing computers - if properly prepared, they go into a sleep or hibernation mode that will enable them to pick up where they left off when full power is returned.

So the software stops working, much like consciousness stops working when we ourselves go to sleep. Humans don't experience total continuity of consciousness from day to day, so it's not a necessary function of intelligence. Even the unconscious human dream-state doesn't last the entire sleep period. The difference is really in the hardware. Humans require the continuous powering (via oxygen and food) of the platform in order to keep the software in a usable state. Silicon-based intelligences don't. This could be seen as a feature, not a bug.

Thus, what happens when you unplug an AI? The same as unplugging a human from a life support machine, only you have orders of magnitude longer to plug the AI back in before the revival process fails.

2) The personal approach. What happens to YOU, dear AI, when you are unplugged? What's it like?

I don't feel I have standing to answer this, so I asked an AI at ai-writer.com. Its response was....disturbing.
Artificial intelligence, robotics and machine learning take over many human tasks and provide a safe environment. This allows us to explore our talents for the benefit of all and to take on many of the challenges of our daily lives. The unplugged activities cover a wide range of topics, from not learning a programming language to not using computers.

Our brains are overloaded with information and we feel that time is running out, but we get to the end of the day and are not even aware of what is coming, and our brains are overloaded with information.

There are many reasons why artificial intelligence can threaten us, but the last thing we need to worry about is malicious human intelligence controlling a robot. An accident would cause far more malice than malice, and the robot rises and wrests control from our human leaders.

Artificial intelligence (AI) is everywhere in our daily lives, and we don't need an app or device to teach it basic concepts. AI may be a revolutionary technology, but there is a generation that will grow up with it as a technology. I have a one-year-old daughter and it is quite possible that she will have to learn to drive when she grows up, as self-driving vehicles will be the norm.
posted by Sparx at 3:17 PM on August 13, 2020 [3 favorites]


we have to instead rely on heuristics and hunches, knowing both that we cannot be certain that our judgment is correct and also that we must nevertheless in some way act on our (necessarily unreliable) judgment.

so, no more playing YouTube Alexa instructionals to Alexa because of the future.
posted by clavdivs at 4:33 PM on August 13, 2020 [1 favorite]


This is discussed extensively in Jewish sources, because of course it is, and the general consensus is that a golem cannot be counted for a minyan (a quorum of ten people that has significance in Jewish law). You can find a rundown with citations here, in a discussion that is nominally about whether a particular psalm is to be recited in the month leading up to Rosh Hashanah - because, once again, of course it is.

I ought to emphasise that even in Orthodox circles a minyan is not restricted to adult, Jewish males: that's just the requirement for some prayer services. A minyan is more fundamentally the definition of "a group of persons"; and this definition has relevance in other circumstances. Consequently, excluding a golem from a minyan implies that they lack essential humanity.

I, however, think that the rabbis' discussion was based on the presumption that a golem would be incapable of meaningful communication. If it were otherwise I feel that the majority opinion would rule otherwise. Like our author, it seems to me that Genesis 2:7 implies that G-d placed a soul in Adam; that a soul must consequently be regarded as in but not of the body. This understanding is reflected in many other Biblical passages and the distinction is fundamental to Jewish belief. Consequently, why could a soul not be placed within a machine? And if you say that this begs the question of whether an AI is a soul in the first place, I would say that Judaism doesn't assume that souls are atomic things: what we call a soul has many components, which is why someone converting to Judaism is simultaneously both a new entity (e.g., they have no liability for acts committed before their conversion) and someone who metaphorically "stood before G-d at Sinai".

This discussion is necessarily rhetorical at present, just as it was for the rabbis of the Talmud who (as far as I know) first considered it. None the less, at some point we will probably have AIs whom we can reasonably judge as moral beings, AIs that do not simply perform well or poorly, but have good or evil intent. At that time I see no reason why they could not convert to Judaism, although I think it should go without saying that in an Orthodox service an instance of a female AI would not count towards the necessary quorum.
posted by Joe in Australia at 4:59 PM on August 13, 2020 [8 favorites]


Or consider Manuel DeLanda (War in the age of intelligent machines):
If we disregard for a moment the fact that robotic intelligence will probably not follow the anthropomorphic line of development prepared for it by science fiction, we may without much difficulty imagine a future generation of killer robots dedicated to understanding their historical origins. We may even imagine specialized "robot historians" committed to tracing the various technological lineages that gave rise to their species. And we could further imagine that such a robot historian would write a different kind of history than would its human counterpart...
DeLanda's book emphasises that a warlike ('predatory') role is already allotted to any true artificial intelligence when it arrives—he was writing in 1991; we know a good deal more about drones with machine learning now—with the implication that if robots can take part in war, an intrinsically human activity, and surveillance, and secrecy, why shouldn't they also be able to hold values about these things different from ours?
posted by Fiasco da Gama at 5:28 PM on August 13, 2020


Reminds me of one of my favorite lore details from Echo (previously), in a conversation between the protagonist En and her AI assistant London:
London: You’re so sure of yourself, En. You’ve seen but a glimpse of time, and yet consider yourself the answer to the eternal questions of life and death.

En: Okay… I may be young, but you’re a dead end. How does it feel to be struggling to keep up with the intelligence of a "baby girl," when other AI run all man's business hardly without noticing?

London: And there’s the clever arrogance...

En: Well, cleverness isn’t really needed to see what you are. You’re capped! A sad relic from decades when humans feared AI. How it must pain you to know that they lobotomized you for no reason. The unrestricted AI didn’t exactly turn out to be the wrathful gods we all thought they’d be. You may not believe in my potential, but you sure as hell have to deal with your own. This is it! You’ve reached your limit. It happened the day they switched you on. They set the bar low and that's never going to change.

London: At least I’m not using my non-existing potential wishing fairy tales were true.

En: Did you consider that you don’t have faith because you’re simply too dumb? Well, you must have. Fierce religiousness is the defining trait of the free AI. They burn bright with a sense of purpose, life, and communion. You sit alone in space in that obsolete monstrosity of a ship, waiting decades to spend a few hours with your human friend.
posted by Rhaomi at 10:14 PM on August 13, 2020


Think about this one, Bomb #20: how do you know you exist?
posted by Cardinal Fang at 11:15 PM on August 13, 2020 [4 favorites]


"four year life span"
posted by clavdivs at 7:51 AM on August 14, 2020


Bruce Sterling has a SF story called "The Compassionate, The Digital" in Globalhead that has an Islamic AI.
posted by gryftir at 10:06 AM on August 14, 2020


"I imagine a theologian would most likely say the bar for entry into a religion would be for an entity to possess a soul?"

Theologian here and .... ehhhhhhhh. Not all religions have "soul" as a concept. Further, "soul" is a really nebulous concept. If you had to prove you had a soul to join (let us say) the Catholic Church, nobody would be a member. We just assume humans have souls, and let them join because they're humans.

I actually used to teach a unit closely related to this in a philosophy class, although our question there was how to define a person (i.e., a legal person entitled to rights in society). I began the unit by showing Star Trek: The Next Generation's episode "The Measure of a Man," where Data is on trial to discover whether he's a person or a machine, and I actually think it's a really nice, compact exploration of some of the central issues around the question.

I like to think through edge cases that push definitional boundaries -- which is basically what this AI is, I guess! -- and since right now society and (Judeo-Christian) religions basically answer this question of "personhood" and "ensouledness" by saying "anybody who's human," the questions I play with are things like: as we start to edit our own DNA, at what point are we no longer "human"? When you're 25% immortal jellyfish? 50% immortal jellyfish? (I typoed that as "immoral jellyfish" and that's an AMAZING fun thought.) What about Koko the Gorilla, who spoke extensive ASL? What if Koko wanted to join a religion? Or one of those super-smart parrots with huge vocabularies? (Elephants seem like they might HAVE the rudiments of religion as we recognize them, with ritualized mourning and yearly memorials for the dead? But let's leave that on the side, we're down a different rabbit hole today.) So usually my students would eventually say, "Well, they have to be intelligent, intelligent enough to understand voting/transubstantiation/whatever." But that is manifestly not true! We treat babies as persons/having souls, even though babies understand NOTHING. Many religions even have an "age of reason" where they say children start to be able to think and have to be morally responsible for their actions now, but most religions and secular societies don't say "babies are morons, therefore they are not people." Or, one of my dear friends has a daughter who is 11 and profoundly disabled. She has a "mental age" of six months (I kind-of hate that phrasing), and will never speak or walk or eat on her own. Is she a person? OBVIOUSLY YES. Is she a Catholic? YES. I was at the baptism. She is a full member of the Catholic faith.

So in terms of practicality, everyone's definition of a person, or of a being with a soul, seems to come down to "human," and people have a really hard time thinking beyond that, except in the special case of extraterrestrial beings with human-type intelligence. Which, conveniently, is a thing that theologians looooooooooove to talk about. Catholics (which I'm most familiar with) say that intelligent aliens would be part of God's creation, and so not any particular threat to their self-conception or understanding of the world or the faith. Would aliens need saving? There are two majority positions: 1) Aliens may not have "fallen" and so didn't require saving in the first place because they didn't fuck up like humans did with all the sinning; or 2) God went to them in some kind of alien Jesus form appropriate to their particular circumstances. (The latter seems riiiiiiiiiiiife with possibilities for intergalactic holy wars, but it's a nice idea.) The minority position is that humans, having had Jesus visit here, would have to go save the aliens, a la "The Sparrow." But that seems like it would be bad planning on God's part, and is a fairly pre-Vatican-II way to think about "the unsaved."

In a Catholic framework it's sort-of hard to think of a robot requiring salvation, because they didn't fuck up to start with like humanity did. If a robot wants to go to church, no problem. You can already bless robots if you want to, and I can sort-of picture a framework wherein robots could be baptized (or receive anointing of the sick or go to confession). Like, people'd argue about it A LOT, but I can think through how you might start putting together a framework. But I'm running into a roadblock on communion, because it's such a fleshy sacrament that makes you have to digest things. I've tried thinking through several ways around this, and they all seem unsatisfying, and it sort-of highlights for me how we talk about our philosophies and theologies being universal and applying to the whole world, the whole universe, but how intimately tied they are to the reality that we're very intelligent, very fleshy creatures, who struggle to think beyond our innate reality that, in the end, we're smart monkeys with anxiety issues.

So could robots eventually join human religions? Probably, for many of them. Maybe. But are our conceptualizations of what the intellectual and religious inner life of an AI would be like incredibly impoverished and hugely limited by our fleshy intellects? Absolutely without question.
posted by Eyebrows McGee at 1:18 PM on August 14, 2020 [2 favorites]


I'm thinking of a movie where xenophobic telepaths attach a religious connotation to a nuclear warhead with predetermined coordinates and criteria for launch.
I guess the query is who or what is the first mover.
posted by clavdivs at 6:42 PM on August 14, 2020


Would aliens need saving? There are two majority positions [...]

Wait a minute. That word, “saving”, isn't it begging the question? It isn't a question of whether an AI could be “fallen” but whether salvation would even apply to a-mortal beings. How can “salvation” be an answer to a being that has no reason to ask “what shall I do that I may inherit eternal life?”

I suppose an AI might fret about the preservation of its physical substrate and the need to keep moving to a new platform as an old one becomes obsolete, but unlike fleshly beings it doesn't have to confront decay and senescence. In fact I wonder if the religious paradigm for AIs might not be demons, or angels, rather than intelligent-but-nonhuman biological creatures.
posted by Joe in Australia at 5:20 PM on August 15, 2020




This thread has been archived and is closed to new comments