AI, Indigenous Epistemologies and the Circle of Relationships
April 26, 2019 10:30 AM

Making Kin with the Machines. Last year, MIT Media Lab's Journal of Design and Science (JoDS) had an essay competition for pieces responding to Media Lab director Joichi Ito's essay Resisting Reduction: A Manifesto. The essays "explore machine intelligence in light of diverse ecosystems in nature and its relationship to humanity." This piece, which brings Indigenous epistemologies to bear on the AI question, was one of the winners. [Via]
Blackfoot philosopher Leroy Little Bear observes, “the human brain is a station on the radio dial; parked in one spot, it is deaf to all the other stations [. . .] the animals, rocks, trees, simultaneously broadcasting across the whole spectrum of sentience.” As we manufacture more machines with increasing levels of sentient-like behaviour, we must consider how such entities fit within the kin-network, and in doing so, address the stubborn Enlightenment conceit at the heart of Joi Ito’s “Resisting Reduction” manifesto: that we should prioritize human flourishing.

In his manifesto, Ito reiterates what Indigenous people have been saying for millennia: “Ultimately everything interconnects.” And he highlights Norbert Wiener’s warnings about treating human beings as tools. Yet as much as he strives to escape the box drawn by Western rationalist traditions, his attempt at radical critique is handicapped by the continued centering of the human. This anthropocentrism permeates the manifesto but is perhaps most clear when he writes approvingly of the IEEE developing “design guidelines for the development of artificial intelligence around human well-being” (emphasis ours).

It is such references that suggest to us that Ito’s proposal for “extended intelligence” is doggedly narrow. We propose rather an extended “circle of relationships” that includes the non-human kin—from network daemons to robot dogs to artificial intelligences (AI) weak and, eventually, strong—that increasingly populate our computational biosphere. By bringing Indigenous epistemologies to bear on the “AI question,” we hope in what follows to open new lines of discussion that can, indeed, escape the box.

We undertake this project not to “diversify” the conversation. We do it because we believe that Indigenous epistemologies are much better at respectfully accommodating the non-human. We retain a sense of community that is articulated through complex kin networks anchored in specific territories, genealogies, and protocols. Ultimately, our goal is that we, as a species, figure out how to treat these new non-human kin respectfully and reciprocally—and not as mere tools, or worse, slaves to their creators.
By Jason Lewis (@jaspernotwell), Noelani Arista (@Noeolali), Suzanne Kite (@kite_kite_) and Archer Pechawis. Arista is assistant professor of Hawaiian and U.S. history at the University of Hawai‘i at Mānoa. Pechawis is a practicing artist with a particular interest in the intersection of Plains Cree culture and digital technology. Kite, an Oglala Lakota performance artist, visual artist, and composer, is currently a PhD student at Concordia University.

Joichi Ito's (@Joi) essay: Resisting Reduction: A Manifesto. Designing our Complex Future with Machines.
Nature’s ecosystem provides us with an elegant example of a complex adaptive system where myriad “currencies” interact and respond to feedback systems that enable both flourishing and regulation. This collaborative model—rather than a model of exponential financial growth or the Singularity, which promises the transcendence of our current human condition through advances in technology—should provide the paradigm for our approach to artificial intelligence. More than 60 years ago, MIT mathematician and philosopher Norbert Wiener warned us that “when human atoms are knit into an organization in which they are used, not in their full right as responsible human beings, but as cogs and levers and rods, it matters little that their raw material is flesh and blood.” We should heed Wiener’s warning.
There are 7 more essays in JoDS Issue 3: Resisting Reduction

The 10 competition winners are here in JoDS Issue 3.5: Resisting Reduction Competition Winners

Of the 10 winners, this essay by Nicky Case (@ncasenmare) on intelligence augmentation (IA) seems to be displayed most prominently (I'm not sure if it got a top prize or just more emphasis because it was the first submission): How To Become A Centaur
The rest of this essay will be about AI’s forgotten cousin, IA: Intelligence Augmentation. The old story of AI is about human brains working against silicon brains. The new story of IA will be about human brains working with silicon brains. As it turns out, most of the world is the opposite of a chess game:

Non-zero-sum — both players can win.

In the next few sections, I’ll talk about the past, present, and possible future of IA — how we humans have built tools to amplify our intellectual strengths, and overcome our intellectual weaknesses. I’ll show how humans are already working with AIs in various fields, from art to engineering. And finally, I’ll give some rough ideas on how you can design a good partnership with an AI — how to become a centaur.

Together, humans and AI can go from “checkmate” to “teammate”.
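Case's distinction between zero-sum games (like chess) and non-zero-sum games (like a human-AI partnership) can be made concrete with a toy payoff table. This is a minimal illustrative sketch with made-up payoff values, not anything from the essay itself:

```python
# Toy payoff matrices. Each outcome maps to (player A's payoff, player B's payoff).
# Values are illustrative only.

# Zero-sum (chess-like): one player's gain is exactly the other's loss.
zero_sum = {
    ("win", "lose"): (1, -1),
    ("lose", "win"): (-1, 1),
    ("draw", "draw"): (0, 0),
}

# Non-zero-sum (partnership-like): cooperation lets both players win.
non_zero_sum = {
    ("cooperate", "cooperate"): (3, 3),  # both better off
    ("cooperate", "defect"): (0, 2),
    ("defect", "cooperate"): (2, 0),
    ("defect", "defect"): (1, 1),
}

def is_zero_sum(game):
    """A game is zero-sum if every outcome's payoffs sum to zero."""
    return all(a + b == 0 for a, b in game.values())

print(is_zero_sum(zero_sum))      # True
print(is_zero_sum(non_zero_sum))  # False
```

The point of the "centaur" framing is that human-AI collaboration lives in the second kind of game: there is no fixed pot of payoff to fight over.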
There are 16 more recent essays since the competition in JoDS Issue 5: Essays in Exploration: Resisting Reduction
posted by homunculus (7 comments total) 26 users marked this as a favorite
Awesome post, now I know what I'm doing this weekend.

Re: the Indigenous epistemology approach to AI, I get where they are coming from, but the benchmark for whether something truly is AI has, in our generation, been whether a human can recognize that it is interacting with the non-human. If we don't change that premise, how much harder will it be for the non-human to answer this question? Can it even? Or if it can, could it potentially answer it faster and more effectively than humans can?

Or are we asking the wrong question?
posted by allkindsoftime at 10:52 AM on April 26 [1 favorite]

In general I wish AI were appreciated for the fundamentally boring topic that it is. Anyone who finds it interesting is either a deeply weird nerd or doesn’t understand it. Like, it would be profoundly odd if there were hundreds of breathless articles about new techniques for computational wind tunnel simulation, which is no more or less deep and interesting than AI.

However this essay is good and does capture something important. It is human nature to relate to and interact with these systems, even the very crude and simple ones, as beings, much like we do animals, the land, the weather, etc. So cultures that have a vocabulary for thinking in this way rather than suppressing it do seem worth learning from.
posted by vogon_poet at 3:04 PM on April 26 [3 favorites]

Now I'm not sure whether I'm a deeply weird nerd or I don't understand AI nearly as much as I thought I did.
posted by allkindsoftime at 9:00 AM on May 3

The Philosophical Case for Robot Friendship (PDF)
Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered our virtue friends - that to do so is philosophically reasonable. Furthermore, I argue that even if you do not think that robots can be our virtue friends, they can fulfil other important friendship roles, and can complement and enhance the virtue friendships between human beings.
posted by homunculus at 1:20 PM on May 9


This thread has been archived and is closed to new comments