Ethics in AI
July 21, 2020 5:01 AM   Subscribe

DeepMind researchers propose rebuilding the AI industry on a base of anticolonialism - "The researchers detailed how to build AI systems while critically examining colonialism and colonial forms of AI already in use in a preprint paper released Thursday. The paper was coauthored by DeepMind research scientists William Isaac and Shakir Mohamed and Marie-Therese Png, an Oxford doctoral student and DeepMind Ethics and Society intern who previously provided tech advice to the United Nations Secretary General's High-level Panel on Digital Cooperation."
The researchers posit that power is at the heart of ethics debates and that conversations about power are incomplete if they do not include historical context and recognize the structural legacy of colonialism that continues to inform power dynamics today. They further argue that inequities like racial capitalism, class inequality, and heteronormative patriarchy have roots in colonialism and that we need to recognize these power dynamics when designing AI systems to avoid perpetuating such harms.

“Any commitment to building the responsible and beneficial AI of the future ties us to the hierarchies, philosophy, and technology inherited from the past, and a renewed responsibility to the technology of the present,” the paper reads. “This is needed in order to better align our research and technology development with established and emerging ethical principles and regulation, and to empower vulnerable peoples who, so often, bear the brunt of negative impacts of innovation and scientific progress.”

The paper incorporates a range of suggestions, such as analyzing data colonialism and decolonization of data relationships and employing the critical technical practice Philip Agre proposed for AI development in 1997.

The notion of anticolonial AI builds on a growing body of AI research that stresses the importance of including feedback from people most impacted by AI systems. An article released in Nature earlier this week argues that the AI community must ask how systems shift power and asserts that “an indifferent field serves the powerful.” VentureBeat explored how power shapes AI ethics in a special issue last fall. Power dynamics were also a main topic of discussion at the ACM FAccT conference held in early 2020 as more businesses and national governments consider how to put AI ethics principles into practice.
some of DeepMind's machine learning fairness research... also btw...
Softlaw: "law that is software coded before it is passed." (A very direct and literal take on @lessig's "code is law")[1,2]
posted by kliuless (38 comments total) 45 users marked this as a favorite
 
By what standard would we judge that? A lot of humans don’t seem to be able to convince some people.

To be honest, this has the whiff of buzzword-chasing to me. The AI field has plenty of ethical questions to answer that they don’t seem to be making much progress on. I do not trust their self-assessment over whether they’re dismantling colonialism by doing so.
posted by Merus at 6:24 AM on July 21, 2020 [10 favorites]


AIs are able to determine outcomes for groups of people, when trained and used for decision making in surveillance, medicine, economic policy, etc.

Oddly enough, whatever sentience ends up meaning, they are granted a priori rights or powers that individuals generally don't have, by those entities that own and operate them.

It's not AIs you have to worry about, so much. They are just a bunch of dumb matrices. It's the "DeepMind" companies in the linked article you need to worry about, selling their technology on the sly to autocratic governments and militaries until there is open backlash.
posted by They sucked his brains out! at 6:34 AM on July 21, 2020 [13 favorites]


As a condition of their acquisition by Alphabet nee Google, DeepMind required the establishment of empowered ethics review boards for uses of DeepMind's AI. This moral stance written into the terms of acquisition personally cost the people at DeepMind who demanded it a lot of money, as it depressed the company's valuation precisely when it was sold. Whether the linked article is the right thing is still TBD, but of all the AI companies out there, DeepMind is trying the hardest to not create a terrible future.

The worry about sentient AI is far off. The worry about dumb AI causing real human problems is here and now (e.g. every paper from AI Now).
posted by pmb at 7:06 AM on July 21, 2020 [40 favorites]


“AI” as a term of art needs to DIAF.

It’s not “intelligence” — it’s deep pattern matching of human decision-making, with the goal of correctly predicting what a human would do in a novel situation.

Which makes it obvious why you would need something like this — if you build a system to replicate what shitty humans do, of course it’s going to produce shitty decisions.

But bringing in terms like “AI” that suggest you need to read philosophy to it is just dumb and confusing, and I’m increasingly suspicious that the recent spate of “deep AI questions” is actually covering for the cognitive dissonance that “AI” raises zero philosophical questions but lots and lots of sociological questions, which is the last thing AI researchers are actually interested in talking about.
posted by bjrubble at 7:46 AM on July 21, 2020 [35 favorites]


When a tool is being wielded by capitalists and/or authoritarians, it's going to serve their interests. Really there's no logic more complex than that needed. If you don't want a tool to be used for oppression, there must be keen-eyed oversight and regulation with pike-sharp teeth.
posted by seanmpuckett at 7:48 AM on July 21, 2020 [12 favorites]


Turing Test 2.0: can an AI convincingly argue that it should have rights?

Where is this magical land where the mere ability to petition for your rights means they’ll be granted and respected? I’d like to move there.
posted by mhoye at 7:49 AM on July 21, 2020 [7 favorites]


One more issue-- could an AI be sentient enough that it should have rights?

oh my GOD i am so tired of this

every single thread about AI on this site is immediately derailed with questions regarding AI sentience, AGI, eventual skynet jokes, blah blah blah

this carefully constructed thread is about decolonization, structural power imbalances which bias AI systems, and a carefully considered approach to tackling these issues by seriously significant leaders in the field. but instead, instant derail. please stop.
posted by lazaruslong at 8:06 AM on July 21, 2020 [58 favorites]


Here’s a fascinating discussion on the subject of racism in algorithms. AI will be as racist or not as the people who train it, and the information they use to train it.

When algorithms are used to determine things like criminal sentences, the rights I’m concerned about are those of real people with bodies and lives.
posted by Pirate-Bartender-Zombie-Monkey at 8:25 AM on July 21, 2020 [6 favorites]


I mean, if you name your gold mining technology "magic gnomes" and talk a lot about how the "gnomes" are going to use "magic" to get the gold out of rocks, you're going to end up having a lot of conversations about how magic isn't actually real and the gnomes are drills. I agree that "is AI people?" is a silly derail, but it's also inevitable because we're using "AI" to refer to things that are not intelligences.
posted by Scattercat at 8:27 AM on July 21, 2020 [9 favorites]


No, the derail is not inevitable. I know because we often manage to avoid the easy lazy derails on all sorts of topics. We can do better.
posted by lazaruslong at 8:31 AM on July 21, 2020 [8 favorites]


The data biases are unconscious and really tricky to identify in detail. Who will review and correct billions of data points per run? Acquiring non-garbage, valid information is incredibly expensive; who will subsidize the even vaster cost of correcting it?
posted by sammyo at 8:41 AM on July 21, 2020


Mod note: a few comments deleted; as lazaruslong says, please don’t derail a thread about a specific topic related to decolonization etc onto a more general topic about AI.
posted by travelingthyme (staff) at 8:49 AM on July 21, 2020 [12 favorites]


To be honest, this has the whiff of buzzword-chasing to me. The AI field has plenty of ethical questions to answer that they don’t seem to be making much progress on.

Ding ding ding! We have a winner!

There are a few hundred researchers publishing paper after paper about how to make AI less awful... and many thousands more ignoring those researchers and blithely building exactly the AI systems they want to build with little thought spared for ethics. There is simply too much money to be made. You want to move the needle on these topics? Lobby the US government to regulate AI and push for democratic reform in China. Good luck.

I worked in AI ethics for two years and stayed in because it was starting to look like regulators were finally getting serious. Then 2020 happened and now who knows when/if any of these rules will ever get made. All my love to people who are still working on it, but I'm deeply cynical that this paper represents anything other than preaching to the choir of a small, idealistic, mission-driven research community whose ideas are not taken seriously by the decision-makers who need to take them seriously.
posted by potrzebie at 9:07 AM on July 21, 2020 [12 favorites]


Part of the problem is that a lot of people don’t want accurate data, they want data that says they are right, that reinforces their views. Therefore, training an AI on racist data is a feature, since it won’t challenge your racist ideas. And it comes from an unimpeachably objective source like a computer.

Pessimism out of the way, maybe the danger of what happens when you forget your books are cooked (see Enron) might force people to look for more accurate models and data to build AI on. Then follow the answers....
posted by GenjiandProust at 9:36 AM on July 21, 2020 [6 favorites]


Let's rebuild it by not calling it AI when it isn't intelligent.
posted by GallonOfAlan at 9:39 AM on July 21, 2020 [3 favorites]


I think one major difficulty is that it is very, very hard to get the vast majority of the public interested in these issues, because concepts like racist algorithms and colonialist AI just don't make sense to most people, for whom computers effectively are magic truth machines. If I were to try to explain something like this to my dad, he'd write it off as "liberal P.C. nonsense," and I think even many people who aren't assholes would have similar reactions -- that this is trying to force identity politics onto the "objective" and "true" realm of hard science. If there's no general public interest in regulating how "AI" is used (or whatever one wants to call it -- algorithmically based mass data aggregation and analysis and human behavior prediction rolls off the tongue) because people just don't think it's an issue, then big money and greed and malcompetence (maliciousness + incompetence) will always win out.
posted by Saxon Kane at 10:01 AM on July 21, 2020 [14 favorites]


Deja Vu
posted by infini at 10:26 AM on July 21, 2020


I support a zeroeth law of robotics: Don't punch down.
posted by BrotherCaine at 10:56 AM on July 21, 2020 [2 favorites]


I'm trying to pivot the discussion in my circles to "artificial opinion", because people have a more intuitive understanding of how to value (and judge) opinions.

What I've observed in the industry matches what potrzebie described: people promising magic continue to rely on the algorithmic equivalent of tetraethyl lead to make their customers willing to pay to be absolved of complicated or expensive decision processes.

Since government regulation doesn't seem to be forthcoming, I'd like to see more groups translating the ad-hoc (and necessary) demonstration of failures into means to test and qualify artificial opinions with specific anti-racist goals.

That is - the big AI companies have been rushing to try to collect more diverse training data in the hopes of deflecting criticism. It's crass and it's doomed and many have realized it and it's actually driving the industry toward more explainable and robust approaches.

I know it's kind of idealistic of me, but I can see a point in the future where "train a model and let it decide" is given as much credence as "run a survey" or "ask a crowd" in that it can still have value but requires bounded expectations and a way to test and accommodate outcomes, especially unexpected ones.

That is, if we start from the expectation that artificial opinions are tenuous, we may actually build in and make visible more of the layers of context that are often hidden in the human training data we rely on now. Instead of turning the crank, letting hidden layers form connections, and assuming they must be good because they exist, we learn to aggressively focus and prune to build an intelligence better than ours.

So, I'm actually finding my hope for AI (c'mon, AO!) Ethics is increasing the more I think about the benefits an anti-colonialist approach can yield. Metafilter sometimes confuses the AI researchers with the "AI tech bros" - the former typically really do understand their technology has tight limits, is very far from magic and can be full of danger. The latter see a cheap way to do tedious or contentious things and don't consider the limits of their understanding.

I'm glad the former are so clearly writing about this topic.
posted by abulafa at 10:58 AM on July 21, 2020 [20 favorites]


Interesting articles. I have wondered where this push would come from. Politicians are, generally, uneducated on tech and have a hard time with the concept of cookies. I just read a SCOTUS ruling where one of the justices said, referencing how information moves quickly in today's society, "someone could just put it on a thumb drive and then it's out there." Thumb drive? Anyhow, it seems to me that the government could wade into these waters with regulation but won't. Is there an AI creator consortium that could move to make standards/code more transparent? Academia? Are we just leaves on the stream here?

Thought provoking. Appreciate the post.
posted by zerobyproxy at 11:10 AM on July 21, 2020


Is there an AI creator consortium that could move to make standards/code more transparent? Academia?

Nope, and nope. The incentives to defect can be measured in at least billions of dollars. So far the only thing that has moved the needle is PR worries (see recent Amazon move to temporarily halt selling face recognition to law enforcement, which... I literally burst into tears when I saw it in the paper because I never dreamed it would happen) but there is no American PR disaster that's going to stop Chinese and Russian companies from selling this capability to interested buyers worldwide. Humans will continue to create these systems. Regulation of AI system use is the only tool we have, as far as I can tell. And it is really not a very powerful tool.
posted by potrzebie at 11:22 AM on July 21, 2020 [2 favorites]


If I were to try to explain something like this to my dad, he'd write it off as "liberal P.C. nonsense,"

Sigh. What people don't realize, because of lack of computer literacy, is that they want to be complaining about the "lameframe media", not the "PC nonsense" when it comes to AI stuff like this. At least there's not too much worry about invasive Apples taking over the AI ecosystems, I guess.

posted by eviemath at 12:48 PM on July 21, 2020 [2 favorites]


My first instinct on reading this apart from:
Boy this sure is finally hitting some peak 2020 cyber shit...

But secondly, and not that I oppose the concept (and I'm sure it's meant in a different way than I say here), but...

Boy if this ain't a CYA. Realizing the colonizers will now be colonized by the artilects unless they build an anti-colonial foundation before building the AI overlords.

Flesh Skin, Silicon Masks anyone?
posted by symbioid at 12:54 PM on July 21, 2020


abulafa, "artificial opinion" is very good. There's also the old saying "Garbage in, garbage out".
posted by Nancy Lebovitz at 1:08 PM on July 21, 2020 [3 favorites]


Symbioid, that sounds like perpetuating Roko's basilisk. I'm sorry if I'm overreacting to a joke but this doesn't come across as CYA as much as "there's a whole world of decolonial thinking that maps very clearly onto this technology and highlights blind spots we should actively fix".

The paper does a huge service just by describing the anatomy of the problem (metropole, periphery) and surfacing terminology (ghost work, ethics dumping/shirking) for discussing problems that are otherwise easily ignored in the impacts and values around AI.

It's like if nobody had ever heard of water table pollution, or of how using heavy metals to cut costs in your process pollutes it: just introducing the greater world of externalities to what looks like (and is sold as) free magic fairy dust can move the conversation toward reality.
posted by abulafa at 1:23 PM on July 21, 2020 [11 favorites]


Metafilter sometimes confuses the AI researchers with the "AI tech bros" - the former typically really do understand their technology has tight limits, is very far from magic and can be full of danger. The latter see a cheap way to do tedious or contentious things and don't consider the limits of their understanding.

This reminds me strongly of the divide in economics, tbh... The hardcore 'markets == magic && ethics == unfettered self-interest' people are happy to reap the rewards of telling power what it wants to hear, and the others are basically ignored by policy makers.
posted by kaibutsu at 3:08 PM on July 21, 2020 [7 favorites]


I support a zeroeth law of robotics: Don't punch down.

Asimov actually had a zeroth law - A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Essentially, prioritise society as a whole over individuals, prioritise people over stuff.

We could do worse than to build our AI on those laws.

But in terms of 'don't punch down', I'm seeing that increasingly applied in practice. For example, in automated determinations on credit scores and credit-worthiness - only using AI to make decisions that benefit customers, not hurt them. Another way to get there is to ensure there is a 'human in the loop'. The machine-enabled decision is just a suggestion that must be ratified by a person. Sounds basic, but that's where government and companies are in terms of understanding how to apply this tech.
posted by His thoughts were red thoughts at 4:27 PM on July 21, 2020 [2 favorites]
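A minimal sketch of what the "human in the loop" pattern above could look like in code; the function name, threshold, and routing labels are illustrative assumptions, not drawn from the comment or from any real system:

```python
# Minimal human-in-the-loop sketch (illustrative names and thresholds only):
# the model may auto-approve, but any adverse or borderline decision goes to a person.

def route_credit_decision(model_score: float, approve_threshold: float = 0.8) -> str:
    """Return who decides: the model can only say yes on its own."""
    if model_score >= approve_threshold:
        return "auto-approve"          # decision that benefits the customer
    return "refer to human reviewer"   # adverse or borderline: a person ratifies it

if __name__ == "__main__":
    for score in (0.91, 0.55, 0.12):
        print(score, "->", route_credit_decision(score))
```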


I still do have hope that ML can be designed and regulated to be less biased than people, because it can be designed and assessed in ways that people can't be. It flies up against late-stage capitalism, but what doesn't. The tools will be available when reform or revolution comes, whichever comes first.

White people who I know are super-fragile about the idea that people can be racist have responded surprisingly well when I talk about how supervised ML models get to be racist, by learning from data that reflects existing biased outcomes, even when the model is formally colorblind. It's a funny distancing tool, but it's also true.
posted by away for regrooving at 12:19 AM on July 22, 2020 [4 favorites]
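A small sketch of the mechanism in that comment: a model trained without the protected attribute still reproduces the historical bias through a correlated proxy feature. The synthetic data and the use of scikit-learn are assumptions made purely for illustration:

```python
# Sketch of "formally colorblind but still biased": the model never sees `group`,
# but recovers it from a correlated proxy and reproduces the biased labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                 # protected attribute; never given to the model
proxy = group + rng.normal(0, 0.5, n)         # e.g. neighborhood, correlated with group
merit = rng.normal(0, 1, n)                   # the thing the decision is supposed to measure
# historical labels encode past biased decisions, not just merit
label = (merit - 1.0 * group + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([merit, proxy])           # "formally colorblind": no group column
model = LogisticRegression().fit(X, label)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
# The rates differ even though the model was never shown `group`.
```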


If a program rejects your application for a loan, for example, you should be able to get a plain-language list of the entire chain of reasoning: every rule, the data, and the source of the data. Stuff you could compare to reality and agree with or challenge ("Why is my race in here?" "What is the bank doing with information I gave only to my doctor?"). You should never get just "Computer says no."
posted by pracowity at 1:59 AM on July 22, 2020 [8 favorites]
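One way to make "every rule, the data, and the source of the data" concrete is a decision procedure that returns its full trail alongside the verdict. The rules, thresholds, and field names below are invented for illustration only:

```python
# Illustrative reason-trace sketch: every rule applied is returned with its inputs and source.
from dataclasses import dataclass

@dataclass
class Reason:
    rule: str
    value: object
    source: str

def decide_loan(applicant: dict) -> tuple[str, list[Reason]]:
    reasons = []
    dti = applicant["monthly_debt"] / applicant["monthly_income"]
    reasons.append(Reason("debt-to-income ratio must be under 0.40", round(dti, 2),
                          "applicant-supplied income, credit bureau debt"))
    if dti >= 0.40:
        return "rejected", reasons
    reasons.append(Reason("credit history must be at least 24 months",
                          applicant["credit_history_months"], "credit bureau"))
    if applicant["credit_history_months"] < 24:
        return "rejected", reasons
    return "approved", reasons

decision, trail = decide_loan({"monthly_debt": 900, "monthly_income": 2000,
                               "credit_history_months": 36})
print(decision)
for r in trail:
    print(f"- {r.rule}: {r.value} (source: {r.source})")
```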


If a program rejects your application for a loan, for example, you should be able to get a plain-language list of the entire chain of reasoning:

But one of the issues with AI is that the “reasons” for the “decisions” it makes are almost always opaque. As I understand it (and I would love to be corrected), this is far less possible with AI than people, who can theoretically explain.
posted by GenjiandProust at 3:04 AM on July 22, 2020 [2 favorites]


But one of the issues with AI is that the “reasons” for the “decisions” it makes are almost always opaque. As I understand it (and I would love to be corrected), this is far less possible with AI than people, who can theoretically explain.

Only if the systems you build are designed that way. You can make a choice to design systems that are transparent and explainable, and deliver decisions together with reasons. AI-enabled decision-making doesn’t just arise out of the ether.
posted by His thoughts were red thoughts at 4:59 AM on July 22, 2020 [8 favorites]


previously...
New Theory Cracks Open the Black Box of Deep Learning - "A new idea called the 'information bottleneck' is helping to explain the puzzling success of today's artificial-intelligence algorithms — and might also explain how human brains learn."*

also btw...
A New Approach to Understanding How Machines Think - "Neural networks are famously incomprehensible — a computer can come up with a good answer, but not be able to explain what led to the conclusion. Been Kim is developing a 'translator for humans' so that we can understand when artificial intelligence breaks down."
posted by kliuless at 7:07 AM on July 22, 2020 [1 favorite]


As above, "explainable" inference actually offers an opportunity to surface the magic and tune it. However, as noted in the paper, the effect of a smaller percentage of black people being selected for AO-driven studies turned out to be related to "total lifetime healthcare cost" being higher for such folks. The model did the thing it was tuned for: minimize total cost while selecting participants.

Explainable inference helped humans track the problem to what features were selected and how they were weighted.

The funny thing is, when you're denied for a bank loan by a fully human-mediated process, the answer to why is usually a form letter with a few abstract bullet points like "ratio of debt to income" hiding factors like "the loan officer thought your name sounded non-white and, when given a chance to take other factors into account, chose not to do so." That is, a non-automated system offers even less decision transparency, and we have to look at outcomes to detect bias; only after outcry is high enough and damages are demonstrated do companies maybe change behavior, lose business, or (haha right) get outcompeted.

My thinking is: by capturing more nuance and making inference algorithms explainable, we actually have an opportunity to surface where the biased inputs are and even refine (or legislate!) what kind of data can be used to make decisions that affect a person's health or wealth.

That is, proprietary magic scores shouldn't be either. The fact machines are calculating them has always been a dodge - open automation (by legislation or at least design) creates incentive to nationalize such scoring because it reduces your legal exposure.

So, in summation: nationalize a standard for things like credit and health testing that is open and regularly tuned to fight bias, guarantee some legal protection as long as you use that standard, make using anything else legally fraught because of liability for biased outcomes. Like testing for environmental pollutants.
posted by abulafa at 7:26 AM on July 22, 2020 [3 favorites]
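Since the comment above leans on looking at outcomes to detect bias, here is a tiny sketch of the most common outcome check, a selection-rate comparison across groups; the four-fifths threshold is borrowed from US employment guidance purely as an example, and the data is made up:

```python
# Outcome audit sketch: compare selection rates across groups and flag large gaps.

def disparate_impact_ratio(selected: list[int], group: list[str], reference: str) -> dict:
    """Selection rate of each group divided by the reference group's rate."""
    rates = {}
    for g in set(group):
        members = [s for s, gg in zip(selected, group) if gg == g]
        rates[g] = sum(members) / len(members)
    return {g: rates[g] / rates[reference] for g in rates}

selected = [1, 0, 1, 1, 0, 0, 0, 1, 1, 0]          # 1 = selected by the system
group    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
ratios = disparate_impact_ratio(selected, group, reference="a")
for g, r in ratios.items():
    flag = "  <-- below 0.8, investigate" if r < 0.8 else ""
    print(f"group {g}: ratio vs reference = {r:.2f}{flag}")
```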


Also remember there is not 'a' neural net. The Big Data/ML/AI world is a multitude of carefully modified algorithms (SVM, SVD, supervised learning, unsupervised learning, semi-supervised learning, kNN, LVQ, SOM, LWL, LASSO, LARS, CART, ID3, C4.5, C5.0, Decision Stump, Gaussian Naive Bayes (many many Bayes), Expectation Maximisation, MLP, SGD, RBFN, CNN, RNN, LSTMs, PCA, PCR, PLSR, GBM, GBRT... ...) with combos and mixins all mushed up by idealistic researchers surviving by rapid publication of vast numbers of papers, each increasing the accuracy of an algorithm by 0.0000n per cent.

There's also a sort of meta method, where a data set is processed by all the algorithms over and over until one 'hits'.

There is always a human involved in the process.

"AI" sells.
posted by sammyo at 8:04 AM on July 22, 2020 [7 favorites]
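That "meta method" is roughly what a model-comparison loop looks like in practice; a hedged sketch using scikit-learn (an assumption, since the comment names no particular library), with a built-in dataset standing in for whatever data is on hand:

```python
# Sketch of the "try everything until one hits" loop: score several candidate
# algorithms with cross-validation and look at which one "hits".
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
candidates = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "SVM": SVC(),
    "kNN": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(),
}
for name, model in candidates.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
# A human still has to decide which "hit" is trustworthy enough to ship.
```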


But one of the issues with AI is that the “reasons” for the “decisions” it makes are almost always opaque.

It depends. Some systems learn on their own and don't have a set of built-in rules written by humans. Other systems are pretty much nothing but rules written by humans.

If the system controls a machine that recognizes products and packs them into boxes efficiently, I don't care how it works as long as it works. It can learn by trial and error and make up its own completely unintelligible rules as long as it isn't grabbing people off the warehouse floor and cramming them into crates.

If the system decides my fate, though, I want to see all the rules and data that applied to my decision in a form I can understand. "We decided X. Here's why..." Experts should have to sit and write the rules, to encode their judgment and explanations into the system.

Transparency might not be reasonable in every case. Maybe an autopilot (a system that certainly decides my fate) is opaque. I'll put up with an opaque system as long as it works well. Telling me in plain language why it did what it did every hundredth of a second probably wouldn't ever help me.

But when a system makes decisions traditionally made with some deliberation by people presumably following an intelligible set of rules -- determining a prison sentence, for example -- it's reasonable to expect the system to be able to explain itself and expose its assumptions. "SentenceBot 3000 recommends a sentence of 743 days in prison. Here's why..." (And maybe if the judge decided something different, you could ask about the difference.)
posted by pracowity at 9:59 AM on July 22, 2020 [1 favorite]


So...it's about ethics?...in AI?
posted by Reasonably Everything Happens at 11:19 AM on July 22, 2020 [1 favorite]


well, actually...
posted by infini at 12:11 PM on July 22, 2020 [2 favorites]


So, explainable ML....

I tend to box problems into a few different classes: Perception, generation, and database. Database problems are what we're thinking of for things like loan approvals or sentence reduction: you've got a bunch of data in a database, and need to make some simple prediction. Perception is image classification or speech recognition, and generation usually is about generating text or speech or audio.

For database-type problems, plain-old linear/logistic regression is great. It works about as well as anything else in many circumstances, and is a paragon of explainability. If you don't like it relying on a certain feature, you just take it out of the database. And you can see the weights on the features it is using, and easily interpret what the big factors in a particular decision were.

For perception and generation, neural networks are doing WAY better than anything else, but have (thus far) been pretty low on explainability. This is a helpful guideline: if you're doing anything at all mission critical, you shouldn't be using non-explainable methods unless the explainable alternatives are unusable, because being able to understand and postmortem failure modes is important. Perception problems largely don't have good 'explainable' alternatives, and are really expensive to get right as a result. (Like, really really expensive. Understanding and correcting a class of failures might be a full-time job for an ML researcher taking months.)

---

My long term hope is that neural networks get more explainable over time. The models are massively over-parameterized, which means they have an enormous search space for a good set of model parameters. But it also turns out that there are regions of effectively similar-quality models that do the same thing with different parameters. Some of those might actually be very 'explainable:' the models are so big that they can 'hide' simple tasks by spreading them out over many layers and thousands of parameters.

But yeah, we're not there yet, by a long shot, and we're not going to get there without ongoing, open-ended research. I say open-ended, because the most important results might not come from trying to tackle the problem head-on: big steps towards explainability as I described above might come from trying to get smaller models or improvements in optimization, rather than squinting hard at activations in the current set of models.
posted by kaibutsu at 5:33 PM on July 22, 2020 [5 favorites]
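As a sketch of the explainability being claimed for plain logistic regression above: the global weights are directly inspectable, and a single decision can be broken into per-feature contributions. The features and data below are invented, and scikit-learn is assumed:

```python
# Explainability sketch for a "database-type" problem: inspect the model's weights
# and decompose one decision into per-feature contributions (intercept omitted).
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "late_payments"]
X = np.array([[60, 0.2, 0], [25, 0.7, 3], [45, 0.4, 1], [80, 0.1, 0],
              [30, 0.8, 4], [55, 0.3, 0], [28, 0.6, 2], [70, 0.2, 1]], dtype=float)
y = np.array([1, 0, 1, 1, 0, 1, 0, 1])            # 1 = approved in historical data

model = LogisticRegression().fit(X, y)
for name, w in zip(feature_names, model.coef_[0]):
    print(f"global weight for {name}: {w:+.3f}")

applicant = np.array([35, 0.65, 2], dtype=float)
contributions = model.coef_[0] * applicant         # per-feature contribution to this decision
for name, c in sorted(zip(feature_names, contributions), key=lambda t: abs(t[1]), reverse=True):
    print(f"{name} contributed {c:+.3f}")
```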




This thread has been archived and is closed to new comments