AI hates women and minorities.
July 20, 2023 1:07 PM   Subscribe

But we knew that. An [Asian-American] MIT student asked AI to make her headshot more ‘professional.’ It gave her lighter skin and blue eyes. “In just a few seconds, [Playground AI] produced an image that was nearly identical to her original selfie—except Wang’s appearance had been changed. It made her complexion appear lighter and her eyes blue, ‘features that made me look Caucasian,’ she said.” (Boston Globe, limited free articles.) Amazon scraps secret AI recruiting tool that showed bias against women. “It penalized resumes that included the word ‘women’s,’ as in ‘women’s chess club captain.’ And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter.” (Reuters)
posted by Melismata (43 comments total) 31 users marked this as a favorite
 
You would have thought these companies would have learned from Amazon's HR ML fuckup.

Apparently not.
posted by NoxAeternum at 1:15 PM on July 20, 2023 [4 favorites]


A different take on systemic racism.
posted by Tell Me No Lies at 1:28 PM on July 20, 2023 [4 favorites]


Someone on Mastodon called this phenomenon BIBO - Bigotry In Bigotry Out.
posted by Faintdreams at 1:33 PM on July 20, 2023 [58 favorites]


If you train an AI on my behavior, you don't remove my biases. You automate them.

No matter how impressive the things we do with AI these days, it still boils down to looking at a mass of X's and O's and drawing a curve to separate the X's from the O's.

Every one of those is an X or an O because of a human being that made it an X or O. And its location in parameter space is also the result of human decision making. An AI trained on biased data will give biased outputs.
posted by ocschwar at 1:35 PM on July 20, 2023 [19 favorites]
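The point above — a classifier just draws a curve separating human-assigned X's from O's, so human bias becomes model bias — can be sketched in a few lines. Everything here (the features, the biased labeler, the perceptron) is a toy illustration under assumed data, not any real hiring system:

```python
# Toy sketch: a linear classifier "draws a line" separating X's from O's.
# If the human labels encode a bias, the learned weights reproduce it.
import random

random.seed(0)

def biased_label(experience, flagged):
    # A hypothetical biased labeler: silently penalizes the `flagged` feature.
    return 1 if experience - 2.0 * flagged > 0.5 else 0

# Training data labeled by the biased human process.
data = []
for _ in range(200):
    exp = random.uniform(0, 2)
    flag = random.choice([0, 1])
    data.append(((exp, flag), biased_label(exp, flag)))

# Train a bare-bones perceptron on the human-labeled data.
w = [0.0, 0.0]
b = 0.0
for _ in range(50):
    for (x1, x2), y in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = y - pred
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

# w[1] comes out negative: the human penalty on the flagged feature
# is now baked into the weights. The bias wasn't removed; it was automated.
print("learned weight on flagged feature:", round(w[1], 2))
```

Nothing in the training loop knows about the bias; it falls out of the labels alone.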


I think it is interesting to contrast areas where AI seems to be advancing knowledge (e.g., figuring out the shapes of proteins, "novel" strategies for the game go) and areas where it seems to be pushing against human advancement (e.g., the examples posted here, text just being a regurgitation of past stuff generated by people). What does a future look like where there are accelerating technological advances on the one hand, but cultural development becomes frozen at the moment when machines regurgitating old ideas overwhelm human cultural creation?
posted by snofoam at 1:36 PM on July 20, 2023 [7 favorites]


the very large company that I work for has a 'required' AI ethics course that vaguely and barely goes over the inherent biases in AI before providing us instant access to a powerful all-encompassing AI tool. you can simply check off a box and say you took the course without having done so. and, as is the wont of most, people check it off, and now suddenly AI is incorporated at all levels of our company in some way, without constraint or accountability or any kind of check in place

corporations will mostly just not give a single shit about the inherent, pre-existing biases that will be reified by AI in perpetuity. they will not care that their creations are and will always be derivatives, with all the flaws and limited corridors that derivation entails

this is because they already do not give a single shit about systemic racism/sexism/ableism/anti-gay politics/etc unless their shareholders dictate otherwise. and the largest shareholders, the BlackRocks, Vanguards, etc - they do not care, really

so thus corporations will never care and will always do evil, even the supposed good ones like say Patagonia and its defense contract division

totally unrelatedly, here's a cool article I read recently about being an active anticapitalist
posted by paimapi at 1:38 PM on July 20, 2023 [23 favorites]


From the article:

Reached by e-mail, Doshi declined to be interviewed.

Instead, [Playground AI founder Suhail Doshi] replied to a list of questions with a question of his own: “If I roll a dice just once and get the number 1, does that mean I will always get the number 1? Should I conclude based on a single observation that the dice is biased to the number 1 and was trained to be predisposed to rolling a 1?”

Really digging a hole with that kind of response.
posted by vacapinta at 1:42 PM on July 20, 2023 [13 favorites]


There is no dataset of corporate good outcomes to train an AI on because HR has always been a mix of clueless superstition, legal ass covering and cronyism

corporations will never care and will always do evil

Corporations are just AI that run on a flesh substrate.
posted by srboisvert at 1:49 PM on July 20, 2023 [27 favorites]


Corporations are just AI that run on a flesh substrate.

AI finally gives corporations the ability to speak without the need for humans and they're every bit as awful as the humans who created them.
posted by Joey Michaels at 2:17 PM on July 20, 2023 [9 favorites]


AI seems like it's three consultants in a trenchcoat trying to get an HR contract.
posted by chavenet at 2:24 PM on July 20, 2023 [20 favorites]


and, as is the wont of most, people check it off, and now suddenly AI is incorporated at all levels of our company in some way, without constraint or accountability or any kind of check in place

Because that course is to protect the company. If anything goes wrong they can point to it in a lawsuit or regulatory action to say how they gave the training so they did all they reasonably could, then punish whatever peon scapegoats are necessary.
posted by star gentle uterus at 2:26 PM on July 20, 2023 [6 favorites]


As long as the internet continues to be dominated by white men, the results from anything they produce will favor white men.

Ever since we got to the "AI cameras can't do shit with images of black people" moment, which was YEARS ago, that's been a very obvious truth. A bunch of ex-fraternity bros together in an office someplace will never be able to really do quality work that crosses skin color and genders.
posted by hippybear at 2:46 PM on July 20, 2023 [6 favorites]


AI hates women and minorities.

AI is just capitalism, so yes.
posted by Artw at 2:49 PM on July 20, 2023 [8 favorites]


Well, AI as we currently experience it, is just capitalism.

The technology has great promise well outside of the capitalist space. But the capitalist space won't let those promises even be approached. Because capitalism.
posted by hippybear at 2:55 PM on July 20, 2023 [1 favorite]


Companies that use ruthless AIs are going to kill companies that use nice AIs. Nice AI is the deadest of dead ends.
posted by MattD at 3:23 PM on July 20, 2023 [3 favorites]


I don’t think the current model of capitalist AI, basically capital exploiting labor at a massive scale via scraping, really has a “nice” version.
posted by Artw at 3:29 PM on July 20, 2023 [9 favorites]


So we're agreed then: we need to overthrow capitalism.
posted by hippybear at 3:30 PM on July 20, 2023 [12 favorites]


Don’t make me tap the sign.

(It’s the slide putatively from a 1979 IBM presentation:
A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
THEREFORE A COMPUTER MUST NEVER
MAKE A MANAGEMENT DECISION
)
posted by zamboni at 3:32 PM on July 20, 2023 [41 favorites]


I have the other version:

A COMPUTER CAN NEVER BE FIRED
THEREFORE A COMPUTER MUST BE BLAMED
FOR POOR MANAGEMENT DECISIONS
posted by k3ninho at 3:41 PM on July 20, 2023 [11 favorites]




Really digging a hole with that kind of response.

Perhaps if the six-sided die was replaced with a revolver with a single bullet in it this guy could be persuaded to understand the concept of "unacceptable outcomes".
posted by NMcCoy at 4:20 PM on July 20, 2023 [5 favorites]




That AI developed in a patriarchal society mirrors the patriarchal values of said society is hardly surprising. That developers et al would be caught off guard by this at this point definitely is.
posted by BigHeartedGuy at 4:42 PM on July 20, 2023 [7 favorites]


> That developers et al would be caught off guard by this at this point definitely is.

"It is difficult to get a man to understand something when his salary depends on his not understanding it." -Upton Sinclair
posted by I-Write-Essays at 4:59 PM on July 20, 2023 [14 favorites]


Oh great, the real life R2 and C3PO are gonna be sexist and racist.

AI may destroy us not because they are inherently bad, but because humanity is not inherently good.
posted by Brandon Blatcher at 5:29 PM on July 20, 2023 [5 favorites]


Is vs. Ought. Discuss:

https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem
posted by metametamind at 5:41 PM on July 20, 2023 [1 favorite]


hippybear: "Well, AI as we currently experience it, is just capitalism."

I'm fairly optimistic we'll soon see openly fascist AI.
posted by signal at 6:27 PM on July 20, 2023 [2 favorites]


As long as the internet continues to be dominated by white men, the results from anything they produce will favor white men.

FWIW, there are plenty of non-white men in the AI space. For example, the first article, which describes Playground AI's shortcomings, introduces Suhail Doshi as the founder.

But that doesn't help much when the 2008 Flickr dataset that Fei-Fei Li used to build ImageNet is slanted towards white people, or the labels Mechanical Turkers (who are majority white) annotated the dataset with have their own complicated set of prejudices baked in. This was a global effort and it will take an even greater effort to undo.

Ever since we got to the "AI cameras can't do shit with images of black people" moment, which was YEARS ago, that's been a very obvious truth.

The 2023 version of Invisible Man is about a dude who cannot be seen by the AI that run society.
posted by pwnguin at 7:37 PM on July 20, 2023 [11 favorites]


... and with that, pwnguin retreated to their study and began to write.
posted by hippybear at 7:45 PM on July 20, 2023 [3 favorites]


AI can't help but be biased when the image sets they were trained on were biased to start with. They are usually not inherently biased. If you fed an AI a bunch of blue-eyed, blonde, Caucasian women in business suits and described them as "professional female," that's the association it will make, even when it's "unintended." After all, that's how AIs create "hallucinations": linking facts together when there is actually no logical inference connecting them.

Here's a way to visualize the problem: by training AI on its own generated images, you can magnify the hallucinations very quickly. [Link to New Scientist]

Here's a bit of personal story: I recently decided to spend a couple bucks on an AI image generator called Retrato, which, when fed a half dozen of my own portraits, was supposed to generate alternate portraits of me in different outfits, in different scenes, for different moods, and so on. So I paid my dues (about $6 USD) and it generated about 10 variations on each of a dozen different themes. Out of the 10 variations in each theme, only 1-3 actually looked like me. And yes, I am an Asian male. The results were Asian, but clearly mixed with some celebrity who did not look like me, resulting in a weird hybrid (me with a long face or square jaw when I have neither).
posted by kschang at 8:03 PM on July 20, 2023 [4 favorites]
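The New Scientist link above describes models degrading when trained on their own output. A minimal way to see the effect (a toy Gaussian stand-in for an image model, not the linked experiment): fit a distribution to data, sample from the fit, refit on the samples, and repeat. Finite-sample error compounds each generation, and the fitted spread collapses:

```python
# Toy sketch of model collapse (a Gaussian stand-in, not the actual
# New Scientist experiment): each "generation" is trained only on
# samples drawn from the previous generation's model.
import random
import statistics

random.seed(1)

mu, sigma = 0.0, 1.0   # generation 0: the real-world distribution
n = 10                 # a small sample per generation exaggerates the effect

spreads = []
for generation in range(500):
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.fmean(samples)      # refit the model ...
    sigma = statistics.stdev(samples)   # ... on its own output
    spreads.append(sigma)

# The fitted spread decays toward zero: later generations can only
# reproduce an ever-narrower slice of the original distribution.
print(f"spread after generation 1:   {spreads[0]:.3f}")
print(f"spread after generation 500: {spreads[-1]:.3g}")
```

Each refit loses a little of the tails, and since every generation samples from the last, the loss compounds instead of averaging out.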


Someone posted a thing someplace, maybe here? about an AI that would summarize a twitter account. I took a look at what it had to say about me.

All it had to say about me was maybe my most recent 20 tweets narrated in some kind of weird abstracted way. "He seems to show interests in X because he frequently mentions it", stuff like that.

Anyway, I've been on twitter for a long while and thought it might draw out some deep trends I didn't know about myself. But nah, it was just a stone skipping across the most recent water of the pond, not even noticing its own ripples and what they might imply.
posted by hippybear at 8:09 PM on July 20, 2023 [1 favorite]


I hear a lot of complaints about stupid algorithms from my network. It's the smart ones that concern me.
posted by philip-random at 9:58 PM on July 20, 2023


ditto
posted by some loser at 3:54 AM on July 21, 2023


PBS had a great documentary on this a few nights ago called "Coded Bias." It features the Algorithmic Justice League, a good cause to donate to.
posted by DJZouke at 5:06 AM on July 21, 2023 [1 favorite]


That developers et al would be caught off guard by this at this point definitely is.

I dunno. I’ve worked with developers, and being willfully blind to (and/or roundly dismissive of) obvious negative consequences of their work seems to be inherent to the breed. So, I can’t say it’s a surprise at all that devs would be caught off guard. When it comes to AI, all testing is done in prod.
posted by Thorzdad at 5:34 AM on July 21, 2023 [7 favorites]


kschang "AI can't help but be biased when the image sets they were trained on were biased to start with. They are usually not inherently biased. If you fed an AI a bunch of blue-eyed, blonde, Caucasian women in business suits and described them as "professional female," that's the association it will make, even when it's "unintended"."

Replace "AI" with "subconscious" and this seems like a good description of how unconscious bias happens.
...are we simply AI that's slightly better at drawing hands?
posted by Baethan at 5:53 AM on July 21, 2023


From a fortune message I got in the mid 90s:

Q: So why did you decide to make a career in artificial intelligence?
A: It made sense; I didn't have any real intelligence.
posted by I-Write-Essays at 6:10 AM on July 21, 2023 [1 favorite]


My (huge, influential) company is rolling out a communications monitoring AI. It will be using this tool on its own employees and also selling it to its clients (also huge and influential corporations). This tool will be measuring things like a person's "toxicity" and also the speakers'/writers' big 5 personality traits and presumed DISC score. The information will be used in hiring and firing decisions. The head of the project repeatedly told our session group that the tool was totally unbiased and ethical, but when pressed could not answer how he could say that with certainty, just "trust me." They will not be bringing in any experts to assess the tool for bias or to discuss ethical concerns around its use. Employees and hiring candidates will not even know that they're being monitored by AI. I was the only person in our group of 30 people or so who expressed concern about this tool, which made me feel pretty hopeless.
posted by Stoof at 6:46 AM on July 21, 2023 [19 favorites]


Stoof is obviously a contrarian troublemaker.
posted by Meatbomb at 8:20 AM on July 21, 2023 [7 favorites]


So what's the "good" outcome for an image editor given the prompt "make this person look more professional"?
posted by Wood at 8:50 AM on July 21, 2023


Presumably the background is turned white at the very least.
posted by boogieboy at 9:05 AM on July 21, 2023 [2 favorites]


So what's the "good" outcome for an image editor given the prompt "make this person look more professional"?

less than five seconds of internetting produced these features that AI could very easily generate:

8 Profile Picture Rules Every Professional Should Follow

Your face should be in focus.
Wear appropriate professional or business casual attire.
Keep your head straight and upright.
Use a pleasant facial expression.

How To Take a Professional Photo (With Tips)

Light most portraits from the front
Use strong rear lighting for a silhouette effect
Position a reflector to redirect natural light
Apply a diffuser to soften harsh light
Use leading lines
Fill the frame

How to Take and Edit a LinkedIn Profile Picture

Have professional attire
Choose a non-distracting background
Add warmer tones
Remove background

it's pretty wild how none of these are 'be white, be a man.' I do think there's something to be said about how AI is utilized - if your dataset is a wide array of profile pictures of people in leadership positions, you'll lean white. if you have very, very specific prompts inclusive of all the above, you'll probably get much closer to your desired effect. at this level, it's really a matter of prompting, though the context is much easier to acquire since you won't need, e.g., detailed knowledge of image editing software, just the principles of photo editing

I think you will always need humans with good ethics to avoid bias. we'll always have a base level of cognition required to produce good, ethical, respectable work. what will likely happen, imo, is that we will see higher demands in productivity and output with single 'programmers' or 'AI users' or whatever taking up the work of two or three people

so swathes of people in knowledge fields will be made obsolete in the same way that people in manufacturing jobs are now. which is to say - if you haven't joined a tech worker's union yet or looked into it, now is a very good time to be starting
posted by paimapi at 9:33 AM on July 21, 2023 [1 favorite]


The quote they give is "a professional LinkedIn profile photo." So I think the implication is she wanted the photo to look professional. But for a machine learning system trained on photos, the map is the territory. I suspect it doesn't entirely distinguish between changing the style of the photo and changing the subject of the photo.
posted by RobotHero at 2:08 PM on July 23, 2023




This thread has been archived and is closed to new comments