"not a technical problem"
January 21, 2020 10:06 AM

Will artificial intelligence fix hiring discrimination? Well, no, but what do people think about it? And if AI isn't the answer, what is? In two blog posts, data scientist @ryxcommar discusses the snake oil of HireVue et al. and the public discourse around it, and conducts a mini-survey of his own, with disconcerting results. posted by theodolite (14 comments total) 14 users marked this as a favorite
 
Previously on Metafilter: STET - a short story about what happens when GIGO gets married to the trolley problem by AI.
posted by NoxAeternum at 10:25 AM on January 21, 2020 [3 favorites]


If the public disagrees, then the public is wrong. AI (or machine learning) is only as good as the data it's trained on. If the training data are biased by racism, then the AI will be biased by racism.

RIRO: racism in, racism out.
posted by ocschwar at 10:35 AM on January 21, 2020 [5 favorites]
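A toy illustration of the point, not taken from the linked posts: train an ordinary classifier on synthetic hiring data whose historical labels penalize one group, withhold the group column, and include a correlated proxy feature. Every name and number below is invented for the sketch.

```python
# Minimal sketch of "racism in, racism out": the model never sees the
# group column, but a correlated proxy (think zip code) lets the
# historical bias through anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)              # true qualification, identical across groups
proxy = group + rng.normal(0, 0.3, n)    # feature correlated with group

# Historical decisions: skill matters, but group B is penalized.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, proxy])      # group itself deliberately excluded
pred = LogisticRegression().fit(X, hired).predict(X)

for g in (0, 1):
    print(f"group {g}: predicted hire rate = {pred[group == g].mean():.2f}")
# group 0 comes out well ahead of group 1 despite identical skill
# distributions: the bias rode in on the proxy.
```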


Is that Unilever screenshot essentially advocating for a neophrenological approach to hiring?!
posted by grumpybear69 at 10:52 AM on January 21, 2020 [1 favorite]


If the training data are biased by racism, then the AI will be biased by racism.

OK, yes, but people don't know what biases are in their training data; that's part of the issue. You can't have unbiased training data, so how can we use ML techniques for these sorts of problems, or must we discard them completely?
posted by GuyZero at 10:55 AM on January 21, 2020 [7 favorites]
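One partial answer: the biases in the labels, at least, are measurable before any model is trained. A minimal sketch of that kind of audit, with hypothetical column names and data:

```python
# Compare positive-label base rates across groups in the historical
# data; a sharp gap here will be inherited by anything trained on it.
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "F", "F", "F", "M", "M", "M", "M"],
    "hired":  [0,   1,   0,   0,   1,   1,   0,   1],
})

print(df.groupby("gender")["hired"].mean())
# gender
# F    0.25
# M    0.75
```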


I seem to recall a paper, around the time Amazon pulled the plug on its hiring AI, that suggested it was possible to mitigate sexism in hiring models by doing heavy preprocessing on the data, like replacing 'softball' (typically a women's sport) with COLLEGIATE_TEAM_SPORT.

The problem with 'AI can't fix hiring' arguments is that the status quo is already heavily biased, and ML is basically our best tool for detecting that. Training humans doesn't seem to work; I'm told implicit bias trainings either have zero effect or make the situation worse.

But maybe we should just go full gonzo on this. If we can't trust human generated data, make some of our own. Assemble banks of questions, randomly assign people to the hire / no hire categories, and figure out what questions / filters predict performance in the job.
posted by pwnguin at 10:58 AM on January 21, 2020 [2 favorites]
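The preprocessing described above can be sketched in a few lines: map tokens that act as gender proxies onto neutral category tokens before a model ever sees the resume text. The word lists here are illustrative, not the paper's.

```python
# Replace likely gender-proxy tokens with neutral category tokens.
import re

NEUTRALIZE = {
    r"\b(softball|field hockey|volleyball)\b": "COLLEGIATE_TEAM_SPORT",
    r"\b(baseball|football|lacrosse)\b": "COLLEGIATE_TEAM_SPORT",
    r"\b(sorority|fraternity)\b": "GREEK_ORGANIZATION",
    r"\b(women's|men's)\b": "",
}

def scrub(text: str) -> str:
    for pattern, token in NEUTRALIZE.items():
        text = re.sub(pattern, token, text, flags=re.IGNORECASE)
    return " ".join(text.split())   # collapse doubled spaces

print(scrub("Captain of the women's softball team"))
# -> Captain of the COLLEGIATE_TEAM_SPORT team
```

The catch, as the next comment suggests, is that the proxy list is never complete.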


like replacing 'softball' (typically a women's sport) with COLLEGIATE_TEAM_SPORT.

I'm sad that upon googling it turns out that the story about detecting tanks in photos is probably not true, but yeah, ML is really good at making spurious correlations based on non-causal factors. Scrubbing data sets ahead of time is helpful, but it remains to be seen what slips through next.
posted by GuyZero at 11:19 AM on January 21, 2020


If this technology is good at detecting bias (presumably everywhere but in itself), maybe interviews should be the training data and the result be a list of recruiters who should be fired.
posted by rhizome at 11:22 AM on January 21, 2020 [5 favorites]
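That audit doesn't even need a model to get started: the classic disparate-impact screen (the EEOC "four-fifths rule") can be run per recruiter over past interview outcomes. A sketch on made-up data with hypothetical column names:

```python
# Flag any recruiter whose selection rate for one group falls below
# 80% of their rate for another group.
import pandas as pd

interviews = pd.DataFrame({
    "recruiter": ["ann", "ann", "ann", "ann", "bob", "bob", "bob", "bob"],
    "group":     ["A",   "A",   "B",   "B",   "A",   "A",   "B",   "B"],
    "advanced":  [1,     1,     0,     1,     1,     1,     0,     0],
})

rates = interviews.groupby(["recruiter", "group"])["advanced"].mean().unstack()
rates["impact_ratio"] = rates.min(axis=1) / rates.max(axis=1)
print(rates)
print("flagged:", list(rates.index[rates["impact_ratio"] < 0.8]))
# On this toy data, both recruiters get flagged.
```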


> If this technology is good at detecting bias (presumably everywhere but in itself), maybe interviews should be the training data and the result be a list of recruiters who should be fired.

I can save you the expense of hiring an ML PhD for six months: in my experience, the answer is all of them.
posted by pwnguin at 12:31 PM on January 21, 2020 [5 favorites]


It strikes me that the key problem with attempting to apply AI to social functions (hiring, jury selection, etc.) is that you can never really explain why an AI made the choice it did, so it will never be acceptable to anyone inclined to ask that very common human question. Most modern AI is trained by "stochastic gradient descent," which for all intents and purposes might be described to a layperson as "feeling about in a dark room": sure, that's not a terrible strategy for finding one's keys without a flashlight, but it's nobody's idea of an optimal approach, especially when the keys you're looking for might be in another room entirely.
posted by axiom at 12:34 PM on January 21, 2020 [6 favorites]
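The "dark room" description maps onto a surprisingly small amount of code. A toy SGD loop on plain least squares, invented for illustration: each step senses only the slope at one random sample, which is exactly why no single step explains the final answer.

```python
# Stochastic gradient descent as "feeling about in a dark room":
# sense the local slope from one random sample, shuffle downhill.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(0, 0.1, 500)

w = np.zeros(3)                       # start somewhere in the dark
lr = 0.01
for step in range(5_000):
    i = rng.integers(len(X))          # feel one random spot...
    grad = (X[i] @ w - y[i]) * X[i]   # ...sense the local slope...
    w -= lr * grad                    # ...and take a small step downhill
print(w.round(2))                     # ends up close to [2.0, -1.0, 0.5]
```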


But maybe we should just go full gonzo on this. If we can't trust human generated data, make some of our own. Assemble banks of questions, randomly assign people to the hire / no hire categories, and figure out what questions / filters predict performance in the job.

Didn't Google already sort-of do this experiment? Like, they had this massive heuristic that had supposedly been trained on candidate answers vs. later performance and tenure after hiring, and they used that to drive what questions they asked of later candidates. Except it turned out that they'd forgotten to correct for the ingrained bias in what sorts of answers earned job offers, so it skewed sharply toward those idiotic "why is a manhole cover round?" brain-teasers, which the rest of the industry gladly hopped on board with for ten years or so before Google finally scrapped it and admitted their approach was so badly broken it was worse than useless at predicting employee performance.

In conclusion, ML is absolute shit if fed bad data, and everyone is feeding their ML bad data.
posted by Mayor West at 12:48 PM on January 21, 2020 [3 favorites]


What if, and I realize this is totally fantastical, we tried to have a society where everyone could live a fulfilling and materially comfortable life, rather than trying to determine the perfect sorting mechanism for who gets to be at the top of the pyramid?
posted by Pyry at 7:35 PM on January 21, 2020 [2 favorites]


Didn't Google already sort-of do this experiment?

If Google's AI worked properly, then Google would never hire anyone who wanted to work there.
posted by Cardinal Fang at 12:06 AM on January 22, 2020


There is no artificial intelligence.
posted by GallonOfAlan at 1:27 AM on January 22, 2020


Obligatory XKCD.
posted by ErisLordFreedom at 10:38 AM on January 22, 2020



