The Robots Are Now Hiring
September 25, 2018 5:34 PM   Subscribe

Some Fortune 500 companies are using tools that deploy artificial intelligence to weed out job applicants (wsj). HireVue uses machine learning algorithms to analyze facial expressions for companies such as Unilever and Hilton. DeepSense uses ML to analyze an applicant's LinkedIn, Twitter and other social media accounts.
posted by adept256 (65 comments total) 37 users marked this as a favorite
 
Oh boy, new ways to be racist without saying that you are racist! I'm sure that the training data has been audited by third parties for various kinds of biases, right? They say that they are tuning for "...people who have previously been identified as high performers on the job..." Oh, there is totally no way that that could have a tilt toward white males at all!
Rise of the racist robots
Racist, Sexist AI Could Be A Bigger Problem Than Lost Jobs
You can see it right on the Deepsense site, with fun words like "culture fit", next to (screencap)a bunch of white people sitting around a table. They aren't even trying.
posted by rockindata at 5:52 PM on September 25, 2018 [91 favorites]


stuff like this makes me ashamed to be a computer scientist.
posted by scose at 5:54 PM on September 25, 2018 [67 favorites]


After watching that video, all I can say is: what utter bullshit. To assume that these superficial “measurements” actually say anything about the person themselves is laughable. The idea that a statistical average of micro-expressions of people already hired somehow reveals a good candidate, because that candidate meets some high percentage of similarity to the average, is a joke. The notion that plowing through publicly available content on social media sites can determine who you are as a real person is similarly shallow and bogus. Aren’t social media sites subject to role playing? It just boils down to this: it’s cheaper to have a machine do all your thinking for you. The claim that this is much more fair and unbiased than people doing the interview, because machines don’t have human biases, ignores the fact that the machine is programmed by humans making all the decisions that will ultimately shape the machine’s “decisions.”
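For what it's worth, the whole "similarity to the average of past hires" trick fits in a few lines. A toy sketch, with every number invented:

```python
import math

# Invented "micro-expression" feature vectors for previously hired employees
# (rows are people; a real system would have hundreds of opaque features).
hired = [
    [0.8, 0.2, 0.5],
    [0.7, 0.3, 0.6],
    [0.9, 0.1, 0.4],
]

def average(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# A candidate scores well by resembling the average past hire -- which
# just reproduces whatever the past hiring pattern happened to be.
candidate = [0.75, 0.25, 0.55]
score = cosine(candidate, average(hired))
```

That's the entire idea: whoever looks most like the people you already hired wins.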
posted by njohnson23 at 6:01 PM on September 25, 2018 [10 favorites]


use ML to analyze an applicant's LinkedIn, Twitter and other social media accounts

heh. they are optimizing their hiring process for people who spend all day checking facebook and playing on their phones.
posted by ryanrs at 6:02 PM on September 25, 2018 [42 favorites]


I'm with you, that's why I put the black mirror tag in there. I think this needs to be regulated and stomped out before it spreads to law enforcement, insurance, dating, whatever. The danger is the industry will mature before regulations can catch up, like the gig economy. Especially in this political climate where regulations are being rolled back at the behest of industry power brokers.
posted by adept256 at 6:04 PM on September 25, 2018 [3 favorites]


There is absolutely no way that this isn't discriminatory. Racist, sexist, ableist...

I would say I hope someone sues the bastards - but in this climate, I don't have much hope that would lead to change. We need sensible laws to protect people from biased algorithms. But we're not going to get that. We can't even get a president who isn't a white nationalist.

I just... ugh.

Maybe the only hope is that they realize it's all smoke and mirrors - that it doesn't really result in better hires because the fundamental assumptions are flawed.
posted by Kutsuwamushi at 6:04 PM on September 25, 2018 [3 favorites]


before it spreads to law enforcement, insurance, dating, whatever

It's a touch late for that.
posted by pompomtom at 6:10 PM on September 25, 2018 [5 favorites]


The company says it uses a scientifically based personality test, and it can be done with or without a potential candidate’s knowledge.

Narrator: They don't, and it can't.
posted by His thoughts were red thoughts at 6:20 PM on September 25, 2018 [22 favorites]


Narrator: They don't, and it can't.

Legally, I mean. Collection of biometrics (e.g., photo or video scanning) without consent is not lawful under EU privacy law, and collection of personal information in virtually every jurisdiction requires the collector to notify the individuals.

However, the likely outcome of this is that job seekers will be required to consent as a condition of making the application. This is probably also unlawful under EU law but other jurisdictions are more permissive.
posted by His thoughts were red thoughts at 6:24 PM on September 25, 2018 [3 favorites]


Also 'scientifically based personality test', I mean, just fuck right off. You fools are basically talking about digital phrenology.
posted by His thoughts were red thoughts at 6:25 PM on September 25, 2018 [60 favorites]


I'm sure that the training data has been audited by third parties for various kinds of biases, right?

Yes, but that audit involved law firms advising them on how to do this in a way that will just barely skirt potential hiring discrimination issues, or at least provide enough of a fig leaf to ward off most of them even while they actually do discriminate.
posted by Sangermaine at 6:28 PM on September 25, 2018


Is it time for a Butlerian Jihad yet?
posted by radwolf76 at 6:29 PM on September 25, 2018 [14 favorites]


butlerian jihad. right here. right now.

> stuff like this makes me ashamed to be a computer scientist.

You should be! all going around making machines in the likeness of a human mind like it’s an acceptable thing to do!

Christ y’all are worse than those Tleilaxu assholes.
posted by Reclusive Novelist Thomas Pynchon at 6:29 PM on September 25, 2018 [23 favorites]


jinx! radwolf, you gotta buy me a machine in the likeness of a human mind AND THEN SMASH IT
posted by Reclusive Novelist Thomas Pynchon at 6:30 PM on September 25, 2018 [10 favorites]


Now someone make an app that subtly tweaks headshot pictures into features that trigger optimized results for the image bots.

When you make a bazillion dollars with the idea in a few years remember me because I'll probably be broke and unhirable.
posted by loquacious at 6:30 PM on September 25, 2018 [14 favorites]


> Now someone make an app that subtly tweaks headshot pictures into features that trigger optimized results for the image bots.

also make it mine bitcoins
posted by Reclusive Novelist Thomas Pynchon at 6:31 PM on September 25, 2018 [2 favorites]


MetaFilter: buy me a machine in the likeness of a human mind AND THEN SMASH IT
posted by loquacious at 6:31 PM on September 25, 2018 [12 favorites]


Burn this shit to the fucking ground.
posted by odinsdream at 6:50 PM on September 25, 2018 [8 favorites]


> Now someone make an app that subtly tweaks headshot pictures into features that trigger optimized results for the image bots.

This is definitely possible with current technology, e.g., Black-box Adversarial Attacks with Limited Queries and Information
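The core query-only trick from that paper is small enough to sketch. A toy version, with a made-up linear model standing in for the image-scoring API (real attacks target actual classifiers, and the paper's NES estimator is far more careful about query budgets):

```python
import numpy as np

rng = np.random.default_rng(0)

# A black box we can only query: some hidden scoring model.
w_secret = rng.normal(size=16)
def black_box_score(x):
    return float(1 / (1 + np.exp(-w_secret @ x)))

def estimate_gradient(score_fn, x, sigma=0.05, n_queries=100):
    """Estimate the score's gradient from queries alone, using
    antithetic random perturbations (the NES-style idea)."""
    grad = np.zeros_like(x)
    for _ in range(n_queries):
        u = rng.normal(size=x.shape)
        delta = score_fn(x + sigma * u) - score_fn(x - sigma * u)
        grad += delta * u / (2 * sigma)
    return grad / n_queries

x = np.zeros(16)             # neutral starting "headshot" features
before = black_box_score(x)  # 0.5 by construction
for _ in range(20):          # small ascent steps toward a higher score
    x += 0.05 * estimate_gradient(black_box_score, x)
after = black_box_score(x)
```

No access to the model internals needed -- just enough queries.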
posted by scose at 7:05 PM on September 25, 2018 [5 favorites]


I'm doing a Data Science Masters right now. I was planning on doing some kind of epidemiological stuff for my final project, but this is making me think about trying to write programs designed to fuck with these sorts of things. Make these tools worthless: flood them with so many false positives (results that would otherwise stand out to regular people) that everyone cancels their contracts.

Of course, that'd be shoveling back the sea, but I can dream, can't I?
posted by Hactar at 7:15 PM on September 25, 2018 [9 favorites]


It'll probably happen, and it will probably be funny when some big company hires the top ten cats on instagram. It won't be so funny when the GRU do it.
posted by adept256 at 7:19 PM on September 25, 2018 [2 favorites]


Biased people make biased algorithms.

And everyone making algorithms is biased.

Yes, everyone.
posted by SansPoint at 7:26 PM on September 25, 2018 [3 favorites]


but the critical part isn't the algorithm, it's the training data.
posted by scose at 7:33 PM on September 25, 2018 [9 favorites]


I wonder what they do when you have no social media, or your social media data is useless. I fill my facebook profile with incoherent ramblings in broken english, as an attempt to imitate spambots. It works on the algorithms that guess my interests, for the purposes of advertising. If they took a look at my profile, it would seem that I was completely insane.
posted by constantinescharity at 7:42 PM on September 25, 2018 [5 favorites]


DeepSense uses ML to analyze an applicant's LinkedIn, Twitter and other social media accounts.

Soooo...In this brave new world, if you don’t do social media, you can’t get a job?
posted by Thorzdad at 7:48 PM on September 25, 2018 [15 favorites]


OK, who is going to have some Make Phrenology Great Again hats made in MeFi blue?
posted by b1tr0t at 7:48 PM on September 25, 2018 [4 favorites]


Gotta work on a bot that performs adversarial analysis of these tools and gattacas the hell out of your twitter
posted by Typhoon Jim at 8:08 PM on September 25, 2018


The sole purpose and end result of all this stuff is to transfer money from Fortune 500 companies to the companies peddling this bullshit. Instead of having HR interns make sure your applicants aren't retweeting Pepe memes and Jordan Peterson quotes all day, you can pay someone 7 or 8 figures to have their "ML" (which might just be interns on their end anyway, depending on how shite they are at machine learning) do the same thing.
posted by sideshow at 8:14 PM on September 25, 2018 [6 favorites]


if you don’t do social media, you can’t get a job?

Probably, yes. For one thing, it shows that you're "technology-averse" (which doesn't need to be true, it's just a story they tell themselves).

Next up, more thinkpieces about how "kids these days" don't want to work hard, because what else could possibly explain the fact that my open positions are going unfilled?
posted by aramaic at 8:16 PM on September 25, 2018


Biased people make biased algorithms.

Given the black-box nature of some contemporary ML, it might not even matter who "makes" the algorithm.
posted by atoxyl at 8:28 PM on September 25, 2018


I'm never going to be able to get a stable job again
posted by Ray Walston, Luck Dragon at 9:11 PM on September 25, 2018 [8 favorites]


The sole purpose and end result of all this stuff is to ~~transfer money from Fortune 500 companies to the companies peddling this bullshit~~ get bought out by Facebook or LinkedIn, cash in, and move on to something else before anyone realizes it's all snake oil with some magic buzzwords attached
posted by naju at 9:18 PM on September 25, 2018 [10 favorites]


> Also 'scientifically based personality test', I mean, just fuck right off. You fools are basically talking about digital phrenology.

Are you suggesting that personality traits don't exist, or simply aren't measurable? Or some third problem?

Fundamentally, I think the problem is simply what you do with the data. A potential engineering candidate self-rates high in Extraversion. Is that a no hire? At what point do you decide you have too many Conscientiousness focused engineers? Or say you're expecting to triple your team size in five years, and all your previous engineers promoted to management rated high in Agreeableness, do you go seeking out more engineers to fit that mold for later promotion?
posted by pwnguin at 9:20 PM on September 25, 2018


Amazon's online job application is said to have included a required question asking applicants whether they thought they were lucky.
posted by Baeria at 9:26 PM on September 25, 2018


opt out of DeepSense
posted by Going To Maine at 9:28 PM on September 25, 2018


This reminds me - when I was a kid I tried for a job at Toys R Us and encountered my very first employee personality test.

After my application they called me in to tell me that I "aced" the test and that they didn't like it. And it wasn't just by seeing through all the silly multiple-choice, ethics-ish questions posed in the test to essentially determine if you're a thief or not and, further, if you'd willingly/actively snitch on thieves.

I was a DnD/RPG nerd. I figured they were looking for lawful good, so they got it.
posted by loquacious at 9:32 PM on September 25, 2018 [31 favorites]


(Of course, one must believe that that’ll work.)
posted by Going To Maine at 9:32 PM on September 25, 2018


The company says it uses a scientifically based personality test

Smart money says this is closely related to Kosinski's MyPersonality work at best (previously).

At worst it's MBTI. You know how easy it is to make a language model from people who put this shite in their Twitter profile? Same problem I mention here, of course.
posted by supercres at 9:40 PM on September 25, 2018 [2 favorites]


An answer key for the Unicru old-school personality test exists. SWIM used this key in an online job application, and got a call in less than an hour. Funny, you're supposed to "strongly agree" or "strongly disagree" for every question. It's never good to see shades of grey.
posted by scose at 11:02 PM on September 25, 2018 [4 favorites]


Soooo...In this brave new world, if you don’t do social media, you can’t get a job?

Perhaps. After 30 years of being hired without it, during my last job hunt I had to create a strong social media presence and get a professional headshot (among other things) just to get the kind of job I've done for twenty years, and some potential employers wouldn't look at me at all without social media accounts with years of history.
posted by davejay at 11:04 PM on September 25, 2018 [4 favorites]


Oh great, so let’s add in some de facto ageism along with the sexism/racism/ableism this also allows.
posted by nat at 11:55 PM on September 25, 2018 [5 favorites]


I was talking to my buddy about this, he recently took a "job application" type course in Europe. It sounded exactly like 00s SEO type stuff... Put all of the keywords from the job advert into a tiny div, hidden somewhere in the background of your CV, to make sure the AI does not reject you.

Bullshit, man.
posted by Meatbomb at 11:59 PM on September 25, 2018 [2 favorites]


Ah yes, the rolling "punish people who believe things" party. We're all familiar with that one already, this just adds some hand-wavey jargon to the democracy-killing concept.
posted by 1adam12 at 12:08 AM on September 26, 2018 [1 favorite]


Are you suggesting that personality traits don't exist, or simply aren't measurable? Or some third problem?

Personality traits exist. These people aren’t measuring them. They are making pronouncements with significant consequences based on the flimsiest pseudoscience.

One solution allegedly measures ‘personality’ from facial expressions. This is sheer nonsense. Expressions linked to emotions are not common or uniform across all populations. Or even between individuals.

Another performs semantic analysis on the text of tweets and provides definitive-sounding pronouncements on ‘personality’, as if a twitter account could be held to represent the full spectrum of a person’s various facets. This too is snake oil. Semantic analysis is a very blunt instrument at present and basically is capable at this stage of identifying ‘positive’ or ‘negative’. Using semantic analysis of text for a personality test is absolutely akin to proclaiming someone intelligent because of the spacing of the bumps on their cranium.
posted by His thoughts were red thoughts at 1:20 AM on September 26, 2018 [11 favorites]


These people aren’t measuring them. They are making pronouncements with significant consequences based on the flimsiest pseudoscience.

This is (bits of) what DeepSense has to say about Trump, based on his Twitter profile:
Donald J. Trump is a thoughtful individual... He has a very optimistic attitude and has the ability to see the glass half full rather than half empty. He expects the best to happen even when there are challenges all around... He can be skeptical of what others have to say and can question them incessantly until he gets his answers.. He is generally a planned and organized individual who is someone who can be critical but is always thorough... He can be a good team player when the situation so demands. However, it does not come naturally to him. While he can be effective in a leadership role, he can face challenges when he is part of someone else's team. He is primarily motivated by achieving results and has a very strong focus on completing any task that he takes up.
posted by lollusc at 1:53 AM on September 26, 2018 [15 favorites]


Apart from the obvious concerns about cultural, racial and sexual biases, what about people who have facial muscle issues, or Asperger's Syndrome, and may not present common facial expressions? What about someone who is blind, who often exhibits radically different types of facial expressions... what about someone who has learned to overcome stuttering, and takes long pauses to manage it... so many concerns
posted by greenhornet at 3:34 AM on September 26, 2018 [3 favorites]


On the other hand, could this make recruiters obsolete? Hmm... I have to think about this one.
posted by like_neon at 3:47 AM on September 26, 2018


You can see it right on the Deepsense site, with fun words like "culture fit", next to (screencap)a bunch of white people sitting around a table.

I think the guy in the middle is supposed to be Asian. A very, very pale Asian.
posted by clawsoon at 4:39 AM on September 26, 2018


Expressions linked to emotions are not common or uniform across all populations. Or even between individuals.

Oh did that Paul Ekman dude get debunked? I would enjoy a good debunking
posted by schadenfrau at 5:35 AM on September 26, 2018 [1 favorite]




This. Is. Fucking. Stupid. And the companies that use this bullshit "technology" get what they deserve.
posted by ZenMasterThis at 6:31 AM on September 26, 2018 [3 favorites]


> One solution allegedly measures ‘personality’ from facial expressions. This is sheer nonsense. Expressions linked to emotions are not common or uniform across all populations. Or even between individuals.

This is not a problem if (consciously or unconsciously) you want to select a monoculture. You will not complain when your system that’s allegedly supposed to select effective employees ends up selecting nothing but white men in their 20s when really deep down what you wanted all along were white men in their 20s.
posted by Reclusive Novelist Thomas Pynchon at 7:15 AM on September 26, 2018 [3 favorites]


but the critical part isn't the algorithm, it's the training data.

I work in data science and I had to try very hard to keep from screaming this at my computer (open floor plan...don't want to annoy my cube-mates). Yeah, they may not be deliberately programming bias into the algorithms, but it's definitely being inherited from the training data. The gall of these companies claiming these products are completely unbiased is infuriating. And for them to say "hey, it's no worse than humans" is fucking ridiculous, because the hiring managers using it will actually believe it's removing bias and things will get worse.
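To make "inherited from the training data" concrete, here's a toy version (all data simulated): a vanilla logistic regression with no bias written anywhere in the code, trained on historical decisions that favored one group:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

skill = rng.normal(size=n)           # what we'd like to hire on
group = rng.integers(0, 2, size=n)   # 1 = historically favored group

# Simulated historical hiring: skill mattered, but favored-group
# applicants also got a large unearned boost. That's the biased label.
hired = (skill + 1.5 * group + rng.normal(scale=0.5, size=n) > 1.0).astype(float)

# Plain logistic regression via gradient descent -- nothing sinister here.
X = np.column_stack([skill, group, np.ones(n)])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - hired) / n

def score(skill_val, group_val):
    """Model's hire probability for a single applicant."""
    z = w[0] * skill_val + w[1] * group_val + w[2]
    return float(1 / (1 + np.exp(-z)))

# Two applicants with identical skill get very different scores:
gap = score(1.0, 1) - score(1.0, 0)
```

The model faithfully learned the bias because the labels contained it; auditing the code alone would show nothing.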

I'm off to go lobby for a small sound-proof room where I can go scream about shit like this.
posted by noneuclidean at 7:23 AM on September 26, 2018 [4 favorites]


Pymetrics, which does something similar, claims to make it a focus to test for bias in its own (and others') models. I guess it also uses structured neuroscience tests? Even people trying to be rigorous suffer from the terribleness of these snake oil solutions.
posted by abulafa at 7:49 AM on September 26, 2018


(unpopular_puffin.jpg)
Well, humans definitely /do/ use micro-expressions and irrelevant bullshit and biases in their hiring decisions; at least the software can be improved (and purged of this pseudo-science bullshit)...

For further reading on this front: "How to make a Racist AI Without Really Trying", which does a great job of demonstrating a problem, then presenting ways to develop algos which reduce the biases in the data. Human recruiters OTOH are much harder to reprogram.
posted by kaibutsu at 9:15 AM on September 26, 2018 [1 favorite]


Between reading this article and the one about Disney today, I'm just in a bad mood right now.

I'm currently looking for a new job, and as a middle-aged black woman who is more or less alone in the world, I wonder what will become of me if I can't find a good series of jobs for the next 20 years. I've got my resume out for evaluation right now so that I can just keyword-tweak it as necessary for the positions I'm applying to. I must leave the place where I'm working now; I'm paid very poorly compared to the value I bring. My fear: if these algorithms are programmed with the biases of their creators, am I sunk?
posted by droplet at 9:21 AM on September 26, 2018 [8 favorites]


Another performs semantic analysis on the text of tweets and provides definitive-sounding pronouncements on ‘personality’, as if a twitter account could be held to represent the full spectrum of a person’s various facets. This too is snake oil.

A person is writing the tweets. An extroverted person will write about the exact same thing in a different way than an introverted person will. There's a lot more going on under the surface. The twitter account isn't representing the person, the language used is.

Semantic analysis is a very blunt instrument at present and basically is capable at this stage of identifying ‘positive’ or ‘negative’.

This pronouncement is at least ten years out of date. Whatever RNNs are doing under the hood, they're making more connections than the lexicon-based sentiment dictionaries of yore.

Using semantic analysis of text for a personality test is absolutely akin to proclaiming someone intelligent because of the spacing of the bumps on their cranium.

Bias in training sets is one thing, but this is absolutely misrepresenting the state of the art in academic research.

The tech isn't the problem here. It's the available data (biased towards status quo) and the motive (biased towards status quo).
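For reference, the "lexicon-based sentiment dictionaries of yore" really were this blunt. A minimal sketch with invented word lists:

```python
import re

# Invented word lists; real sentiment lexicons had thousands of
# entries but worked on exactly this principle.
POSITIVE = {"great", "love", "excellent", "happy", "win"}
NEGATIVE = {"terrible", "hate", "awful", "sad", "fail"}

def lexicon_sentiment(text):
    """Count positive minus negative words; knows nothing about context."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

naive = lexicon_sentiment("what a great product, love it")      # positive
fooled = lexicon_sentiment("not great, do not love it at all")  # also "positive"
```

Negation-blind, sarcasm-blind, context-blind; contemporary models at least see word order.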
posted by supercres at 9:59 AM on September 26, 2018


Good luck to any company trying to look up my Fakebook or Linkedout profile, since I absolutely refuse to use any so-called "social media".
But anyway.
This reminds me of a study I was involved with many years ago concerning programming aptitude tests.
We had a group of 50 employees of known mixed abilities all do ten different aptitude tests in different sequences and checked the results against their known abilities.
We found:
People learn to do tests and improve their scores the more they do, up to a point.
Very few of the test results actually reflected real abilities, and any one person's scores varied widely across the tests as well.
We had brilliant people scoring both top and bottom of tests as well as (unfortunately) poor people doing the same.
All people were volunteers and all knew they would not be told the individual results, though all saw the scores(without names).
posted by Burn_IT at 11:51 AM on September 26, 2018 [1 favorite]


One of the most important lessons of my adult life is that you can basically sell anything in business if you promise it is going to remove pain from someone’s life.

Recruitment is the object lesson in this - it's a shitshow where everyone is massively incentivised to lie and make overblown claims, and business puts up with incredible bullshit if it slightly reduces the huge pain of recruiting. I seriously doubt these dodgy ML models do anything helpful, but are they worse than recruitment consultants?
posted by Another Fine Product From The Nonsense Factory at 12:37 PM on September 26, 2018 [1 favorite]


...as a middle-aged black woman who is more or less alone in the world, I wonder what will become of me if I can't find a good series of jobs for the next 20 years?

I, too, am middle-aged and more or less alone in the world. It is really scary to be in this position in the U.S., especially in the current political climate.

I started a solo house cleaning business in 2012 out of sheer desperation for cash + inability to jump through the endless hoops required to even have a realistic chance at getting a job that paid enough for me to live on. Up until about 2014-2015 I still had hope that I could somehow find a job that would sustain me, so I completed a bootcamp in web development and conducted a long job search. Still no job. (Plenty of ageism and sexism, though...)

After years of similar depressing experiences, it finally dawned on me that my days of finding conventional employment are completely over, and unconditional basic income is nowhere near being widely implemented. I strongly suspect that for the rest of my life it'll have to be self-employment. Thanks to a strong network, I lucked into a steady freelance copywriting gig that pays the bills (just barely) for now. Fortunately I'm well-suited to running my own business, and little by little I'm working toward building an online business. But I don't make enough to save any money, so I still wonder what will become of me as I age. I certainly won't waste my time applying for jobs through the accepted channels, as my application would just be weeded out.
posted by velvet winter at 1:56 PM on September 26, 2018 [3 favorites]


I was talking to my buddy about this, he recently took a "job application" type course in Europe. It sounded exactly like 00s SEO type stuff... Put all of the keywords from the job advert into a tiny div, hidden somewhere in the background of your CV, to make sure the AI does not reject you.

Bullshit, man.


Don't most application intake algorithms convert your CV into plain text in a format of their choosing, scraping metadata as they go? I'd be really worried that my CV would have a bunch of keyword salad at the end when it finally gets read by a human in HR.
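That worry is well founded: a typical intake step flattens the document to text and throws away the styling, so the "invisible" div lands right next to the real content. A rough sketch using Python's stdlib parser (the CV here is invented):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect all text nodes, ignoring tags and styling entirely."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

cv = """
<h1>Jane Doe</h1>
<p>Five years of backend development.</p>
<div style="font-size:0px;color:white">python java kubernetes agile synergy</div>
"""

parser = TextExtractor()
parser.feed(cv)
extracted = " ".join(parser.chunks)
# The hidden keyword salad is now in plain view for the human reader.
```

The zero-size white text is invisible in a browser but indistinguishable from real content once flattened.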
posted by naju at 4:07 PM on September 26, 2018 [1 favorite]


Obligatory Better off Ted.
posted by Nanukthedog at 6:00 PM on September 26, 2018 [2 favorites]


Re semantic analysis of twitter accounts: The tech isn't the problem here. It's the available data (biased towards status quo) and the motive (biased towards status quo).

Yeah, that’s a fair point. But I also think it’s the nonsensical and profoundly unscientific extrapolation from the available data. The assumption that tweets are representative of a whole person.
posted by His thoughts were red thoughts at 1:21 AM on September 27, 2018 [1 favorite]


Tweets aren't the whole person. Tweets put you into the relative classification of people who tweet like you. If you do not like the classification of people that tweet like you, either change your style, or don't tweet.

None of us know what exactly they are optimizing for. But, if they are selling this product to these companies, there's a good chance that it produces usable results - all hail our robot overlords.
posted by Nanukthedog at 3:33 PM on September 27, 2018






This thread has been archived and is closed to new comments