How to design AI that eliminates disability bias
January 27, 2020 10:37 AM

How to design AI that eliminates disability bias (Financial Times, Twitter link in case of paywall issues) — "As AI is introduced into gadgets and services, stories of algorithmic discrimination have exposed the tendency of machine learning to magnify the prejudices that skew human decision-making against women and ethnic minorities, which machines were supposed to avoid. Equally rife, but less discussed, are AI's repercussions for those with disabilities."

The Financial Times piece opens with a story about a chance encounter on the University of Pittsburgh campus between a student and a delivery robot:
When knee-high robots on wheels began delivering groceries at the University of Pittsburgh last autumn, Emily Ackerman was wary. Yet her first close-up encounter, as she was crossing a busy road, proved scarier than she expected.

Perched on a low kerb designed for wheelchair access, and blocking her path to the pavement, sat a robot that had ground to a standstill. In a panic, Ms Ackerman forced her wheelchair to climb the kerb and arrived adeptly, but painfully jolted as the traffic lights turned green.

The incident led to a university-ordered pause in testing, a review and some amendments to the robots' mapped routes. Then the robots returned. Ms Ackerman was shaken, and upset by a statement from Starship Technologies, the robots' owner, which responded to video footage by saying the company was “glad to see that Emily was able to travel past the robot without stopping.” This, she feels, “minimised” her distress and the episode's potential for harm.

As a doctoral student who uses artificial intelligence tools, she says: “It’s important that in the development of these technologies, disabled people aren’t put on the line as collateral.”
Further discussion of delivery robots taking over city sidewalks:
Delivery robots: a revolutionary step or sidewalk-clogging nightmare? (The Guardian)
Sharing a sidewalk with one of DoorDash’s delivery robots is a bit like getting stuck behind someone playing Pokémon Go on his smartphone. The robot moves a little bit slower than you want to; every few meters it pauses, jerking to the left or right, perhaps turning around, then turning again before continuing on its way.

These are the sidewalks of the future, technology evangelists promise. Autonomous delivery robots, once the exclusive purview of 1980s sci-fi movies, are coming to a city near you, with promises of reduced labor costs, increased efficiency and the reduction of cars.

But as robot fleets proliferate – Starship robots perform food deliveries for DoorDash and Postmates in Redwood City, California, and Washington DC, while Marble robots will begin making deliveries for Yelp Eat24 in San Francisco on Wednesday – the question none of these companies seems to want to answer is this: are these the sidewalks that we actually want?
Out of the Way, Human! Delivery Robots Want a Share of Your Sidewalk (Scientific American)
To gain public trust, these machines must demonstrate they can safely and unobtrusively share pedestrian spaces. Some U.S. cities have fairly empty streets and sidewalks, which might not see a pedestrian pass for long stretches of time. These paths could accommodate robots, says Renia Ehrenfeucht, chair of the Community and Regional Planning Department at The University of New Mexico in Albuquerque and co-author of the book Sidewalks: Conflict and Negotiation in Public Space. When the pavement gets more crowded, however, even robots rolling along at walking speeds will face challenges, which will get worse in U.S. cities with narrow sidewalks. “It’s actually really hard to navigate crowded sidewalks and not bump into people, and do it smoothly,” Ehrenfeucht says. “Until delivery robots are that skilled, if they could be, they will be disruptive.”
The FT piece pivots from dangerous robot encounters to discuss how AI applications that purport to minimize bias in hiring may introduce even more bias against disabled applicants:
The inability of AI to handle the unexpected also threatens to worsen the historic underemployment of disabled people, by robbing candidates of job opportunities, according to a report published in November by AI Now Institute at New York University. In particular, it highlights the popularity of remote video interviewing technologies, sold by HireVue and other providers to companies such as Unilever.

In combination with other assessments, the AI analyses candidates' speech patterns, facial expressions and tone of voice, and draws inferences about their employability. Yet critics note this risks disadvantaging applicants with disabilities, such as speech disorders, facial paralysis or even autism, that affect how people look and sound. In a statement, Unilever said "wherever a disabled candidate feels their disability may disadvantage them, they can contact a talent adviser and, wherever appropriate, they will be offered a direct interview". Yet, to benefit, people must declare their disability, which not all do because they fear being stigmatised.
Additional discussion of AI's potential to adversely affect job seekers:
Artificial intelligence will help determine if you get your next job (Vox)
Employment lawyer Mark Girouard says AI and algorithmic selection systems fall under the Uniform Guidelines on Employee Selection Procedures, guidance established in 1978 by federal agencies that guides companies’ selection standards and employment assessments.

Many of these AI tools say they follow the four-fifths rule, a statistical “rule of thumb” benchmark established under those employee selection guidelines. The rule is used to compare the selection rate of applicant demographic groups and investigate whether selection criteria might have had an adverse impact on a protected minority group.

But experts have noted that the rule is just one test, and Rieke emphasizes that passing the test doesn’t imply these AI tools do what they claim. A system that picked candidates randomly could pass the test, he says. Girouard explains that as long as a tool does not have a disparate impact on race or gender, there’s no law on the federal level that requires that such AI tools work as intended.
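Rieke's point — that a random screen can pass the four-fifths rule — is easy to see in code. A minimal sketch of the test, with entirely hypothetical group names and applicant counts chosen for illustration:

```python
# A minimal sketch of the four-fifths (80%) "rule of thumb" described above.
# Group names and the numbers used here are hypothetical.

def passes_four_fifths(selected, applicants):
    """Return True if every group's selection rate is at least
    four-fifths (80%) of the highest group's selection rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    highest = max(rates.values())
    return all(rate / highest >= 0.8 for rate in rates.values())

applicants = {"group_a": 100, "group_b": 100}

# A biased screen: 60% vs. 30% selection rates -> ratio 0.5, fails.
print(passes_four_fifths({"group_a": 60, "group_b": 30}, applicants))  # False

# A purely random screen selects each group at roughly the same rate,
# so it passes -- while measuring nothing about actual job fit.
print(passes_four_fifths({"group_a": 51, "group_b": 49}, applicants))  # True
```

Which is exactly the gap the article describes: the rule tests for disparate impact across the groups you measure, not for whether the tool's predictions are meaningful.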
Artificial Intelligence Poses New Threat to Equal Employment Opportunity (Forbes)
The bottom line is that HireVue’s algorithm is secret so the public has no way of knowing whether it discriminates against older workers, women, the disabled and minorities, etc. EPIC says HireVue refuses to provide job applicants with their assessment score.

EPIC charges that HireVue violates the Federal Trade Commission Act because it lacks a “reasonable basis” to support its claims and cannot be held accountable for the proper functioning of its secret algorithmic assessments. EPIC also disputes HireVue’s contention that it does not use facial recognition technology for identity recognition purposes, which has been found to be a violation of the FTC Act.

EPIC alleges HireVue’s platform also fails to meet minimal standards for AI-based decision-making set out in the AI Principles approved by the 36 member countries of the Organisation for Economic Co-operation and Development (OECD), including the U.S. These principles say everyone has a right to know the basis for an AI decision that concerns them, AI systems should be deployed only after an adequate evaluation of their risks, and institutions must ensure that AI systems do not reflect unfair bias.
The piece closes with a discussion of strategies to counteract AI's tendency to reinforce biases:
To limit harm, Meredith Ringel Morris, a computer scientist at Microsoft Research, wants companies to communicate the shortcomings of AI to customers. For some recreational purposes, a simple statement pointing out that an AI tool often misidentifies what it sees might be sufficient warning. But, when slip-ups could cost lives, the protections should be proportionate. "If the system has 95 per cent accuracy that the street is clear, is that enough accuracy to safely cross the road? I think the answer to those questions is unclear," she says. "The other thing that I think might be important is stronger consideration of when it is and isn't appropriate to build AI systems under the current limits of AI."

Other AI experts, such as Professor Noel Sharkey, are calling for pharmaceutical-style regulation and a moratorium on the use of decision-algorithms in situations that could have life-altering consequences. In Ms Ackerman's opinion, if not a ban, then at least a pause in the headlong rush into AI is woefully overdue. "Designing something that's universal is an extremely difficult task," she says. "But, getting the shorter end of the stick isn't a fun experience."
posted by tonycpsu (29 comments total) 33 users marked this as a favorite
 
I had missed that we were doing ground based delivery drones now JFC this is all so dumb.

And terrifying.
posted by PMdixon at 11:08 AM on January 27 [1 favorite]


On the plus side, this might encourage certain cities *coughBaltimorecough* to install and maintain actual usable sidewalks.
posted by Faint of Butt at 11:10 AM on January 27 [3 favorites]


Who asked for these delivery drones? They benefit essentially no one, except corporate CEOs

Like would they even be cheaper than human delivery people? (Why would they be?) What is the actual stated benefit??
posted by captain afab at 11:23 AM on January 27 [3 favorites]


I trust that these corporations are paying their fair share of taxes to maintain the infrastructure that they're benefiting from?
posted by Mogur at 11:25 AM on January 27 [9 favorites]


A century ago we ceded the roads to cars. Now we'll be ceding the sidewalks (and the roads) to robots. What are humans good for anyway?
posted by sjswitzer at 11:57 AM on January 27 [7 favorites]


That CityLab piece is goddamn horrifying. Especially since this appears to be one of those facets of technology that legislators are slow to respond to,* which means it's going to play out as a dystopian laissez-faire capitalist hellscape (but I repeat myself) whose primary victims are, as always, going to be minorities and the disabled.

I guess what I'm saying is, where are the grey-market portable EMP cannons when we need them? I guess I'd settle for a handheld 500-milliwatt green laser, but those run the risk of blinding people as well as optical sensors.

* Or possibly are actively incentivized NOT to respond to; cf. the thread last week about police departments playing dumb about harassment via the internet
posted by Mayor West at 11:57 AM on January 27 [3 favorites]


What is the actual stated benefit??

To identify and solve problems that autonomous vehicles might run into in the future. No one is going into this just to deliver Pad Thai to college dorms.

I trust that these corporations are paying their fair share of taxes to maintain the infrastructure that they're benefiting from?

I mean, I guess they are paying just as much as you and I are for that sidewalk.
posted by sideshow at 12:00 PM on January 27


One of the benefits of robot labor is the reduction or elimination of human labor. Humans are messy; they require time off for family and illness and a host of other things that cost a company but bring no corresponding — or at least no calculable — benefit. Humans make mistakes, but robots do not; any errors by a robot can be traced back to a person or persons responsible for data input or programming. Robots can work nearly 24/7, constantly generating revenue, which can recoup their cost quickly, and over their lifetimes they can actually cost less than human labor.

There are probably plenty more reasons that robot labor is preferred to human labor, some kinda evil, some quite benign, some even quite good, but they are specific to industry. An automatic fireman would reduce the risk to life in a fire. A robocop could take the place of humans for more mundane and repetitive policing tasks. An autolaborer wouldn’t be susceptible to RSIs or boredom, or spend valuable company time on Reddit or Metafilter.

Of course, those reasons hide the dystopian vision behind roboticism. What do you do with the mass of excess humans who have no jobs? But that’s a different question.
posted by drivingmenuts at 12:34 PM on January 27 [3 favorites]


Suppose we stipulate that: If the sidewalk can no longer function as a place to safely walk (see also: scooters) then there's a problem.
posted by sjswitzer at 12:35 PM on January 27 [8 favorites]


Of course, those reasons hide the dystopian vision behind robotocism. What do you do with the mass of excess humans who have no job, but that’s a different question.

But see, that's how we defeat the robuts. They put all humans out of work. The humans then have no money to spend on things. Companies then will not have any money and will have to decommission all of the robuts. Humans get their jobs back! Huzzah!

Of course, this "market correction" will last so long that by the time companies finally feel the ill effects of no humans working, there will be no humans left on the planet since they all starved to death. You know, after feasting on all of the CEO flesh first. Then, as you can plainly see, there will be no humans left on this planet at all.

With the humans and machines out of the way, fungus and cockroaches can finally rule the planet. Their long plans have finally paid off for them.
posted by NoMich at 12:47 PM on January 27 [3 favorites]


What are humans good for anyway?

Consumption, preferably in mass quantities.
posted by nubs at 3:02 PM on January 27 [3 favorites]


The HR hiring AI that judges facial expressions is so terrifying that I can't even comprehend it. Real live people are hard enough to interview with; their expressions unreadable, their own personal algorithms totally opaque, but to add a layer of even more unaccountable judgment on top of that? I don't think I'd ever be able to get a new job with something like that.
posted by mittens at 3:37 PM on January 27 [4 favorites]


This HireVue thing is terrifying as fuck especially given that it’s a known problem that digital camera tech does a horrible job with darker-than-pale skin AND it’s a proprietary blackbox. Just...what? Why even use it?
posted by zinful at 4:45 PM on January 27 [7 favorites]


I mean, I guess they are paying just as much as you and I are for that sidewalk.

Taking SF municipal taxes arbitrarily as the relevant thing to look at, corporations pay 0.5% tax, individuals pay 1.5%
posted by PMdixon at 5:08 PM on January 27 [9 favorites]


Oh great another piece of debris to yeet off the sidewalk so humans can use it.
posted by toodleydoodley at 6:19 PM on January 27


Where are these delivery robots going that they can enter buildings and actually deliver items? I mean, there's no indication they can climb stairs or open doors. Or is the idea that people come out to meet them? I can see usability problems with that idea, too: not everybody can rush outside without preparation; not everyone can convey a package inside.
posted by Joe in Australia at 8:11 PM on January 27 [2 favorites]


Taking SF municipal taxes arbitrarily as the relevant thing to look at, corporations pay 0.5% tax, individuals pay 1.5%

Prop C (your 0.5% example) is extra and just for homeless services. Businesses in SF actually pay a higher amount of tax than individuals, because their 1.5% tax is on “gross receipts”, not on income as it is for individuals.
posted by sideshow at 8:14 PM on January 27 [1 favorite]


Humans make mistakes, but robots do not - any errors by a robot can be traced back to a person or persons responsible for data input or programming.

Kinda laughed at this, to be honest. Good luck doing this kind of causal analysis on anything relying on ML.
posted by mhoye at 9:28 PM on January 27 [2 favorites]


See also - Predatory Predictive Policing.
posted by JohnFromGR at 3:13 AM on January 28 [2 favorites]


Maybe people will "kidnap" these robots and jailbreak 'em.
posted by Obscure Reference at 4:52 AM on January 28


Business in SF actually pay a higher amount of tax than individuals because their 1.5% tax is on “gross receipts”, not income like individuals.
posted by sideshow at 22:14 on January 27


It has to be gross because none of the companies we're talking about make any kind of GAAP profit! Thank you for correcting my lazy research on the rates, and helping me to realize that my actual issue is that none of these companies with billion dollar valuations and negative cash flow are anything other than money laundromats, and obvious, well known, flagrant money laundromats are scum and it makes me sick that they're tolerated and adulated and allowed to reshape the landscape and literally run people off the street. I literally live in a world defined by the power struggles between people laundering money through real estate and people laundering money through loss-making tech companies, and I'm profoundly disinterested in hearing that the mobsters paid for the sidewalks.
posted by PMdixon at 5:10 AM on January 28 [5 favorites]


Where are these delivery robots going that they can enter buildings and actually deliver items? I mean, there's no indication they can climb stairs or open doors.

If you're asking about the Starship robots specifically, no, they can't climb stairs, open doors, etc. They're pretty strictly lockboxes on wheels that trundle along the sidewalk and then wait outside of buildings. When they return to Starship, they just idle outside their own building until someone comes and collects them. (I see them fairly often when I'm out and about in that area. They're cute but also a menace. I wasn't at all surprised to hear they were blocking crosswalks.)
posted by Stacey at 5:45 AM on January 28


Wouldn't it be something if the food delivery robots ended up battling it out with the electric scooters for sidewalk supremacy?

Seriously though, I can think of another giant looming problem with AI... the autonomous vehicle designers aren't taking wheelchair users into account. Remember that video of the self-driving bus being tested in (I think) Philadelphia? No wheelchair access on that bus. No plans for an accessible autonomous van for the consumer market, either, even though it's the disabled and elderly populations who would benefit most from them. It's not a good idea to have the "we'll fix it later" mentality when these technologies require massive datasets. We should begin as we mean to continue.
posted by Soliloquy at 7:00 AM on January 28 [5 favorites]



I trust that these corporations are paying their fair share of taxes to maintain the infrastructure that they're benefiting from?

I mean, I guess they are paying just as much as you and I are for that sidewalk.


But I am paying for that sidewalk. My property tax bill goes directly to the municipality and my provincial tax bill goes to, among other things, financial grants from the provincial government to the municipality.

My question still stands - are the drone operators paying their fair share of the infrastructure cost? I hope they are, but I'm cynical, given that I'm also familiar with municipalities giving incentives to "job creators" and/or the wonderful things that a large company can accomplish with its own legal team.
posted by Mogur at 7:12 AM on January 28 [5 favorites]


Delivery robots should be required to be unicycles, to raise the bar on the robotics competence needed to deploy them... (and presumably they could then use the road shoulder instead?)
posted by kaibutsu at 7:41 AM on January 28


I'd like to throw out a recommendation for Janelle Shane's "You Look Like A Thing and I Love You" for a realistic and accessible look at machine learning and how predictive models recapitulate and amplify bias (and other garbage) in the training data.

I'm both mystified and angry that anyone would trust a predictive model, much less a proprietary one, to make hiring decisions. Of course it's going to amplify bias against people with disabilities. I don't see how including physical and behavioral features would do anything except make that worse. Now it has a whole new set of difficult to interpret features, from which it's going to build even less interpretable latent features that are certainly associated with chronic illness, disability, gender presentation, race, etc.

I liked that these articles pointed out some of the people/organizations that are working against this - I'd like to learn more about their work.

Thanks for this post.
posted by esker at 10:19 AM on January 28 [3 favorites]


To identify and solve problems that autonomous vehicles might run into the future. No one is going into this just to deliver Pad Thai to college dorms.

I'm sure there are companies that plan to achieve fully autonomous operation eventually, but the first "delivery robots" I saw pop up turned out to be piloted remotely by workers in Central America, and were essentially a gimmick meant to impress college students.
posted by atoxyl at 8:50 PM on January 28 [2 favorites]


As fate would have it, I had an encounter with one of these Starship droids while walking back from lunch today, and learned that they are apparently programmed with New Yorker "I'M WALKIN' HERE" logic, by which any foreign object outside of a three-foot radius may as well not exist. It makes a bee-line for whatever destination it has in mind, trusting its ability to react quickly to anything that gets in its way, and not grasping the understandable concern a robot veering toward pedestrians creates as it cruises toward them, even if they know it will eventually route around them.
posted by tonycpsu at 1:08 PM on January 31

