Fascism scales quite nicely, it turns out
September 7, 2022 6:20 AM

 
Pre-crime arrests will not be based on telepathy, but by computer.
posted by CheeseDigestsAll at 6:47 AM on September 7, 2022 [1 favorite]


Interesting article, I'll have to come back to it. I do like how they string together good, plain, familiar examples to make a broader point.
posted by gimonca at 7:11 AM on September 7, 2022


I dunno. The article seems to say that as a way to not spend money on services they’re … pouring a ton of money into trying to optimize those services?

The line between evil overlords and responsible management gets blurred sometimes.
posted by Tell Me No Lies at 7:21 AM on September 7, 2022 [3 favorites]


Tell Me No Lies: "The article seems to say that as a way to not spend money on services they’re … pouring a ton of money into trying to optimize those services?"

I read it as: in order to not spend money on services, they're investing a much smaller amount of money in technologies that allow them to separate people into those worth saving and those not worth saving, while being able to claim that it's 'objective' and that the fact that it replicates and reinforces structural inequalities in terms of race, immigration status, class, gender, etc. is just a surprising coincidence.
posted by signal at 8:19 AM on September 7, 2022 [5 favorites]


Robodebt Scheme (wikipedia)
The Robodebt scheme, formally Online Compliance Intervention (OCI), was an unlawful method of automated debt assessment and recovery employed by Services Australia as part of its Centrelink payment compliance program. Put in place in July 2016 and announced to the public in December of the same year, the scheme aimed to replace the formerly manual system of calculating overpayments and issuing debt notices to welfare recipients with an automated data-matching system that compared Centrelink records with averaged income data from the Australian Taxation Office.

...

Services Australia announced in September 2019 that expenditure on the Robodebt program was A$606 million while recouping A$785 million.
Federal Court links robodebt scheme with suicides, approves settlement
Justice Bernard Murphy said in his judgement today the scheme caused people financial hardship, anxiety, distress, and in some cases had led to suicides.

The controversial program, which was in operation between 2015 and 2019, saw more than 400,000 Australians pursued for welfare debts they didn't owe.

Today the court approved a $1.2 billion settlement between the Commonwealth and a group of claimants but no compensation was awarded to victims of the scheme.
In summary, the government spent over half a billion dollars developing a computer system to claw back a few pennies from the most disadvantaged Australians. They barely broke even, and now they owe $1.2 billion in a settlement, on top of the $720 million they had to pay back.
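
For anyone unfamiliar with how the scheme "calculated" these debts: it took a person's annual income as reported to the ATO, divided it evenly across 26 fortnights, and compared that smeared-out average against the actual fortnightly income people had declared to Centrelink. Anyone with irregular casual income gets a manufactured "overpayment". A toy sketch of the arithmetic (the taper rate below is an invented placeholder, not Centrelink's actual rules):

    # Toy illustration of Robodebt-style income averaging -- NOT the actual
    # Centrelink logic; the taper rate is an invented placeholder.
    FORTNIGHTS = 26
    TAPER = 0.5        # hypothetical: benefit reduced 50c per $1 of declared income

    # A casual worker: $3000 in each of two fortnights, nothing the rest of the year.
    actual = [3000, 3000] + [0] * (FORTNIGHTS - 2)
    annual_total = sum(actual)               # what the ATO sees: $6000
    averaged = annual_total / FORTNIGHTS     # ~$230.77, smeared over every fortnight

    # "Debt" = benefit the averaging model says was overpaid in the fortnights
    # where the person truthfully declared $0.
    false_debt = sum(TAPER * (averaged - declared)
                     for declared in actual if declared < averaged)
    print(f"manufactured debt: ${false_debt:.2f}")   # ~$2769, owed on honest reporting

The person declared everything correctly, and the computer still says they owe thousands.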

'Not correct' that robodebt caused suicides, former head of Human Services says
“We know that suicide is a very difficult subject, we know mental health issues are very difficult. We do not accept [there were deaths over robodebt].

“We have apologised for the hurt and harm but none of us can imagine what goes on in individuals’ lives.”

Two women, Kath Madgwick and Jennifer Miller, have separately alleged in several news reports that their sons, Jarrad Madgwick, 22, and Rhys Cauzzo, 28, took their own lives after receiving debt notices through the robodebt program.

Rachel Siewert, a Greens senator, has also said that she has been told of “five families (and there have been other media reports) who believe their family members’ suicides are connected to receiving a robodebt letter”.
Blood on their hands. They let this computer system sling out debt notices, many for tens of thousands of dollars, to half a million vulnerable Australians. If you are on unemployment benefits in Australia, you are below the poverty line. You are living on the fucking edge and a letter like this? From a fucking ROBOT THAT GOT THE FUCKING MATH WRONG?

Of course there were suicides. You can't treat people like this. Spread this story around, don't let it happen in your country.
posted by adept256 at 8:20 AM on September 7, 2022 [13 favorites]


Sort of surprised this didn't get into the British Post Office scandal, which more or less automated accusations of embezzlement against hundreds of perfectly innocent postal employees.

It predates modern advances in algorithmic oppression, but as a harbinger it's hard to beat.
posted by BungaDunga at 8:43 AM on September 7, 2022 [13 favorites]


The article is kind of all over the place and relies mostly on anecdata rather than systematic research (a bit ironic, I suppose; I found this to be true of Shoshana Zuboff's book Surveillance Capitalism as well). However, I do believe that opaque AI systems making real-world decisions are analogous to the world of crypto-- full of hype, empty promises, scams, and overall a destructive force. There may be hope for AI models that are fully transparent and explainable, but even then, deploying them in the most critical parts of our social infrastructure (legal, health) must be done with the utmost caution. The private sector of course will try to remain as opaque as possible-- I mean, just the title of the Airbnb patent mentioned in the article is chilling: Determining trustworthiness and compatibility of a person.
posted by gwint at 9:00 AM on September 7, 2022 [1 favorite]


The line between evil overlords and responsible management gets blurred sometimes.

The greatest trick the Devil ever pulled was convincing the world they're not the same thing.
posted by Kadin2048 at 8:33 PM on September 7, 2022 [2 favorites]


Anyway, on the article: I think its major flaw is that it gives "AI" way too much credit for being An Actual Important Thing, and not the vague buzzword du jour that it actually is.

If you see a system that's being sold as "AI-powered" or "AI-enabled", particularly if it's being paid for at public expense, the immediate concern should be that the buyer is more than likely getting fleeced.

Just like "blockchain", "AI" is a term so vague and so abused that it can mean any of many technologies, from the mundane to the insane, and so the range of stuff you can stick it on is virtually unlimited. Is statistical regression "AI"? Some software vendors apparently think so.

I challenged some of my CompE friends to define "AI" coherently in one sentence. The best one I got was "it's anything that uses algorithms you don't understand."

And that's where the danger really lies: if you let "AI" in the door without further explanation, you're basically allowing outcomes that you're not capable of adequately explaining.

Like the ADE 651, it's useful because it gives the people in power an excuse, and an apparent rationalization, for whatever they want to do already. Not because it's good at what it claims to do, but because it isn't.
posted by Kadin2048 at 9:07 PM on September 7, 2022 [1 favorite]


gwint: The article is kind of all over the place and relies mostly on anecdata rather than systematic research (a bit ironic, I suppose; I found this to be true of Shoshana Zuboff's book Surveillance Capitalism as well). However, I do believe that opaque AI systems making real-world decisions are analogous to the world of crypto-- full of hype, empty promises, scams, and overall a destructive force

Take them as instructive examples and note that the UK is not currently setting examples for other countries to follow.

Kadin2048: I challenged some of my CompE friends to define "AI" coherently in one sentence. The best one I got was "it's anything that uses algorithms you don't understand."

Can you spot a problem with... "ML distils patterns in a training data set into something which 99.9% of the time matches the patterns of the data set; AI uses multiple ML approaches to respond to input outside the controlled, tidied-up training data set" ..?

I like the argument made in the conclusion: if AI tools are going to create exceptional conditions to remove rights from each of us, we have to grow communities, find consensus and collaborate so that civil existence is not fractured by the people who would use AI tools to divide us and to conquer us.
posted by k3ninho at 12:37 AM on September 8, 2022 [1 favorite]


Can you spot a problem with... "ML distils patterns in a training data set into something which 99.9% of the time matches the patterns of the data set; AI uses multiple ML approaches to respond to input outside the controlled, tidied-up training data set" ..?

Several things:

(1) There are common approaches frequently lumped in commercially under "ML" that do not require explicit training, e.g. various "statistical" approaches, such as tuned adaptive filters, particularly (but not always) the more sophisticated approaches like Volterra, kernel adaptive, etc.

(2) "Three nines" (99.9%) would be a reasonable target success rate for some tasks, dismal for others, unachievable for more still—this is true of any fixed value you pick.

(3) There are non-"ML" approaches that also require "training"/sample data in one form or another, in the sense that you need to know the desired output for a given input in order to tune various parameters of the algorithm / transfer function / whatever, so the presence or absence of training doesn't seem particularly indicative either way.
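
To illustrate (1), here's a least-mean-squares (LMS) adaptive filter in toy form: it adapts its weights sample-by-sample from the signal itself, with no separate labeled training set anywhere, yet this family of techniques routinely gets marketed under the "ML" banner (the signal and step size below are arbitrary toy choices):

    import numpy as np

    rng = np.random.default_rng(0)
    n_taps, mu = 4, 0.05                 # filter length, LMS step size
    w = np.zeros(n_taps)                 # filter weights, adapted online

    # Unknown system the filter will converge toward (it never sees these directly).
    true_w = np.array([0.5, -0.3, 0.2, 0.1])

    x = rng.standard_normal(500)         # input signal
    for k in range(n_taps, len(x)):
        u = x[k - n_taps:k][::-1]        # the most recent n_taps samples
        d = true_w @ u                   # desired output from the unknown system
        e = d - w @ u                    # instantaneous error
        w += mu * e * u                  # LMS weight update -- no training phase

    print("estimated weights:", np.round(w, 3))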

The problem is, even if you can come up with an academically palatable distinction, there are certain to be commercial products that don't conform, because both "machine learning" and "artificial intelligence" are mushy terms thoroughly abused by marketing departments. (I'd argue that's especially true of "AI", which has a somewhat checkered history going back to the 1950s.) Also notable is that the further you drill down into a particular algorithmic approach, the less you'll hear about "machine learning" and "artificial intelligence": they are terms used largely as generalities, and the "black box" vagueness is intrinsic to them.

I have no particular issue with using any particular approach to solve a specific problem, if it's suitable and makes sense to do so, whether that's a simple PID feedback loop or a brain-meltingly complex support-vector machine.
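
(For the "simple" end of that spectrum: a complete discrete PID loop fits in a dozen lines, and you can explain every term in it. The gains below are placeholders, not tuned for any real system.)

    # Minimal discrete PID controller -- gains are illustrative placeholders.
    def make_pid(kp, ki, kd, dt):
        integral, prev_error = 0.0, 0.0
        def step(setpoint, measured):
            nonlocal integral, prev_error
            error = setpoint - measured
            integral += error * dt                  # accumulated error over time
            derivative = (error - prev_error) / dt  # rate of change of error
            prev_error = error
            return kp * error + ki * integral + kd * derivative
        return step

    pid = make_pid(kp=1.2, ki=0.1, kd=0.05, dt=0.1)
    output = pid(setpoint=10.0, measured=8.5)  # control signal for this timestep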

What I object to is the hand-waviness of the terminology, but even more the demand—implicit in many large organizations' recent strategies, from the British government to the US DoD—that makes the use of sophisticated approaches the goal rather than a means to a specific end.

All that said, I don't disagree with your conclusion: "if AI tools are going to create exceptional conditions to remove rights from each of us, we have to grow communities, find consensus and collaborate so that civil existence is not fractured by the people who would use AI tools to divide us and to conquer us."

I would just suggest, further and more specifically: if you are involved in any organization, public or private, and hear someone talking about how you need to "use AI" or "do AI" or implement "AI-assisted [whatever]", you should (assuming that just throwing a brick in their general direction is not an option) demand to know specifically:

- What techniques?
- If training data is required, who provides and labels it?
- Who defines success?
- Why is it likely to be superior to existing approaches/processes?
- How does it handle edge cases or exceptions that don't fit a standard pattern or expectations?
- How much does it cost, and is the cost of that exception-handling factored in?
- And maybe most importantly: who specifically will be empowered to override the machine's recommendation or decision if they think it's incorrect?

Be extremely suspicious of—I would lean towards "always oppose, categorically and maximally"—any system that makes a machine the final arbiter of a decision. There should be no situation where "computer says no" but a person can't say "yes" instead.
posted by Kadin2048 at 10:51 PM on September 11, 2022 [1 favorite]



