STET
October 16, 2018 9:49 AM
STET is a new short story by Sarah Gailey, written “entirely out of spite” and published online by Fireside Magazine. Don’t skip the footnotes.
Is either the editor or the writer also the car?
posted by GuyZero at 9:59 AM on October 16, 2018 [3 favorites]
Damn.
posted by JohnFromGR at 10:07 AM on October 16, 2018
Well that hit like a ton of bricks. Excellent use of footnotes for telling the whole story.
posted by Aya Hirano on the Astral Plane at 10:09 AM on October 16, 2018 [5 favorites]
Ooooooooooof.
There are a lot of engineers who need to read that.
posted by Making You Bored For Science at 10:13 AM on October 16, 2018 [5 favorites]
I can think of one in particular at the moment.
posted by Holy Zarquon's Singing Fish at 10:23 AM on October 16, 2018 [4 favorites]
From the Twitter convo:
I HAD TO WRITE IT, A HET DUDE SAID I COULDN'T
I'm so sorry no I'm not
Also, Gailey says "STET will be in the quarterly with gorgeous, handwritten annotations."
posted by Etrigan at 10:42 AM on October 16, 2018 [8 favorites]
this is unbearably good.
"Do you know how long it took me to learn to read it? Nine and a half months, which is some kind of joke I don’t get."
holy shit.
posted by Kybard at 10:58 AM on October 16, 2018 [5 favorites]
"Do you know how long it took me to learn to read it? Nine and a half months, which is some kind of joke I don’t get."
holy shit.
posted by Kybard at 10:58 AM on October 16, 2018 [5 favorites]
Yeah, the format is awesome. Which kind of hides the fact that it's a sanctimonious and pretty empty restatement of the trolley problem. Implying that 'the engineers' need to be educated about the ethics of the situation at a level this basic is both insulting and inaccurate, Making You Bored For Science... And if this story included any attempt to answer the question, or to formulate relevant ethics, things might be different, of course. But as it is, really, it's just this
[Which doesn't mean it's not an important thing to think about. I just can't see how this advances anything but narrative style.]
posted by kleinsteradikaleminderheit at 11:00 AM on October 16, 2018 [7 favorites]
Which kind of hides the fact that it's a sanctimonious and pretty empty restatement of the trolley problem.
I don't think so; to me it is, among other things, about the interplay between writer and editor vis-a-vis acceptability/respectability politics; the socially enforced limitations of certain contexts of communication; the stigma around anger as a response and outward display of grief; etc. that's just the stuff that fizzles in my head minutes after reading it
I don't read it as mere restatement of the trolley problem, which I find as ultimately banal as you do; a hypothetical enactment thereof is baked into the premise, merely the backbone upon which much more interesting emotional work is done
posted by Kybard at 11:09 AM on October 16, 2018 [27 favorites]
Implying that 'the engineers' need to be educated about the ethics of the situation at a level this basic is both insulting and inaccurate, Making You Bored For Science...
Citation needed.
posted by corb at 11:18 AM on October 16, 2018 [15 favorites]
the stigma around anger as a response and outward display of grief
This read to me less as a re-packaging of a thought exercise and more like the exact opposite; that it eschewed the purely intellectual thought game of the trolley problem for how people actually respond to grief. It defies the Pure Vulcan Logic of how the trolley problem operates by pointing out that none of the justifications for one choice or the other are going to make anyone feel any better in the event they lose someone to it.
posted by Aya Hirano on the Astral Plane at 11:21 AM on October 16, 2018 [23 favorites]
I can't believe I'm saying this, but as a parent of a small child recently getting through some serious health scares, I wouldn't mind a trigger warning for "CONTAINS REFERENCES TO DEATH OF SMALL CHILDREN". god knows there's TW postings for literally everything else on here.
I could have done without that story today. it was emotional, I'll give it that. ouch.
posted by EricGjerde at 11:26 AM on October 16, 2018
Well, and I think also it said some interesting things about how society actually values things compared to what it claims to value. Reading the back-and-forth in the footnotes, it talks about the AI learning process including social media/eyeviews - how the AI learned what society valued from the actions of other people rather than what they said their priorities were. It kind of sets up this reversal in terms of 'social media pictures of kids vs random animals' in an interesting and neat way. The other parent, her editor, making playdates with her own child, but not coming to the funeral. The implication that the deaths of engineering are the deaths we tolerate.
Honestly I'm really excited about the author's further work! Thanks for sharing.
posted by corb at 11:27 AM on October 16, 2018 [6 favorites]
I wouldn't mind a trigger warning for "CONTAINS REFERENCES TO DEATH OF SMALL CHILDREN". god knows there's TW postings for literally everything else on here.
The story begins with "CONTENT NOTE: This story contains references to the death of a child."
posted by Aya Hirano on the Astral Plane at 11:29 AM on October 16, 2018 [10 favorites]
Eric, the story itself opens with "CONTENT NOTE: This story contains references to the death of a child."
No harm in adding the warning to the FPP as well, though.
posted by Holy Zarquon's Singing Fish at 11:29 AM on October 16, 2018
Anna, I’m concerned about subjectivity intruding into some of the analysis in this section of the text. I think the body text is fine, but I have concerns about the references. Are you alright? Maybe it’s a bit premature for you to be back at work. Should we schedule a call soon? — Ed.
STET — Anna
----
stet - verb
1. let it stand (used as an instruction on a printed proof to indicate that a correction or alteration should be ignored).
At first I thought STET was an acronym, but then I realized it was shouted via text.
posted by filthy light thief at 11:35 AM on October 16, 2018 [8 favorites]
13 — Per Foote, the neural network training for cultural understanding of identity is collected via social media, keystroke analysis, and pupillary response to images. They’re watching to see what’s important to you. You are responsible.
That part sent a chill down my spine.
posted by officer_fred at 12:05 PM on October 16, 2018 [11 favorites]
This is so good.
posted by OrangeDisk at 12:11 PM on October 16, 2018
That part sent a chill down my spine.
It's already happening (see also: Amazon's abandoned ML tool for weeding candidates, which became sexist because they fed it sexist data.) The problem with algorithms is that they too often have biases baked in, and people ignore this.
posted by NoxAeternum at 12:17 PM on October 16, 2018 [12 favorites]
The exact duration of bereavement leave, which is another kind of joke that I don’t think is very funny at all, Nanette in HR.
holy shit.
posted by We put our faith in Blast Hardcheese at 12:19 PM on October 16, 2018 [7 favorites]
Is there any work being done to actually answer these sorts of considerations. It's uncomfortable and nobody likes it, but there is a relative value to different forms of life and AI is where answers to the trolley problem stop being hypothetical and start needing to be hammered out. It's important to have this stuff laid out for programmers as they continue making these technologies. Letting it form organically or progressively as bad shit happens is not ideal. However disgusting one might find trying to decide if Ursula or woodpecker dies in a situation, it is something a program will need to be able to determine. I assume companies would like to not be liable for every death, same for car-owners. Seems like a standard and universal system or metric needs to be laid out so anyone programming these things has a clear set of guidelines to follow and adhere to, for lots of good reasons.
posted by GoblinHoney at 12:46 PM on October 16, 2018
Patrick Lin, at Cal Poly SLO, has a number of good takes on the ethics of autonomous vehicles; his most striking observation is that given that autonomous vehicles will be in situations where they will crash, that any crash-optimization algorithm is basically also a targeting algorithm.
In the work companies are doing for this, I rather suspect the machine learning algorithms are so complex that simply no one knows -- and no one is capable of knowing -- what vehicles are being trained to do. On streets. With people.
posted by Homeboy Trouble at 12:51 PM on October 16, 2018 [10 favorites]
Goddamnit MetaFilter, that hit me right in the gut.
(People complaining about the trolley problem: you are missing the point. This piece is about bereavement. Sometimes the footnotes are the only part of the story that matters. Sometimes the footnotes are not so conspicuous. Sometimes there are no footnotes.)
(Also, very bittersweet that the editor is reaching out and the one who could use that helping hand won't take it. This too is real.)
posted by sjswitzer at 12:53 PM on October 16, 2018 [7 favorites]
I would say it's about bereavement and the irreducibly fraught ethics of creating machines that must, as part of their operation, choose who lives and who dies.
posted by Holy Zarquon's Singing Fish at 12:56 PM on October 16, 2018 [8 favorites]
I think I understand the gist of the story but there’s a part that I’m having a difficult time parsing.
Per Foote, the neural network training for cultural understanding of identity is collected via social media, keystroke analysis, and pupillary response to images. They’re watching to see what’s important to you. You are responsible.
How long did you stare at a picture of an endangered woodpecker vs how long did you stare at a picture of a little girl who wanted a telescope for her birthday? She was clumsy enough to fall into the street because she was looking up at the sky instead of watching for a car with the ability to decide the value of her life. Was that enough to make you stare at her picture when it was on the news? How long did you look at the woodpecker? Ten seconds? Twelve? How long? STET — Anna
I don’t quite understand this exchange. Is she saying that if you look at a picture of a woodpecker longer than a child, the autonomous car crashes into the child? Or am I totally misunderstanding this?
Absolutely love the story, even though (or because) I am having a bit of a hard time understanding the storyline.
posted by gucci mane at 12:57 PM on October 16, 2018
I attended a presentation by a professor working in machine learning. The process he uses to develop the machine learning involves a set of inputs, a desired output, and a system which adjusts the weighting of the various inputs. That system processes the inputs over multiple iterations to get closer to the desired output, so that by the time the weightings are finally computed, it is virtually impossible to determine exactly how those individual values were derived.
In other words, machine learning generates a black box that has learned to generate outputs based on inputs. Value judgements need to happen before the input, or after the output. As Homeboy Trouble stated, nobody knows what exactly the machines are learning. Sarah's brilliant story has it completely correct.
posted by Jefffurry at 1:04 PM on October 16, 2018 [7 favorites]
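A minimal sketch of the training loop described in the comment above, with everything in it invented for illustration (the example data, the learning rate, the number of iterations): some inputs, a desired output, and repeated small adjustments to the weighting of the inputs. This is not any real system's code, just the shape of the process.

```python
import random

random.seed(42)

# Hypothetical training pairs: feature vectors -> desired output.
# None of this corresponds to any real vehicle or vendor data.
examples = [([0.2, 0.9, 0.1], 1.0),
            ([0.8, 0.1, 0.3], 0.0),
            ([0.4, 0.4, 0.9], 1.0)]

weights = [random.uniform(-1.0, 1.0) for _ in range(3)]
learning_rate = 0.1

for _ in range(10_000):
    features, target = random.choice(examples)
    prediction = sum(w * x for w, x in zip(weights, features))
    error = prediction - target
    # Nudge each weight in the direction that reduces the error a little.
    weights = [w - learning_rate * error * x
               for w, x in zip(weights, features)]

# The loop settles on weights that reproduce the examples, but the numbers
# themselves carry no human-readable explanation.
print([round(w, 3) for w in weights])
```

There is no step at which a reason for any individual weight ever existed, which is the black-box point: value judgements can only live in the choice of inputs and in what you do with the outputs.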
I like this creation a lot, and for me it isn't about the trolley problem or concerns about AI per se. Rather, I think this is about absurdity and grief and the power of stubborn pettiness and not just letting it go. The question of 'endangered bird vs. small child: who do I kill?' seems like a canard; the technobabble does not, for me, obscure the timelessness of the story. I can imagine almost the same story could have taken place during the plague years. If it has anything to say that is particular to our era, I would say it is to comment on the manners of our time and the smart but vapid nature of the techno ethos, which does seem a little like a comfortless religion. That doesn't seem that far from an embarrassed clergy unable to answer in a compelling way a mother's grief and why her child was murdered by an all-powerful god.
On the other hand I suppose it does sort of illuminate what it would mean if an intelligence decided what to do on the basis of our behavior rather than what we say we want. Which is always a little startling. It doesn't take too much probing before a lot of our moralizing seems to be much more in the word than the deed.
posted by Pembquist at 1:20 PM on October 16, 2018 [1 favorite]
I don’t quite understand this exchange. Is she saying that if you look at a picture of a woodpecker longer than a child, the autonomous car crashes into the child? Or am I totally misunderstanding this?
She's pointing out that ML systems will determine their values from the values in the data they are trained on, not the values we want them to have. Again, we just had a real world example of this, when Amazon's abandoned ML recruitment tool was revealed - the system was trained on resumes of hired Amazon employees, which skewed male, so the system picked up on that bias and codified it.
posted by NoxAeternum at 1:25 PM on October 16, 2018 [10 favorites]
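The failure mode described above fits in a few lines. This is a toy sketch only, not Amazon's tool: a scorer trained on invented, skewed hiring outcomes ends up weighting group membership positively even though nobody programmed that preference anywhere.

```python
import math
import random

random.seed(0)

# Hypothetical history: (scaled experience, indicator for group A, was hired).
# The bias lives entirely in the outcomes: group A was favored regardless of merit.
history = []
for _ in range(1000):
    experience = random.uniform(0.0, 10.0)
    group_a = random.random() < 0.5
    hired = (experience > 5 and group_a) or experience > 8
    history.append(([experience / 10.0, 1.0 if group_a else 0.0],
                    1.0 if hired else 0.0))

weights, bias, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(30):                     # a few passes of logistic-regression SGD
    for features, label in history:
        z = weights[0] * features[0] + weights[1] * features[1] + bias
        prediction = 1.0 / (1.0 + math.exp(-z))
        error = prediction - label
        weights = [w - lr * error * x for w, x in zip(weights, features)]
        bias -= lr * error

# No line says "prefer group A"; the positive weight came from the labels.
print("experience weight:", round(weights[0], 2),
      "| group A weight:", round(weights[1], 2))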
Okay, I'm dumb, so please correct me on this:
Anna's daughter Ursula was killed by a self-driving car which swerved to save an endangered woodpecker. 9.5 months later she's back at work and writing an article on this sort of thing and Ed is Not Getting It?
(Also, very bittersweet that the editor is reaching out and the one who could use that helping hand won't take it. )
Ed is Not Getting It. Anna is not going to be the slightest bit comforted by him reaching out to "help." He's only gonna make her madder.
posted by jenfullmoon at 1:26 PM on October 16, 2018 [4 favorites]
I ended up reading the "body text" (the small paragraph at the bottom of all the footnotes and comments) four times. First time before I had read anything, then after reading the footnotes, again after reading the comments, and a final time after I had let the story sink in. Each time it read completely differently to me. What a marvelously constructed story.
posted by Kattullus at 1:30 PM on October 16, 2018 [3 favorites]
I had to read it twice to fully get it.
The point about the woodpecker is that the AI system is weighing inputs that you don't necessarily consider inputs, like how long you looked at a picture of the bird. Is that the deciding factor in how the AI made its decisions? No - but it would be, upon the post-accident deconstruction of its algorithms, among the list of factors. And the length of viewing time would be a simple factoid that a human could synthesize or re-contextualize as representative of how the AI 'thought' or 'learned' or deviated from its programming.
In the comment for #11 they write: "weighted decision matrix they used to seed the Sylph AI" which is not babble - that is how you describe self-taught AI systems. Same for the "neural network training for cultural understanding of identity". The timeless story I see is Frankenstein monster.
posted by zenon at 1:32 PM on October 16, 2018 [2 favorites]
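To make the point concrete, here is roughly the bare shape of a "weighted decision matrix" of the kind the story's footnotes name. Every factor and every number below is invented for the example; the story only tells us the kinds of inputs (protection statutes, endangered status, how long people looked at pictures of things), not the weights, and this is not the fictional Sylph AI.

```python
# All numbers invented for illustration.
candidates = {
    "child in the road": {
        "endangered_status": 0.0, "protection_acts": 0, "mean_gaze_seconds": 3.0,
    },
    "Carter's Woodpecker": {
        "endangered_status": 1.0, "protection_acts": 4, "mean_gaze_seconds": 10.0,
    },
}

# Seed weights: whoever set these decided, implicitly, what counts as value.
weights = {"endangered_status": 5.0, "protection_acts": 2.0, "mean_gaze_seconds": 0.5}

def preservation_score(factors):
    """Higher score means the system tries harder not to hit this one."""
    return sum(weights[name] * factors[name] for name in weights)

scores = {name: preservation_score(f) for name, f in candidates.items()}
print(scores)                     # visible only in a post-accident deconstruction
print("spared:", max(scores, key=scores.get))
```

No line says "prefer the bird"; the preference falls out of the seeded weights and the chosen inputs, and only the after-the-fact deconstruction of the scores shows which factors dominated.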
Ed is Not Getting It. Anna is not going to be the slightest bit comforted by him reaching out to "help." He's only gonna make her madder.
I didn't see anywhere that implied the Editor is male. They have a partner (presumably) named Brian and a child named Nathan, but that's all the info we get. Anna notes that Ursula's best friend's mother didn't show up for her funeral, in an accusatory tone that could imply that the editor is this same mother, but it's not 100%.
Really powerful story and innovative format. Thanks for posting!
posted by numaner at 1:37 PM on October 16, 2018 [5 favorites]
Fireside Magazine's quarterly will include this piece with the annotations as handwritten. It looks really great.
posted by numaner at 1:44 PM on October 16, 2018 [5 favorites]
Yes, I assume Nathan was Ursula's best friend and the editor is thus the best friend's mother who did not come to the funeral. Ursula was killed by a self-driving car. The self-driving car was trained with all sorts of data about people's behaviour (including how long they look at things, from eyeball-tracking cameras). So when it had to decide to hit a woodpecker or hit the child, its AI data told it that people cared more about woodpeckers than children (presumably they spent more time staring at woodpecker pics than kid pics) and thus it assumed the woodpecker was more valuable and killed the child.
posted by If only I had a penguin... at 1:45 PM on October 16, 2018 [7 favorites]
I mean, just consider this aside on a modified Google self driving Prius:
the Prius accidentally boxed in another vehicle, a Camry.... speeding down the freeway side by side. The Camry’s driver jerked his car onto the right shoulder. Then, apparently trying to avoid a guardrail, he veered to the left; the Camry pinwheeled across the freeway and into the median....
The Prius regained control and turned a corner on the freeway, leaving the Camry behind. Levandowski and Taylor didn’t know how badly damaged the Camry was. They didn’t go back to check on the other driver or to see if anyone else had been hurt. Neither they nor other Google executives made inquiries with the authorities. The police were not informed that a self-driving algorithm had contributed to the accident.
Yea, the engineers and developers need to work on understanding the implications of what they are creating.
posted by zenon at 1:48 PM on October 16, 2018 [25 favorites]
On thinking about this story further, one of the wonderful aspects is how much it relies on understanding through nuance and implication and reading between the lines; these are the exact things that cannot be replicated by algorithms.
posted by Homeboy Trouble at 2:05 PM on October 16, 2018 [16 favorites]
zenon: The timeless story I see is Frankenstein monster.
What Frankenstein's creature can really tell us about AI – Eileen Hunt Botting for Aeon Essays
Godmother of intelligences
Mary Shelley foresaw that artificial intelligence would be made monstrous, not by human hubris but by human cruelty
Mary Wollstonecraft Shelley’s 200-year-old creature is more alive than ever. In his new role as the bogeyman of artificial intelligence (AI), ‘the monster’ made by Victor Frankenstein is all over the internet. The British literary critic Frances Wilson even called him ‘the world’s most rewarding metaphor’. Though issued with some irony, this title suited the creature just fine.
From the editors of The Guardian to the engineers at Google have come stiff warnings about AI: it’s a monster in the closet. Hidden in computer consoles and in the shadows of the world wide web, from Moscow to Palo Alto, AI is growing stronger, faster, smarter and more dangerous than its clever programmers. Worse than the bioengineered and radiated creatures of Cold War B-movies, AI is the Frankenstein’s creature for our century. It will eventually emerge – like a ghost from its machine – to destroy its makers and the whole of humanity.
Homeboy Trouble: On thinking about this story further, one of the wonderful aspects is how much it relies on understanding through nuance and implication and reading between the lines; these are the exact things that cannot be replicated by algorithms.
This has many marks of really impressive writing and creativity, for this and the fact that we can argue over the exact meaning or focus of the piece. Zonker, thanks for sharing it!
posted by filthy light thief at 2:18 PM on October 16, 2018 [7 favorites]
What I read was an angry screed about tone policing rage and grief. How society won’t allow the truths she says to be said. How this insane crime in which we are all culpable may only be spoken of in the pseudo-objective terms of a scientific analysis so that we don’t have to feel bad. How displaying any anger or grief at it makes other people so uncomfortable they tell you you have obviously not spent enough time grieving on your own, and you should step away from public discourse until you’re over it. You should stop forcing your disquieting emotional truths on people and only come back when you are prepared to sound objective and neutral.
And this is one woman who’s had it with being silenced! She uses a faux objective piece of writing to force people to listen and she subverts the format repeatedly. And she refuses to let anyone edit away her incandescent accusations to make the text more palatable.
posted by Omnomnom at 2:56 PM on October 16, 2018 [14 favorites]
I like the way that unfolded. (I was reading each of the footnotes as I read through)
posted by rmd1023 at 3:00 PM on October 16, 2018
I keep wondering if the AI knew how probable it was that the bird could fly away before impact.
posted by rewil at 3:11 PM on October 16, 2018 [1 favorite]
For me, the woodpecker thing tore a little hole of absurdity in this story through which all of the pathos was able to escape.
posted by prize bull octorok at 3:24 PM on October 16, 2018 [2 favorites]
There's probably a lot more kids in suburbia than there are critically endangered woodpeckers. However, this whole story wouldn't happen if there were less fucking cars.
posted by kzin602 at 3:34 PM on October 16, 2018 [2 favorites]
The reason it saved the woodpecker over the girl isn't because people looked at pictures of woodpeckers a lot. It's because (as per paragraph 14) its data set included...
... the World Wildlife Foundation’s endangered species list, and the American Department of the Interior’s list of Wildlife Preservation Acts, four of which were dedicated to the preservation of Carter’s Woodpecker.
and so weighted the woodpecker as *super important*.
posted by DangerIsMyMiddleName at 5:53 PM on October 16, 2018 [5 favorites]
I think it's entirely possible to engineer self driving cars to avoid human casualties and the trolley problem altogether - the problem is no one will buy a car that drives so cautiously and slows randomly for no reason apparent to them. I think the true horror of self driving cars is going to be that we will have very, very good data on exactly how many people humans are willing to sacrifice for the sake of their convenience.
posted by Zalzidrax at 6:30 PM on October 16, 2018 [6 favorites]
If such cars come to pass, they will not be trained by social media. They will be trained by corporate lobbyists and insurance companies.
There is only so much weighting you can do before you run out of room to weigh things. The amount each kind of thing can "weigh" will (if not immediately) be legislated, and then it will potentially be a dumpster fire where groups fight over how much their pet thing counts in the algorithm.
And that's before the "rootkit" cars which can be loaded with the "kill all ____ people first" mod, or we discover that $make has inserted its own clause or two where $make $model cars are accorded a preference over all other brands.
Good story. I think the hand-written version is way more powerful, though.
posted by maxwelton at 7:25 PM on October 16, 2018 [2 favorites]
I am reminded of Stig Dagerman's 1948 classic To Kill a Child. This story is almost the total opposite of it in construction, the one initially obfuscated, the other's ending telegraphed from the first paragraph.
And they both end in devastation and despair, and the useless knowledge that thoughtless decisions will ripple outward.
posted by ivan ivanych samovar at 8:22 PM on October 16, 2018 [4 favorites]
Is there any work being done to actually answer these sorts of considerations.
Yes! This is a really fascinating area right now, and lots of people are interested in and actively pursuing problems around algorithmic fairness. Here are a few things to keep an eye on:
- Algorithmic Fairness and Opacity Working Group at Berkeley
- Recent publications on CS>AI Arxiv
- Google publications on machine learning fairness
- AI Now (their 2018 symposium is currently happening!)
- Microsoft FATE: Fairness, Accountability, Transparency, and Ethics in AI
- Harvard's Berkman Klein Center for Internet & Society, Ethics and Governance of AI
There is another devastating aspect not mentioned above: in her grief, the writer used her expertise to discover exactly why the self-driving car killed Ursula. After an FOIA request failed to reveal the cause - or anything much in particular - she spent months learning how to decipher the decision matrix and discovered the woodpecker glitch. Something nobody anywhere had ever programmed or decided: a murder by algorithm.
The paper is an explosive investigative whistle-blower's report, and a product of anger and grief. But stripped of emotion into the driest academic language.
posted by Enkidude at 10:38 PM on October 16, 2018 [12 favorites]
This has a very similar setup to one of my favorite jokes: https://www.reddit.com/r/Jokes/comments/3vkf7o/a_guy_is_caught_by_a_ranger_eating_a_bald_eagle/
But in the joke version, a black box trained on society's biases (that is, a federal judge) is making the decision.
posted by novalis_dt at 7:32 AM on October 17, 2018 [2 favorites]
Sarah Gailey confirmed that the editor is in fact Ursula's best friend's mother, who didn't attend the funeral.
What I find stunning about this (as others have noted upthread) is the simple use of "STET" as a stand-in for "I, a female voice speaking her truth, will not be silenced." It's so effective.
posted by Ben Trismegistus at 7:51 AM on October 17, 2018 [8 favorites]
That was my understanding as well Enkidude - that whole bit about learning to read the AI taking a gestational term, and that the woodpecker is a glitch. The accusation and very personal "They’re watching to see what’s important to you. You are responsible." tone makes me wonder who the intended 'you' is, but clearly shifts the blame from the AI singularly to this AI as reflection of 'your' priorities/interests/values.
The use of the formal summary of a technical report is not something widespread in IT yet, but is common in the transportation industry. It reminds me of the Air France Flight 447 Final Report, a summary of which runs all of three paragraphs, and this is the language of how the accident is described: Inappropriate pilot inputs led the jet to exit its flight envelope after the autopilot disconnected because the pilots failed to understand the situation and the de-structuring of crew cooperation fed on each other until the total loss of cognitive control of the situation. Which is to say, the co-pilot had relied on faulty information which introduced a situation neither pilot was adequately trained for or fully understood and they stalled the plane, flying their functional plane into the ocean and killing everyone on board.
The version that allows for human empathy, the 'footnotes' version of that AF 447 report, would fill a book; 228 people died on that flight.
.
A longform article by mefi fav Langewiesche provides a digestible version of the AF 447 crash (discussed here on mefi). It's worth repeating his conclusion, which could be axiomatic: automation has made it more and more unlikely that ordinary people will ever have to face a raw crisis—but also more and more unlikely that they will be able to cope with such a crisis if one arises.
posted by zenon at 7:53 AM on October 17, 2018 [4 favorites]
"Ed" is short for editor, I think, rather than being someone's name.
posted by rmd1023 at 9:32 AM on October 17, 2018
The decision the car made (lol) is not a "woodpecker glitch." The car is working as designed, and can be thought of as a reification of the collective decisions that people made, and taught, to the car-driving-system.
Implying that 'the engineers' need to be educated about the ethics of the situation at a level this basic ... is entirely correct and appropriate.
Engineering is the application of politics, policy, ethics and judgement in a permanent (for values of permanent) mode. Engineering is social outcomes in steel, and concrete, and glass, and wires, and code. We build a road here, and not there. The road is built to these standards, and not those. The road carries this kind of traffic, and not that. The sight lines are here, and not there. The speeds are this, and not otherwise.
These are all engineering decisions, and yet these decisions shape our lives in a million million ways, large and small, constantly, and the lives of everyone around us.
I was cycling the other night, and I came across an accident, at an intersection that seems to always have accidents - a T-shape, a stop sign at a larger 4-lane road, where the predominant turn is a left onto the main road, where the main road comes down a hill and on a slight curve, next to a privacy/sound wall, and the cars tend to go the usual 10% over the posted speed limit, which all means that they come a little fast around a corner that is somewhat obscured. Nothing beyond the engineering guidelines. And yet there's always a little pile of crushed taillights and mirror fragments in the intersection, eternally replenished.
This was a set of deliberate decisions that created this intersection, a social outcome in asphalt. Just like the decision to train an "AI" on certain kinds of information, and train it in a certain way.
This was a very good story, thank you for posting it.
posted by the man of twists and turns at 9:49 AM on October 17, 2018 [14 favorites]
The central storytelling conceit here - all the details are second hand, everything is an oblique reference - reminded me of *something* but I couldn't place it. Then I listened to some Richard Buckner (admittedly, an acquired taste) on the way to work. One of Buckner's albums is comprised of lyrics from Edgar Lee Masters' "Spoon River Anthology"; and it hit me.
This writing style - everything is tied up in footnotes and responses - echoes Spoon River Anthology. In both, there is no explicit "this, then this, then that" narrative. Just references that the reader is left to piece together.
It's an interesting juxtaposition, and one that (for me, anyway) broadens the emotional resonance of both works.
posted by notsnot at 10:29 AM on October 17, 2018 [1 favorite]
Engineering is (or is supposed to be) very concerned with ethics. It was taught in my very first engineering course in school, and there are Professional Engineering standards which in the US are close to mandatory in some fields of engineering which are considered critical to public safety, health, and welfare. This year, the NCEES announced that they are discontinuing the PE exam in software engineering due to incredibly low interest.
posted by muddgirl at 10:52 AM on October 17, 2018 [4 favorites]
And there's a solid plank in the argument that software "engineering", isn't.
posted by rmd1023 at 2:00 PM on October 17, 2018 [2 favorites]
There are some engineers who do, in fact, think about the ethics and the human interactions inherent in what they design and build. Unfortunately, they're a minority in autonomous driving. The vast majority of the engineers in autonomous driving, machine learning and artificial intelligence - whose work I see in a professional capacity on a daily basis - are not interested in the human side of the problem.
They're interested in the engineering.
Which, fundamentally, is fine - if they're willing to let the people who study human behavior inform what they build. Very few do that. Much more often, they treat human behavior as an engineering problem, and then they wonder why the answers they get are objectively wrong.
I read this story as a far more interesting - and probable - version of the trolley problem, which, in the way of many philosophical starting points, is only of limited (if any) applicability to the world as it exists. I'm fully intending to use this story to pound the point home to engineers who don't think enough about the human consequences of what they build, because, unlike the trolley problem, this might make them think.
posted by Making You Bored For Science at 4:09 PM on October 17, 2018 [3 favorites]
Something in the water encouraging trolley problem fiction, apparently - Tor has put "AI and the Trolley Problem" by Pat Cadigan up online.
posted by rmd1023 at 9:08 AM on October 18, 2018
I credit The Good Place and the general hellishness of the world in equal measure.
posted by Holy Zarquon's Singing Fish at 9:57 AM on October 18, 2018 [2 favorites]
[Somewhat relevant to the discussion of purposeful decision-making, whether in the future or now (just because it's jumping out at me each time we're talking about it): Say "Crash," Not "Accident"]
This story was extraordinary.
posted by knownassociate at 11:57 AM on October 18, 2018 [2 favorites]
Interestingly, knownassociate, working in the research area that I do, I've been trained to never call road incidents accidents (because, as the link you posted points out, it suggests no one was at fault and it was unavoidable). I've been trained to call them collisions.
posted by Making You Bored For Science at 1:18 PM on October 18, 2018 [5 favorites]
Similarly, the U.S. military changed the terminology from "accidental discharge" (of a weapon) to "negligent discharge" a few years back, to emphasize that it is virtually impossible for a firearm to simply "go off" without someone somewhere in the chain of events screwing up.
Oh gods, that was probably like 15 years ago, wasn't it. sigh
posted by Etrigan at 1:29 PM on October 18, 2018 [4 favorites]
Oh gods, that was probably like 15 years ago, wasn't it.
Etrigan, if it's any consolation, I've been realizing lately that everything I think was a few years ago was 10-15 years ago. The things that legit happened a couple years ago feel like yesterday / still in progress.
posted by mabelstreet at 3:18 PM on October 19, 2018 [1 favorite]
This thread has been archived and is closed to new comments