Dude, you broke the future.
January 2, 2018 4:28 PM

The AI future that's already here. "What do our current, actually-existing AI overlords want?" In a speech to the 34th Chaos Communication Congress in Leipzig, MeFi's cstross offers some thoughts on our current dystopia and where it may be headed.
posted by bitmage (55 comments total) 95 users marked this as a favorite
 
Dopamine Labs is one startup that provides tools to app developers to make any app more addictive, as well as to reduce the desire to continue a behaviour if it's undesirable.
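(The mechanic being sold there is rarely exotic; it's usually just a variable-ratio reward schedule, the same operant-conditioning trick slot machines have always used. A toy sketch of the whole "secret", with names of my own invention rather than anyone's real API:)

    import random

    # Toy model of a variable-ratio reward schedule: payouts arrive
    # unpredictably at a fixed average rate, which is the schedule most
    # resistant to extinction, i.e. the most habit-forming.
    def should_reward(mean_ratio=5):
        # reward roughly one action in `mean_ratio`, at a random moment
        return random.random() < 1.0 / mean_ratio

    rewards = sum(should_reward() for _ in range(1000))
    print(f"1000 swipes, {rewards} unpredictable payouts")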
What if we're already in Roko's Basilisk's simulation, but he's not a very imaginative writer, and likes to hit you over the head with things (so to speak), and also probably has a penchant for dad jokes?

I'm attempting to separate myself from Twitter, and I'll admit it's eerily similar to quitting smoking. But I did that successfully, so I can do this too.
posted by curiousgene at 4:58 PM on January 2, 2018 [12 favorites]


enjoyed this very much! email forwarded it to my beloved pops, in the hopes he initiates an email-forwarding chain campaign in the manner of aged relatives everywhere (he won't, but I hope he reads it thoughtfully, and I expect him to).

Charlie, can I call this your TED Talk to encourage Pop's interest? He's very pro-TED Talk these days, bless him. And also, thanks!
posted by mwhybark at 5:08 PM on January 2, 2018 [2 favorites]


No wealth but the commonwealth.
posted by The Whelk at 5:08 PM on January 2, 2018 [5 favorites]


We are so very fucked.
posted by Artw at 5:18 PM on January 2, 2018 [11 favorites]


Though, if the accusations about CCC coddling rapists and dismissing complaints against them are correct and representative of CCC's ethical culture, it's possible that the person who writes the dopamine-driven pogrom app could have been in the very audience.
posted by acb at 5:37 PM on January 2, 2018 [5 favorites]


34c3 talks are available at media.ccc.de/c/34c3. Aside from cstross' talk and exploit talks, there are many talks on human rights, climate change, science, culture, cryptography, and organizing, including topics like lattices in cryptography, home distilling / schnaps, mix nets (me), spies on social media, Iran, formal verification, DRM, etc.
posted by jeffburdges at 5:40 PM on January 2, 2018 [10 favorites]


That final example he gives... holy shit. If you want the tl;dr version just scroll down for the final few paragraphs.
posted by COD at 6:27 PM on January 2, 2018 [5 favorites]


This is darkly humorous given the "Game addiction" thread not too far down the front page. How could you possibly deny the idea that people can be addicted to video games when there are people out there right now whose career is figuring out how to get you addicted to whatever they want you to be addicted to?
posted by Mr.Encyclopedia at 6:38 PM on January 2, 2018 [9 favorites]


See also the "Elon Musk wants to put AI in your head" thread.
posted by Artw at 6:39 PM on January 2, 2018


C. Stross brought up many interesting things, first and foremost his definition of AI as already existent in corporate thinking, goals, and actions. He also says "year" better than anyone. What a great sound! Great talk!
posted by Oyéah at 6:44 PM on January 2, 2018 [1 favorite]


Doesn't bloody matter. Case NIGHTMARE GREEN will happen before then.
posted by Samizdata at 6:55 PM on January 2, 2018 [7 favorites]


I think the criticism of the AI singularity as just a new form of religion is interesting and on point. It has a certain psychological appeal that really captures some people.

I personally doubt that the transcendent artificial superintelligences in most AI apocalypse stories are actually plausible within the limits imposed by the speed of light, the laws of thermodynamics, the halting problem, and P != NP. Our physical universe is not a habitat where Roko's Basilisk could ever live.

I don't know if I agree with the analogy of corporations as slow-motion paperclip maximizers. Corporations exist in an ecosystem (society, the economy, other corporations) and are capable of some degree of introspection and adaptation because their utility functions involve external interaction. The core idea of the paperclip maximizer is that it has a totally arbitrary monomaniacal objective that does not involve a connection to external feedback, so it eats the universe to satisfy its internal utility function.
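A toy sketch of the distinction I mean, with all the numbers invented:

    # Toy contrast: a pure maximizer optimizes a fixed internal objective
    # no matter what, while the corporation-like agent's output is
    # re-weighted each step by an external demand signal.
    def pure_maximizer(steps):
        paperclips = 0
        for _ in range(steps):
            paperclips += 1        # internal goal only; nothing outside matters
        return paperclips

    def feedback_agent(steps):
        output, demand = 0.0, 1.0
        for _ in range(steps):
            output += demand                       # produce in proportion to demand
            demand *= 0.9 if output > 10 else 1.1  # glut suppresses it, scarcity boosts it
        return output

    print(pure_maximizer(100), feedback_agent(100))

Give the first loop a million steps and it returns a million paperclips; the second settles wherever the feedback pushes it.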
posted by allegedly at 7:07 PM on January 2, 2018 [2 favorites]


I think Stross is drawing too long a bow with his last example. But, an attacker could totally use existing apps (like Pokémon, which he mentions) to create disruptive swarming behavior. And I bet people have already been talking to Nintendo about making shopping malls (e.g.) more or less attractive to players. The ultimate app of this sort would be one that can create selective swarming behavior so that the right demographics could be nudged around the map to maximise profit. E.g., clear the schoolkids away from the high-end shops and send 'em to the food court during the afternoon slack period, but get office workers to walk past accessible gift and fashion stores during lunchtime.
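A hypothetical sketch of how crude such an engine could be; the segments, venues, and expected-spend figures are all made up:

    # Greedily drop the in-game lure where the targeted segment is worth
    # most in the current time block. Purely illustrative data.
    value = {  # (player segment, venue, time block) -> expected spend
        ("teens",  "food_court", "afternoon"): 8,
        ("teens",  "luxury_row", "afternoon"): 1,
        ("office", "gift_shops", "lunch"):     6,
        ("office", "food_court", "lunch"):     4,
    }

    def place_lure(segment, block):
        candidates = {venue: spend for (seg, venue, when), spend in value.items()
                      if seg == segment and when == block}
        return max(candidates, key=candidates.get)

    print(place_lure("teens", "afternoon"))   # food_court: clear the kids off luxury row
    print(place_lure("office", "lunch"))      # gift_shops: march the workers past them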
posted by Joe in Australia at 7:31 PM on January 2, 2018 [3 favorites]


"Corporations are first generation AI's," seems right to me. Wonder if another candidate for what AI's will act like could be the standing military. Corporations and military are built on automatons.
posted by brewsterkahle at 7:38 PM on January 2, 2018 [7 favorites]


I think the criticism of the AI singularity as just a new form of religion is interesting and on point. It has a certain psychological appeal that really captures some people.

I remember reading an article by an intellectually curious former hardcore Christian (can't remember the sect) who drew that exact analogy from the other direction - based on her experiences with religion, the AI singularity looked a lot like a religion.
posted by Merus at 7:38 PM on January 2, 2018 [1 favorite]


Though, if the accusations about CCC coddling rapists and dismissing complaints against them are correct and representative of CCC's ethical culture, it's possible that the person who writes the dopamine-driven pogrom app could have been in the very audience.

The computer security field took #MeToo very hard, from what I recall. People who prided themselves on not trusting anyone because anyone could be a bad actor had exactly the same blindspots as lesser mortals like Hollywood stars and corporate executives. I'd expect a very strong correction from CCC in 2018, and if not, I'd expect CCC to find themselves at the pointy end of one.
posted by Merus at 7:43 PM on January 2, 2018 [4 favorites]


We’re building a dystopia just to make people click on ads (Warning: TED talk)
posted by chrchr at 7:44 PM on January 2, 2018 [6 favorites]


It's really interesting (and terrifying) that our own biological hard- and software limitations essentially make dystopia the overwhelmingly likely outcome of technical progress.
posted by maxwelton at 7:54 PM on January 2, 2018 [3 favorites]


Also, I wonder whether the answer to the Fermi Paradox might not also lie here. Perhaps developing and deploying AI inevitably leads to self-annihilation, or at least to an unrecoverable fall from the heights that spawned it.
posted by acb at 8:01 PM on January 2, 2018 [1 favorite]


I think Stross is drawing too long a bow with his last example. But, an attacker could totally use existing apps (like Pokémon, which he mentions) to create disruptive swarming behavior.

This idea seems like it could be an elaboration on an idea laid out in his novel Halting State (2007), in which Pokémon-type games are used for purposes of espionage, coordinating the actions of players who, unbeknownst to them, are furthering the objectives of a spy agency.
posted by XMLicious at 8:59 PM on January 2, 2018 [9 favorites]


> XMLicious:
"This idea seems like it could be an elaboration on an idea laid out in his novel Halting State (2007), in which Pokémon-type games are used for purposes of espionage, coordinating the actions of players who, unbeknownst to them, are furthering the objectives of a spy agency."

The "useful idiots" theory. Given what I have learned about mass datamining in Ingress, this is already happening to allow cheating.
posted by Samizdata at 10:01 PM on January 2, 2018


everything we know is still wrong
posted by philip-random at 12:28 AM on January 3, 2018


We are so very fucked.

The New York Times says that AI under the control of corporations will win the War on Poverty.
posted by rough ashlar at 2:49 AM on January 3, 2018


Thanks for the NYT link, I'll read it over a frothy glass of raw water.
posted by adept256 at 3:15 AM on January 3, 2018 [5 favorites]


Perhaps my memory is playing tricks on me, but I seem to recall that when I was a young computer scientist (in the early 1980s), there were people around who saw it as a goal to create some form of silicon-based intelligence.

The idea was that we carbon-based lifeforms are fragile and mortal, but possibly we could build silicon-based lifeforms that could endure and ensure that "life" continued on.

I didn't buy into this notion myself - if it indeed existed - but I am curious as to whether this was actually an idea that was circulating, or whether it's something that I just made up.
posted by tallmiddleagedgeek at 5:24 AM on January 3, 2018


We made a fundamentally flawed, terrible design decision back in 1995, that has damaged democratic political processes, crippled our ability to truly understand the world around us, and led to the angry upheavals of the present decade. That mistake was to fund the build-out of the public world wide web—as opposed to the earlier, government-funded corporate and academic internet—by monetizing eyeballs via advertising revenue.
Charlie Stross has thereby presented an impeccable set of credentials for entry into my little personal Register of the Enlightened, alongside Vance Packard and Bill Hicks. Well played, that man.
posted by flabdablet at 6:05 AM on January 3, 2018 [6 favorites]


So we depend on Google and Apple to prevent people from publishing the abortion swarming app, or the gay bashing app, right? Or any “us vs them” app.

Like the fact that Apple and Google would find that inconvenient, if it ever got out, is the only thing standing between us and this particular human-hunting Black Mirror episode.

That’s...broken. We are broken.

Oh god we’re so fucked.
posted by schadenfrau at 6:23 AM on January 3, 2018 [3 favorites]


I am curious as to whether this was actually an idea that was circulating, or whether it's something that I just made up.

Definitely circulating.

posted by flabdablet at 6:44 AM on January 3, 2018


See also "Silicon Valley Is Turning Into Its Own Worst Fear" by Ted Chiang.

Very similar arguments, and it features an extremely apt line on Silicon Valley's approach: "...treating the rest of the world as eggs to be broken for one’s own omelet..."
posted by slimepuppy at 6:56 AM on January 3, 2018 [2 favorites]


Silicon Valley Is Turning Into Its Own Worst Fear

To be fair, one of humanity's favourite pastimes for as long as there have been humans is the making of gods in our own image.
posted by flabdablet at 7:01 AM on January 3, 2018


As for our coal-maximizing, surveillance-maximizing, corruption-maximizing, power-maximizing artificial overlords: my own favoured solution is

JUST KILL THE FUCKERS

KILL THEM

SUCK THEIR SOULS OUT AND MAKE THEM DIE

The rot set in, as Charlie correctly mentioned, with the invention of corporate personhood.

I have no problem with corporations existing as accounting conveniences and being allowed to own property. But granting these lumbering institutional automata anything even vaguely resembling the other human rights of natural persons has clearly proved to be a horrible mistake. We should have un-personed them decades ago. The European Parliament and the UN need to get on that, as the English-speaking world is clearly too far gone to be trusted with the job.

You'd have to work pretty damn hard to convince me that limited liability for company directors and shareholders is still justifiable in 2018 as well; it seems to me that this is exactly where the disconnect between corporate aims and human wellbeing is rooted. The only reason that turning a profit is indeed the Prime Directive of every corporate business is that the direct financial interests of shareholders specifically do not extend to having the companies we own behave in an ethical, responsible, accountable, humane, environmentally sustainable fashion.

Corporate management is answerable to the Board, and the Board to the owners, but the owners are not answerable to the rest of us and they fucking well should be.
posted by flabdablet at 7:26 AM on January 3, 2018 [10 favorites]


The thing that we didn't anticipate about AI is that it is so fucking dumb. It's not some three laws shit persuading us to act against our own interests, it's just simple machines, networked, with feedback loops. And it works really well because it turns out we are even dumber than that, with stupider feedback loops, and just raring to go when it comes to trashing ourselves and our planet for dumb reasons.

So very, very fucked.
posted by Artw at 7:44 AM on January 3, 2018 [10 favorites]


Move to Amend
posted by flabdablet at 7:49 AM on January 3, 2018


eh, it all seems like a horrible, selfishly motivated end-product of capitalism. though I suppose if you broaden AI to mean any structure within which humans are unactualized pawns, then that plus the litany of other structural organizations / societies / governments / etc we're part of are also "AI"

it seems like most of Stross's premise rests on the idea that we actually don't have other "AI" ie systems to counteract existing ones. for example, the gay-bashing and anti-abortion apps would likely be stopped by a criminal justice apparatus that would start implementing methods of intercession once a crime of that nature is actually committed. the gamergate article linked on the front page a while back does make a very effective claim that these mediating systems are slow to change but that seems like the effect of another system, namely the patriarchy, enforcing certain lax cultural norms regarding specific kinds of crime

which is to say that I think he's conflating systems with AI. and it's also to say that it sounds naive to me and somewhat technolibertarian to think that we don't already live in a world where systems of oppression make life a living hell for a great many people for completely arbitrary, irrational, and culturally-mediated reasons. Arendt already wrote about the banality of evil - it's not that much of a stretch to take that from its Nazi context and apply it to your own day-to-day on issues of police brutality, economic re-segregation, gentrification, etc. we're all pawns in systems of white supremacy, the patriarchy, ableism, and so on, it's just a matter of recognizing your own complicity
posted by runt at 8:13 AM on January 3, 2018 [3 favorites]


speaking of naive and technolibertarian, the RationalWiki page on Eliezer Yudkowsky is quite fun
posted by runt at 8:20 AM on January 3, 2018 [1 favorite]


I think he's conflating systems with AI

Show me another autonomous system that isn't a human being but has been specifically granted human rights and I'll concede that you have a point.

The issue is not so much that systems do what systems do, but that we defend the right of these specific systems to do unto us what they will, whether it's in our collective best interest or not - and it frequently isn't; the entire advertising industry, for example, is a massive protection racket composed of rent-seeking corporate parasites with no higher purpose than self-perpetuation.
posted by flabdablet at 10:14 AM on January 3, 2018 [4 favorites]


specifically granted human rights

like... sovereignty? I mean, the codification of corporate personhood by a legal institution is not inherently dissimilar from the codification of sexism within the film industry and subsequent mass cultural normalizations of problematic power dynamics - it doesn't matter how oppression is upheld, all that matters is that there is power that backs it and gains from it and power, itself, takes many different forms
posted by runt at 10:46 AM on January 3, 2018 [1 favorite]


The govt just needs to make a legal AI that will replace itself, one with all the ethical and human considerations baked right into it, and then voilà, problem solved!
posted by Grither at 10:53 AM on January 3, 2018


I mean, the more likely alternative is that we'll have yet another oppressive system of power that will increase human suffering in order to make a few people very rich/powerful/etc because our govt, culture, and social systems are all too intrinsically apathetic to how other people experience the world to ever do anything about it, esp not under capitalism

but ymmv, sure
posted by runt at 11:22 AM on January 3, 2018


"Things are in the saddle,
And ride mankind."
-same Emerson who also had something to say about societies and joint-stock companies
posted by doctornemo at 1:15 PM on January 3, 2018 [1 favorite]


my own favoured solution is JUST KILL THE FUCKERS

*eyeroll* Who do you suggest should risk their own life to do this? "Someone else should take action!" is just lazy.
posted by AFABulous at 8:53 PM on January 3, 2018


> runt:
"speaking of naive and technolibertarian, the RationalWiki page on Eliezer Yudkowsky is quite fun"

Been down that rabbit hole once, from a different entrance. Still haven't washed all the dirt off.
posted by Samizdata at 10:24 PM on January 3, 2018 [1 favorite]


> Grither:
"The govt just needs to make a legal AI that will replace itself, one with all the ethical and human considerations baked right in to it, and then viola, problem solved!"

Right, because government IT projects go SO well...
posted by Samizdata at 10:37 PM on January 3, 2018


Who do you suggest should risk their own life to do this?

Why assume that removing the rights of natural persons from corporate entities must require risking the life of any natural person? It's something that could be achieved by persuading enough people to take the idea seriously enough to vote for it consistently.

It's corporate personhood I wish to see a stake driven through the heart of, not that of any natural person working in or for or with any corporation.
posted by flabdablet at 1:05 AM on January 4, 2018 [1 favorite]


I agree with runt and others that the corporation half of this essay is mainly a rediscovery of systems analysis (and related ideas) in sociology and many other domains. Collections of humans have been analogized to a single human, and feared as a sort of super-human, for centuries. The nation-state and its populace were a common site of such things for a long time -- see the cover of Hobbes's Leviathan, as Stross knows so well -- but since the very invention of democracy the demos has been feared as its own uncontrollable creature. Which exact legal rights these creatures were granted has of course varied -- and the greatest terror of the demos was that it itself became the rights-granter -- but the structure of this anxiety has persisted for a very long time. The machinery of capitalism from Smith to Marx has similarly been personified and feared, along with, at various times, most religions, TV audiences, unions, secret societies, and virtually anything else that provided a unified organization and direction for collections of people. That doesn't mean the AI metaphor is misguided -- all of these things might be slow AI of one form or another, and indeed they illustrate well how many different kinds of institutional AI there can be. But it does mean that this recent fad for corporate AI is not very new, nor does it shed much additional light on the general social phenomenon, Citizens United notwithstanding.

Similarly, in the second half Stross has a number of additional techno-fears that I think of as the left-wing version of singulatarianism -- equally Gladwell-esque in breathless certainty, but a bit more in the left (rather than libertarian or conservative) corner of the political space. As someone who knows a fair number of criminologists, for one minor instance, confident statements like this are usually a red flag to beware of casual sociology: "And we banned tetraethyl lead additive in gasoline, because it poisoned people and led to a crime wave." Likewise, the folks I know in political advertising are pretty skeptical of the vaunted powers of Cambridge Analytica: skeptical of their expertise, the size of their effects, and even the possible effectiveness of their approach even if they were doing everything right. Statements like the following just sound like a left-wing version of conspiracy-mongering: "But if social media companies don't work out how to identify and flag micro-targeted propaganda then democratic elections will be replaced by victories for whoever can buy the most trolls." No one I know working on the cutting edge of social media and advertising thinks these things do or can have that much power.

This isn't to deny the conscious and non-conscious malevolence of corporations, social media companies, advertisers, etc. I am one of those left-wingers! But to the degree that they are, and have ever been, destroying our efforts to build a better society, it's much less via these exciting and mysterious AI-like back-channels, and mostly just via the age-old methods: outlawing, bribing, shooting, cheating, and all the rest. Heck, a traditional example from the US context of these sorts of fears is the "faction" from the Federalist Papers -- political parties, in particular, were one of the original US bugaboos. But it turned out parties themselves, with their super-human structures, weren't actually the problem. The problem is that, via money, culture, and brute force, one of our parties is an explicit tool of the powerful to keep down the rest. Not very mysterious!
posted by chortly at 9:35 PM on January 5, 2018 [3 favorites]


But if you looked for scientific studies to furnish the same sort of numbers to prove the effectiveness of conventional propaganda as it has actually operated in modern history around the world, would you find them? Would the numbers available in the literature be able to empirically show that, for example, the Creel Committee could rally U.S. public support for participation in World War I? Even with 20/20 hindsight? Or empirically show that when Ryszard Siwiec spectacularly committed suicide as a political protest in a stadium during a festival in Communist Poland, Polish and other Eastern Bloc officials would be able to suppress knowledge of the event so thoroughly that people in Czechoslovakia doing the same thing a few months later were evidently unaware of Siwiec's protest?

Are numbers from political and commercial online advertising scholarship of any use in predicting the conversion rate for Al-Qaeda and ISIS online propaganda efforts?

I agree that popular coverage of the efficacy of specific documented measures used in the 2016 election has been overblown (and I'm inclined to think that Stross may have taken some liberties on that count for the sake of assembling a public speaking narrative for delivery to a hacker conference) but I am skeptical (though I assume with much less relevant expertise than you, chortly) that an analysis based on metrics for presumably-legal conventional marketing techniques would have enough predictive power to dismiss the potential dangers from this type of evolution in propaganda methods ten years from now and further out, particularly in the context Stross points out of the falsification of video evidence further undermining trust in sources more broadly.
posted by XMLicious at 11:39 PM on January 5, 2018 [1 favorite]


And noting that Nazis have already been mentioned above, I would fear that in dismissing the impact that a bunch of apps and algorithmic, individually-targeted messaging could have, as described by Stross, we might be making the same category of mistake that someone looking at an IBM-type tabulating machine ten or twenty years before the Holocaust might have made in discounting its magnifying effect, despite being entirely familiar with Kafkaesque forces dominating society and with historical events like pogroms.
posted by XMLicious at 11:58 PM on January 5, 2018 [2 favorites]


but I am skeptical... that an analysis based on metrics for presumably-legal conventional marketing techniques would have enough predictive power to dismiss the potential dangers from this type of evolution in propaganda methods ten years from now and further out, particularly in the context Stross points out of the falsification of video evidence further undermining trust in sources more broadly.

I very much agree that one shouldn't dismiss these potential dangers; while a false positive (wasting too much time worrying about an overblown threat) has real costs, a false negative (ignoring a growing threat) is definitely dangerous too. Rather than wanting to talk down the threat, I guess my concern was more about how the threat is described, and in particular the use of science-fictional tropes and language to describe it. That's the analogy with singulatarianism: we all agree that AI (whatever that is) will get smarter and better, but the singularity stuff cloaks it in a sinister reflective/gray wall of science-fictional mystery, and in fact what Stross and others are doing is criticizing that science-fictional -- or perhaps just fantastic -- aura of mystery, which both precludes closer analysis and leads so many singulatarians to focus on the dangers of this overblown fear instead of the more immediate and less science-fictional dangers directly in front of us. Stross then suggests that one of those more immediate and real dangers is slow AI in the form of corporations, but my feeling is that he is now making the same (though lesser) mistake as those he critiques: cloaking a complex, dangerous, but quite comprehensible phenomenon -- human organizations and institutions, such as corporations -- in science-fictional language and an aura of mystery that likewise hinders analysis and leads to overblown -- or at least misdirected -- fears. I'm a science-fiction-loving lefty, so I certainly enjoy framing my enemies in terms of sinister AI. But while I strongly believe in the dangers of corporations and other powerful social entities, I'm not sure whether the AI framing is the best way to understand or confront those dangers, in the same way that the "singularity" is probably not the best way to think about the dangers of machine learning and automation.
posted by chortly at 11:06 PM on January 6, 2018


I would agree that this way of describing things wouldn't be great for a general audience and public discussion of these topics, but for the audience Stross is addressing "artificial intelligence" isn't the mystery or unexplained plot device you're referring to: I've never been to the CCC, or any other hacker conference, but I would expect that a significant majority of the people who listened to him in person have probably at least created "Hello World" (heh) simplistic AIs based upon software libraries as well as at least basic examples of the software systems referred to by names like "artificial life" and "evolutionary algorithms".

Another thing that members of the live audience may be familiar with, which doesn't seem to have been mentioned in the thread so far, is philosopher John Searle's "Chinese room" thought experiment, which explores some AI-related questions by positing a human or a group of humans who don't read or speak the Chinese language manually executing a natural language processing algorithm to respond to written questions in Chinese.
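(For anyone who hasn't run into it: the occupant of the room just pattern-matches symbols against a rulebook, along these lines, the rulebook entries being my own crude stand-ins for Searle's:)

    # Caricature of the Chinese room: the "operator" matches input symbols
    # against a rulebook and emits the prescribed reply, understanding nothing.
    RULEBOOK = {
        "你好吗": "我很好，谢谢",     # "How are you?" -> "Fine, thanks"
        "你会中文吗": "当然会",       # "Do you understand Chinese?" -> "Of course"
    }

    def operator(symbols):
        return RULEBOOK.get(symbols, "请再说一遍")   # "Please say that again"

    print(operator("你会中文吗"))   # fluent-looking output, zero understanding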

So, they may have experienced Stross's figurative equation of corporate and AI behavior differently than would people who haven't had their hands in the guts of an AI, as it were, or aren't familiar with that comparison of humans executing a complex software algorithm.
posted by XMLicious at 1:52 AM on January 7, 2018


That's a very apropos comparison: I count Searle's Chinese Room as another classic example of science-fictional mystification in order to rhetorically persuade. This is the same view folks like Dennett have when he refers to Searle's thought experiment as an "intuition-pump," where the strategy is "to elicit intuitive but incorrect answers by formulating the description in such a way that important implications of the experiment would be difficult to imagine and tend to be ignored." Thought experiments are tricky things, and maybe exist somewhere in between the explicit science fiction story, and the implicit metaphors Stross is deploying with his "slow AI" descriptions of corporations. In all three cases, it is quite easy -- especially if you're a good writer -- to portray the phenomenon in question as more vast, complex, mysterious and incomprehensible than it really is: super-human AI becomes the singularity, the Chinese room becomes a disproof of functionalism, and "slow AI" paints corporations and other human organizations as more ineffable and unstoppable than they actually are.

Anyway, you don't have to believe my argument -- just explaining in a little more detail, since the Chinese Room example was so exactly apropos to my way of thinking about this!
posted by chortly at 9:10 PM on January 7, 2018 [1 favorite]


Actually, that confuses me more... wasn't Searle's point, at least partly, (and I'm not attempting to agree or disagree with him, or agree or disagree with the legitimacy of "Strong AI" as something that could be reified) that the Chinese room manifestly does not contain anything like a mind that understands the Chinese language?

So, if you are calling Searle's proposition something which prompts "intuitive but incorrect answers", wouldn't the consequence of declaring Searle's conclusion incorrect be that the Chinese room can be substantially equivalent to a mind that understands Chinese? Which would seem to imply that Stross's figurative equation is even more than a metaphor, it's a valid analogy: that a group of humans acting in concert in a corporation could be equivalent to an AI, particularly if the latter is loosely defined.

The thing is, if you substitute the mundane, finite, real-world counterparts for the things in Stross's discourse which you're calling magical and mysterious, vastly complex and incomprehensible science fiction, it appears to me that his points still hold. The bits which you are objecting to do not seem essential, and possibly aren't what he was even trying to express to that particular audience.

He's saying that there will be behaviors of AIs that will need to be regulated, and "need to be regulated" is basically the opposite of saying that AIs are "ineffable and unstoppable": controlling, regulating, and even criminalizing certain behaviors of corporations are things we have done before, even to the most voracious and exploitative and lethal corporations, and if we stay on top of things we need not fear any inability to confine and restrict and yoke the AIs of the twenty-first century in the same way.
posted by XMLicious at 10:59 PM on January 7, 2018 [1 favorite]


I believe that future generations will be resistant to the kinds of manipulation described here, by way of having grown up in a world where they are aware of them. Indeed, I'm hoping future generations will be aware enough of this stuff to turn it on its head. After all, society has come up with lots of ways to shape the behavior of its members, and youth have been pretty good at resisting them. Of course, I must believe this, because the alternative is terrifying.

Technology can work in both directions. People could employ their own AI to generate online activity for them when they're personally offline, generating enough noise to make attempts at profiling more difficult. Or, going off Stross' final horrifying mob-violence app example, generate a profile of a person who would be the most appealing target for such a violent group, attach their mobile phone to a bomb, leave it in a dark alley, and wait.
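Something like this, say, for the decoy half of that (topics and timing invented; a convincing version would have to mimic its owner's real cadence):

    import random, time

    # Emit plausible-looking activity at random intervals while the real
    # user is offline, to blur any behavioural profile.
    DECOY_TOPICS = ["gardening", "formula 1", "sourdough", "birding", "retro games"]

    def decoy_session(actions=5):
        for _ in range(actions):
            print("search:", random.choice(DECOY_TOPICS))  # stand-in for real requests
            time.sleep(random.uniform(0.5, 3.0))           # humans don't click on a clock

    decoy_session()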
posted by subocoyne at 2:41 PM on January 9, 2018


Yes, Searle being incorrect means that the Chinese room can indeed be conscious, understand Chinese, etc. But to see that requires a much more detailed model than his intentionally vague one, and as that model becomes more detailed -- as others have done with his story, by adding short-term and long-term memory, inputs, outputs, neural-like structures, etc -- it just sounds much more like a standard brain, and loses the rhetorical mystification Searle deploys in order to make his anti-functionalist argument.

But similarly, if you want to say that some organization like a corporation is analogous to a mind in a realistic way and not just as a metaphor, you need to assume all sorts of functions that might loosely apply (bookkeeping like long-term memory, etc), but really don't work in ways that are truly analogous to how actual brains work. A corporation is no more like a person than a self-driving car or a school of fish or maybe a weather system -- something that's roughly analogous in various coarse or metaphorical ways, but not in the myriad detailed ways it would need to actually function like a thinking, planning, scheming person. Corporations certainly scheme, but mainly via the scheming of their component humans.

We all agree that AIs need to be regulated. The issue is whether (a) the singularity framework is useful for thinking about this regulation, (b) whether the corporation framework is useful for thinking about how to deal with AI, and (c) whether the AI framework is in turn useful for thinking about how to deal with corporations. I argue that all three of these things are false, and that (c) in particular is false in the same way that Stross argues (a) is false: AIs aren't very useful for thinking about corporations, nor vice versa, because they are either very different things, or are just generically similar in that they are both complex systems. We know very well how to regulate corporations, we just don't do it because they have (along with the rich and powerful) corrupted one of our two political parties from time immemorial. Maybe AIs will pose a new kind of challenge and maybe not, but I don't know how useful corporations are for thinking about it (except as one example of how we think about fighting institution-like power). But in any case, calling corporations a kind of AI sounds much cooler than it is useful, especially if we're imagining a sinister conscious-like superhuman AI. And I argue the metaphor makes the regulation job harder because it makes it seem more impossible to succeed.
posted by chortly at 10:52 PM on January 11, 2018 [1 favorite]


A corporation is no more like a person than a self-driving car or a school of fish or maybe a weather system -- something that's roughly analogous in various coarse or metaphorical ways, but not in the myriad detailed ways it would need to actually function like a thinking, planning, scheming person. Corporations certainly scheme, but mainly via the scheming of their component humans.

How they implement their scheming seems to me less important than that they do scheme.

I can see no good justification for making some structural resemblance to an individual human brain a defining characteristic to qualify as an AI. If an entity can reasonably be held to be exhibiting adaptive, goal-driven behaviour, and has at least as much skill at generalized pattern recognition and classification as a person, that's enough for me. And as you implicitly point out, corporations do have access to all the underlying intelligences of their component humans.

Some of the goals of the corporate AI proceed from those of the people who fund and run it: each of them desires to make a living, and sees the corporate structure as an appropriate organizational form to pursue that goal, and this translates to a fundamental goal of self-preservation when viewed from the corporate perspective.

Others proceed from the laws and regulations that define the circumstances under which the creation of corporations is allowed, and those that define the rights and responsibilities of the corporations so created with respect to their constituent people.

Still others are fluid and dynamic, adapting the corporation's intent and behaviour to the business and social environments it finds itself operating in.

The corporate behaviour that arises in response to these goals can always be broken down into descriptions of the behaviours of individual corporate workers, but that doesn't remove the value of thinking about the behaviour of the corporations as individual intelligences. I think they are far more deserving of that description than are weather systems, self-driving cars or schools of red herring.
posted by flabdablet at 12:51 AM on January 12, 2018 [1 favorite]


But in any case, calling corporations a kind of AI sounds much cooler than it is useful, especially if we're imagining a sinister conscious-like superhuman AI. And I argue the metaphor makes the regulation job harder because it makes it seem more impossible to succeed.

This has been a great conversation and I believe your post has whittled down to the knot of our disagreement as embodied in the two sentences above. I just think that we've reached a point in history where for someone who has been a decorated sci-fi author over the course of decades—having extensively engaged in informed and well-researched speculative thinking about the near future—addressing a room full of hackers up to date on the cutting edge of technology, talking about these phenomena in this unified fashion is pretty straightforward and is really closer to a rectification of names exercise than anything else.

The entire legitimate reason to start breaking down these distinctions is that the things we call "AI" aren't going to become an alien post-human form of life, the reified science fiction trope, any time soon, and we need a less anthropomorphizing way of talking about them: a way to talk about what a software system "wants" which firmly places that characterization in the same category as saying that a corporation wants to maximize the profits of its shareholders or that a photosynthesizing plant wants sunlight.

I think that all Stross is talking about is further proliferation of examples of the category of AIs and software-that-doesn't-fit-into-finicky-technical-definitions-of-artificial-intelligence that caused the stock market "Flash Crash" of 2010 and resulted in the prohibition of behaviors like layering—which human traders being directed and coordinated by corporations could certainly employ before that, but which the AIs could exploit with more facility and frequency, to a market-altering degree rather than as an occasional gambit.
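To make the layering pattern concrete, a toy illustration on invented order data; this is roughly the place-on-one-side, trade-the-other, cancel-everything shape that market surveillance systems now flag:

    # Invented order log: stacked sell orders fake depth while the layerer
    # buys cheap, then the fake side is cancelled.
    orders = [  # (account, side, price, fate)
        ("spoofer", "sell", 101, "cancelled"),
        ("spoofer", "sell", 102, "cancelled"),
        ("spoofer", "sell", 103, "cancelled"),
        ("spoofer", "buy",   99, "filled"),
        ("trader",  "buy",  100, "filled"),
    ]

    def looks_like_layering(account, log, threshold=3):
        cancels = [side for acct, side, _, fate in log
                   if acct == account and fate == "cancelled"]
        fills = [side for acct, side, _, fate in log
                 if acct == account and fate == "filled"]
        # many cancelled orders on one side, real trades on the other
        return len(cancels) >= threshold and any(f != c for f in fills for c in cancels)

    print(looks_like_layering("spoofer", orders))   # True
    print(looks_like_layering("trader", orders))    # False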

He's basically saying we should get ready for the use of AIs of that sort to come to online propagandizing, and that they may employ the same "dark arts" of cognitive manipulation which video commercials and games do, but with individualized targeting of propagandees and in furtherance of more granular coordinated goals, like organizing a flash mob or an activist meeting rather than just giving people vague warm and fuzzy feelings about General Electric or Koch Industries because some corporate marketing budget somewhere had spare cash to burn on TV ads this month.

Not that we or his audience should disregard the direness of his warnings; but the fear should be based in the history-repeats-itself banal-evil well of the mundane and prosaic becoming monstrous, like the tabulating machines—Cow Clicker becoming an impassive instrument of collective human vice and gluttony and hubris as a corporation so often is—rather than a fear of Skynet or the Lawnmower Man's uploaded virtual brain or The Architect from The Matrix franchise, which I really think he was trying to group with singulatarianism rather than invoke himself.
posted by XMLicious at 5:13 AM on January 12, 2018 [1 favorite]

