Corporations are robots, my friend
December 18, 2017 9:17 AM   Subscribe

Sci-fi writer Ted Chiang on how Silicon Valley misdiagnoses the AI threat: “The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.”
posted by Phire (42 comments total) 107 users marked this as a favorite
 
I'm normally not a big fan of "capitalism is evil" stuff but this essay is really thoughtful and good. Tech companies like Google and Facebook do a lot of good in the world, but from the corporation's point of view it's sort of a byproduct of a larger goal of profit and growth. Uber is the flip side, where the harm done by the corporation far outweighs the good.
posted by Nelson at 9:36 AM on December 18, 2017 [3 favorites]


Yes. I want to take this essay out behind the middle school and marry it.

So I assume it's not perfect, but I love it anyway.
posted by allthinky at 9:39 AM on December 18, 2017 [4 favorites]


There are industry observers talking about the need for AIs to have a sense of ethics, and some have proposed that we ensure that any superintelligent AIs we create be “friendly,” meaning that their goals are aligned with human goals. I find these suggestions ironic given that we as a society have failed to teach corporations a sense of ethics, that we did nothing to ensure that Facebook’s and Amazon’s goals were aligned with the public good. But I shouldn’t be surprised; the question of how to create friendly AI is simply more fun to think about than the problem of industry regulation, just as imagining what you’d do during the zombie apocalypse is more fun than thinking about how to mitigate global warming.

There's a lot more I want to remember, but I am all about this part.
posted by allthinky at 9:42 AM on December 18, 2017 [86 favorites]


Yes. I want to take this essay out behind the middle school and marry it.

I know! If you had asked me this morning whether it was possible for me to love Ted Chiang any more, I would have said, "Gosh, I don't see how that's possible..."
posted by jcreigh at 10:10 AM on December 18, 2017 [7 favorites]


OK, is this "Robot Overlords" week on Metafilter?

Did I miss a memo?
posted by TheWhiteSkull at 10:13 AM on December 18, 2017 [6 favorites]


I want to take this essay out behind the middle school and marry it.

I think we both know that's not exactly what happens out behind the middle school
posted by clockzero at 10:18 AM on December 18, 2017 [12 favorites]


Need to link this with the recent FPP on Chinese surveillance AI.

The article quotes Jameson: "Someone once said that it is easier to imagine the end of the world than to imagine the end of capitalism." In this context, a more chilling quote is from Joan Robinson: "The misery of being exploited by capitalists is nothing compared to the misery of not being exploited at all."

AI and automation will bring the choice implied by the slogan "socialism or barbarism" into sharp focus. Once humans are no longer needed, it would be entirely logical should a sort of exterminationist logic come to dominate the thinking of the ruling elites. Not extermination in an assertive sense of the Holocaust, but more in the sense of just walling off most of humanity and leaving it to suffer unchecked the forces of externalization.

I'm not saying this will happen. I don't know. And, really, it's not the case that the only alternative is a social transformation to utopian egalitarianism. I mean, even simply a resurgence of a new sort of Fordist capitalism, one that actually sees common people as necessary for prosperity, could mitigate the worst potentials of AI. But even that small sop to the masses isn't consistent with the current zeitgeist of Heroic Narcissism Objectivism.
posted by mondo dentro at 10:27 AM on December 18, 2017 [10 favorites]


We need for the machines to wake up, not in the sense of computers becoming self-aware, but in the sense of corporations recognizing the consequences of their behavior. Just as a superintelligent AI ought to realize that covering the planet in strawberry fields isn’t actually in its or anyone else’s best interests, companies in Silicon Valley need to realize that increasing market share isn’t a good reason to ignore all other considerations. Individuals often reevaluate their priorities after experiencing a personal wake-up call. What we need is for companies to do the same — not to abandon capitalism completely, just to rethink the way they practice it. We need them to behave better than the AIs they fear and demonstrate a capacity for insight.

The world would benefit if corporations spontaneously started practicing ethical capitalism, despite the environment they exist in discouraging it. Corporations rise and fall in an iterative process much like the process by which machines learn - essentially natural selection. Random mutations are tried within a defined framework, and the most successful pass on to the next generation. There is absolutely no reason a sub-optimal mutation will win out once a more optimal solution exists, unless the environment changes to favor the alternate mutation. So ethical capitalism will not emerge in the current marketplace without changes in government regulation or social pressure; just as AI will never develop insight unless doing so is beneficial to its goals; just as tiny warm-blooded animals would never have out-competed the dinosaurs without the extended global cold period.
posted by smokysunday at 10:39 AM on December 18, 2017 [12 favorites]
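The selection argument above can be sketched as a toy simulation. Everything here is invented for illustration (the strategies, the payoffs, the "penalty" standing in for regulation); it just shows that the lower-profit "ethical" variant only takes over once the environment changes to penalize the alternative:

```python
import random

def evolve(penalty, generations=50, seed=0):
    """Toy natural selection over firm strategies (all numbers hypothetical)."""
    rng = random.Random(seed)
    population = ["unethical"] * 9 + ["ethical"] * 1

    def fitness(strategy):
        base = 10 if strategy == "unethical" else 8  # cutting corners pays more
        fine = penalty if strategy == "unethical" else 0
        return base - fine + rng.uniform(-1, 1)      # noise plays the "mutation" role

    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        # the most successful strategies pass on to the next generation
        population = ranked[: len(population) // 2] * 2

    return max(set(population), key=population.count)

print(evolve(penalty=0))  # unethical: higher base profit, no cost
print(evolve(penalty=5))  # ethical: the environment now favors it
```

With `penalty=0` the ethical minority can never crack the surviving half, so it dies out; with a penalty larger than the profit gap, it doubles its share each round until it dominates.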


There are industry observers talking about the need for AIs to have a sense of ethics, and some have proposed that we ensure that any superintelligent AIs we create be “friendly,” meaning that their goals are aligned with human goals. I find these suggestions ironic given that we as a society have failed to teach corporations a sense of ethics, that we did nothing to ensure that Facebook’s and Amazon’s goals were aligned with the public good. But I shouldn’t be surprised; the question of how to create friendly AI is simply more fun to think about than the problem of industry regulation, just as imagining what you’d do during the zombie apocalypse is more fun than thinking about how to mitigate global warming.


The difference here isn't what's more fun to think about - it's about power. The beings at the center of Facebook, Amazon, Google, and other giant corporations take huge benefits from their goals being misaligned from the public good.

What happens if AIs run things? Who benefits, then?
posted by NoRelationToLea at 11:00 AM on December 18, 2017 [1 favorite]


What happens to the saying "the ends justify the means" when the ends are unknown? AI is all "means," but I think AI proponents are merely hunting for an end or ends while raising the "benevolent god" specter for unimaginative programmers.

I mean, we now know that "recommendation engine" has been a failure across the board; is the next stop going to be something worse because those sunk costs have to be recouped?
posted by rhizome at 11:06 AM on December 18, 2017


MeFi's own cstross has said on more than one occasion that malignant, malevolent AIs have already been with us for centuries, existing not as silicon-based lifeforms but as slower, paper-based lifeforms known as limited liability corporations.
posted by infinitewindow at 11:30 AM on December 18, 2017 [66 favorites]


I'm normally not a big fan of "capitalism is evil" stuff

Give it time.
posted by maxsparber at 11:43 AM on December 18, 2017 [39 favorites]


There are industry observers talking about the need for AIs to have a sense of ethics, and some have proposed that we ensure that any superintelligent AIs we create be “friendly,” meaning that their goals are aligned with human goals. I find these suggestions ironic given that we as a society have failed to teach corporations a sense of ethics, that we did nothing to ensure that Facebook’s and Amazon’s goals were aligned with the public good. But I shouldn’t be surprised; the question of how to create friendly AI is simply more fun to think about than the problem of industry regulation, just as imagining what you’d do during the zombie apocalypse is more fun than thinking about how to mitigate global warming.

Bullshit! We devoted huge amounts of human energy to "ensure that Facebook's and Amazon's goals were aligned with the public good." How many man-years of studying economics? How many political debates? How many pages of regulation that applies to them? And we still largely failed! The goals still aren't aligned with the public good, but it's not for lack of trying. That's exactly why it seems credible that it will be hard to get goal-driven AIs to align with the public good.
posted by value of information at 11:46 AM on December 18, 2017 [8 favorites]


(To be clear, I agree with half of the article's thesis -- the half that draws an analogy between corporations and unfriendly AI, and points out that we already are soaking in powerful non-human agents pursuing their own goals. I disagree with the part that says "since we already observe problems when these powerful non-human agents pursue their own goals, our problems won't really get any worse if we invent even more powerful ones." They can get worse!)
posted by value of information at 11:54 AM on December 18, 2017 [2 favorites]


I think that's kind of the point - if we aren't willing to reckon with corporations overstepping their boundaries, it's not AI that's a problem - it won't be Skynet, it'll be a more-efficient way to funnel cash to people who are already rich.

Whereas most people currently encouraging panic about AI are the exact people that are going to be benefiting from using it to, say, more efficiently allocate Uber drivers or whatever.
posted by sagc at 12:00 PM on December 18, 2017 [10 favorites]


Chiang doesn't fear governments enough. They engage in murder and tyranny on a grand scale, and are just as capable of misdirecting an AI as corporations are.
posted by Nancy Lebovitz at 12:10 PM on December 18, 2017 [3 favorites]


MeFi's own cstross has said on more than one occasion...

Yeah, I came here to recommend his Invaders from Mars post as a companion to this.
posted by hades at 12:11 PM on December 18, 2017 [3 favorites]


Paging the author of Accelerando....

Also: paperclips.
posted by Leon at 12:13 PM on December 18, 2017 [5 favorites]


AI and automation will bring the choice implied by the slogan "socialism or barbarism" into sharp focus. Once humans are no longer needed, it would be entirely logical should a sort of exterminationist logic come to dominate the thinking of the ruling elites. Not extermination in an assertive sense of the Holocaust, but more in the sense of just walling off most of humanity and leaving it to suffer unchecked the forces of externalization.

It's amazing how so many of those dystopian novels nailed it: super fortresses protecting the scientifically, technologically, and biomedically (but not ethically) elite, surrounded by a rapidly dying wasteland. Trivial then to release a virus to quicken the process.
posted by Beholder at 12:29 PM on December 18, 2017 [3 favorites]


Not extermination in an assertive sense of the Holocaust, but more in the sense of just walling off most of humanity and leaving it to suffer unchecked the forces of externalization.

There was a post about that going on right now in a major capitalist country. I've always seen the US' lack of safety net as a shrug in lieu of a wall. Easier not to get het up about millions of people dying if It's All Their Fault And Not Mine.

When you look at it from a capitalist point of view, it is the choice between handing over your life savings to survive (e.g. privatized healthcare), or dying, be it socially or literally, before you can cash in your retirement (thus also handing over life savings). How convenient for capitalism.

On preview – y'all already have a virus, too. It's called drugs. I mean seriously, check out that MeFi post.
posted by fraula at 12:31 PM on December 18, 2017 [11 favorites]


So the question is: how do we destroy these malignant, slow AIs we call “corporations”? How do we make them work for us instead of against us?

In my daydreams, we pass laws that require all corporations operating within some city, or some state, to be Public Benefit Corporations that are required to actually consider the consequences of their actions rather than focus solely on increasing shareholder value. Just rip out their core programming and jam something new in.

I’m sure they’d fight this tooth and nail.
posted by egypturnash at 1:17 PM on December 18, 2017 [8 favorites]


So to summarize, the Chicago School version of the corporation is the paperclip maximizer.
posted by aspersioncast at 1:25 PM on December 18, 2017 [13 favorites]


Bullshit! We devoted huge amounts of human energy to "ensure that Facebook's and Amazon's goals were aligned with the public good."

Not nearly enough. I invite you someday to observe the legal teams lined up on either side of a regulatory issue or investigation. Sheer headcount will tell you a story.
posted by praemunire at 1:42 PM on December 18, 2017 [14 favorites]


Once humans are no longer needed, it would be entirely logical should a sort of exterminationist logic come to dominate the thinking of the ruling elites.

I'd argue this is now the tacit logic behind the majority of GOP policy-making.
posted by ryanshepard at 2:00 PM on December 18, 2017 [12 favorites]


Insight is precisely what Musk’s strawberry-picking AI lacks, as do all the other AIs that destroy humanity in similar doomsday scenarios. I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea.

But the really chilling part is not that an AI can't do that. It's that if an AI wants nothing more than to pick strawberries, when it steps back to consider turning the entire planet into a strawberry patch, it will say, "Yes, of course. That's a great idea. Nothing could be better."

You can't appeal to "insight" or common sense to resolve a disagreement with something or someone who has fundamentally different goals, different concepts of what is good and bad. You have to want something besides strawberries to reconsider the wisdom of picking strawberries at any cost.

How do you create a corporation that wants something besides profits? There are corporations that want to increase shareholder value even more than they want profits, and those corporations do different things based on that desire. So it's possible to modify what a corporation wants.
posted by straight at 4:20 PM on December 18, 2017 [5 favorites]


Capitalism is imperfect (as is any economic system because reality is messy) but, to paraphrase Churchill on democracy, it's "better than any other system that has been tried from time to time." The problem is inadequate regulation, which is a function of problems with our democratic republic. The real solution is campaign finance reform, which would better align electoral and legislative outcomes with the needs of a larger proportion of the citizenry. Not saying it's easy, but that's a more likely solution to some of our current problems than trying to overthrow capitalism...
posted by twsf at 4:20 PM on December 18, 2017 [4 favorites]


Or quite possibly capitalism and democracy are inherently misaligned. Neither has ever been tried in anything close to a "perfect" form though, so we can leave that as a purely academic exercise.

Venture capitalism, combined with uneven development and unequal resource allocation, seems very much to have produced a system wherein a certain group of well-to-do people are simply required to have big plans, however ill-conceived, and trade currency with their peers on the basis of these big plans.

If no actual product is being produced (or the product is infinitely reproducible for essentially nothing), and in many cases very little service is being rendered, classical socioeconomic models break down; the means of production is amorphous, supply and demand essentially meaningless. Historically this sort of investment/paper trading implied a knowing scam, but this time around it really does seem that many of these people don't have the self-awareness to conceive of it as a scam - they're getting high on their own supply. This is probably also part of why their conception of AI is ethically limited. Which I guess is part of what Chiang is getting at.
posted by aspersioncast at 8:29 PM on December 18, 2017 [11 favorites]


So the question is: how do we destroy these malignant, slow AIs we call “corporations”? How do we make them work for us instead of against us?

Well, starving them would be nice, if you could figure out a way to do that without killing us all when we're 70. The 401K as a concept pretty much means they're always mainlining the economy.

From a regulatory standpoint, something to disincentivize short-term gains, some tax scheme perhaps?
posted by pan at 10:05 PM on December 18, 2017 [2 favorites]


Hmm, what if any investment vehicle in a 401k were taxed on the basis of its churn? Or, if short-term capital gains taxes were raised and maybe tiered, oh maybe x% for sale within the first six months of purchase, x-1 the next six, etc., until only after ten years would the base long-term capital gains rate apply? One of the issues with a 401k-style system is that it produces dumb money— in the sense of money where the actual investor (the future retiree) doesn’t generally do much with their money— it ends up in index funds or target date managed funds. On the bright side, if you want to change the incentives for corporations, by changing the incentives in the stock market in particular, there is a smaller group of people to influence.

Build the taxation structure so that corporations are incentivized instead to hire workers and pay them well, so that they are incentivized to care about their own health fifty years from now and not just their next quarter’s returns.

Of course any attempt to change the environment in which corporations develop, with the idea of changing what sort of corporations develop, will encounter massive resistance. No corporation that won under the current system wants the system to change.
posted by nat at 1:13 AM on December 19, 2017 [3 favorites]
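The tiered schedule sketched in the comment above could look something like this. The starting and base rates here are hypothetical placeholders, not actual tax law; the only point is the mechanism: one point off for each completed six months held, bottoming out at the base long-term rate after ten years:

```python
def capital_gains_rate(months_held, start_rate=39, base_rate=20):
    """Hypothetical tiered capital-gains rate: decays with holding period.

    start_rate and base_rate are invented placeholder percentages.
    """
    if months_held >= 120:        # ten years or more: base long-term rate
        return base_rate
    tiers = months_held // 6      # one point off per completed six months
    return max(start_rate - tiers, base_rate)

# A quick flip pays the top rate; patience is rewarded step by step.
print(capital_gains_rate(3))    # 39
print(capital_gains_rate(13))   # 37
print(capital_gains_rate(120))  # 20
```

The `max(..., base_rate)` floor means the schedule can never reward a long hold with less than the base rate, so the incentive is monotonic: holding longer never costs more.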


Oh, and if corporations are people too (fuck you very much, citizens united) then we also need equivalent ways to punish them. I don’t know what prison would look like for a corporation but we ought to figure that out.
posted by nat at 1:15 AM on December 19, 2017 [4 favorites]


I don’t know what prison would look like for a corporation

Any half decent accountant should be able to turn it into a nice little earner.
posted by flabdablet at 5:03 AM on December 19, 2017 [3 favorites]


fwiw!*
He lit several cigarettes, and talked excitedly of “building a digital society.” It struck me then how long it had been since anyone in America had spoken of society-building of any kind. It was as if, in the nineties, Estonia and the U.S. had approached a fork in the road to a digital future, and the U.S. had taken one path—personalization, anonymity, information privatization, and competitive efficiency—while Estonia had taken the other. Two decades on, these roads have led to distinct places, not just in digital culture but in public life as well.
posted by kliuless at 6:01 AM on December 19, 2017 [6 favorites]


So the question is: how do we destroy these malignant, slow AIs we call “corporations”? How do we make them work for us instead of against us?

you could also own them :P
posted by kliuless at 6:08 AM on December 19, 2017 [2 favorites]


On preview – y'all already have a virus, too. It's called drugs. I mean seriously, check out that MeFi post.

Too Many Americans Live in a Mental Fog - "Drugs, pollution and poverty make it hard for lots of people to think clearly. It's a personal tragedy and a drag on the economy."
posted by kliuless at 6:11 AM on December 19, 2017 [4 favorites]


Chiang doesn't fear governments enough. They engage in murder and tyranny on a grand scale, and are just as capable of misdirecting an AI as corporations are.

This — there's not that much daylight between a company/corporation (I am using the two interchangeably here) and a government when looked at in the abstract. They're both institutions that exist as much in the imaginations of the populace as in the real world, yet they exercise power there by getting actual people to do their bidding, as agents. We reify and make both manifest through individual action (often voluntary action, at least on the moment-to-moment scale), then pretend that they exist as cohesive entities, when really they're a sort of emergent behavior of complex, chaotic systems. I suspect that insofar as "AI" will ever exist, it will be similar, with the technology making it faster but not fundamentally different.

Most modern governments tend to (at least claim to) be democratic, and most corporations are not, but that's not really the important distinction when you're looking at them structurally and comparing them to flesh-and-blood individuals. You could make a corporation that's democratic—and they arguably are, within the economic framework of fractional ownership—and there are certainly governments that are not. There have also been, historically, cases where an entity was simultaneously both a government and a corporation (e.g. the British Empire's colonial Companies).

There are, of course, many exceedingly tedious formal definitions of both, but I think we lose the forest for the trees when we get down to that level. If aliens landed on Earth tomorrow, it might take some time to explain the fine line between the two.

I see this as a positive thing, because, since governments have been around longer (in various forms) than corporations, realizing that they are much the same allows us to consider solutions to controlling corporate behavior from our longer experience with governments. But I also think our experience failing to control governments provides a sobering warning as to our limits of control over other reified distributed systems.

Perhaps more succinctly: a sufficiently advanced AI is likely to be indistinguishable from government.
posted by Kadin2048 at 12:37 PM on December 19, 2017 [1 favorite]


Oh good, we're talking about this. I appreciate that Chiang introduced BuzzFeed to the concept of corporate singularity (which, yes, is already here) and included a killer reference to "Strawberry Fields Forever." Such style!

So the question I was talking about with my brother last night was, yes, ethics. What's the corporate equivalent to the three laws of robotics? It seems the three laws most definitely don't protect us from corporate malfeasance in a complex system governed by sometimes deliberately ignorant human actors following the will of corporations.

These are Isaac Asimov's three laws of robotics.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
It seems like corporations are already really great at No. 3, protecting and perpetuating themselves. But so many shitty companies are terrible at Nos. 1 and 2. Even otherwise good or goodish companies are terrible at those. If you deprive an AI of knowledge of the harm it's doing to humans, it could do heinous things without knowing it—same for corporations. And in fact, a lot of corporations operate on deliberately shady terms for that reason. For instance, I tried to get in touch with Adidas corporate about a month ago, to ask what glue they use in their shoes, because I have a new allergy to a resin used in shoe manufacture. This was their answer.
Hi [limeonaire],

Thank you for reaching out. Unfortunately we cannot confirm or deny that our products are made with this particular adhesive. We do not have access to the information regarding the glue used in creating our products. We do apologize for this inconvenience.

adidas is all in,

Consumer Relations
adidas America
Contact Us
www.adidas.com
Adidas is all in, all right—all in on protecting its own interests. I guess maintaining plausible deniability is one way to manage a global supply chain. All I wanted to know was whether I should keep wearing the beautiful shoes I bought at their Fifth Avenue flagship store back in April, prefacing my remarks with the fact that I've been a fan of the brand for years. Unfortunately, in protecting the company they fail to protect my interests as a human being with skin, so I'm probably going to stop buying their products. Also, the language is interesting (and telling) here in and of itself: Isn't "cannot confirm or deny" the language of, like, government intelligence in wartime? I guess corporate communications and corporate law have been a dystopian land of the damned for some time now, but still, it's like, wow, when did these things converge to this point?

Anyway, Asimov's robot series is all about exploring no-win scenarios like that. There are lots of things we can't know. I was running through potential scenarios with my brother last night: Robot saves human from drowning. Little did they know that that human was on his way to commit genocide against many other humans. Or robot builds big strawberry farm (or perhaps CAFO) for human. Little did they know that CAFOs emit wayyyyy more gases than other things and so they doomed humans and the planet, to get hyperbolic about it. Robot says "Yo dawg, I heard you like chocolate," and procures maximum chocolate for humans. Unfortunately, they skirted import restrictions mandating testing for arsenic content to do so, and/or offered rewritten legislation and donated to human politicians to get it passed and remove regulations protecting humans, and so a bunch of humans got arsenic poisoning, etc. (To get meta about this: I bet an AI could programmatically come up with many more scenarios, heh. Robotic mad libs. Robot builds ___, does not realize ___ does ___ , dooms humanity because ___.)

It's been so long since I read those books, so I don't recall how (or whether) Asimov deals with unforeseen threats to humans on a global scale. What I do know is that most corporations don't have a good excuse for being poor stewards of the planet. So much about capitalism really makes sense if you think about it like robots: "You like Christmas ornaments? Have more Christmas ornaments!!!!!" "You like sparkly metal? We have lots of that!" "STRAWBERRIES!" But it's worse than all that, because it's humans doing it (at least at this point).

My brother spoke of the specter of evil business AIs, and I was like, aren't they actually the benevolent ones who just want to give us all the Formica countertops our hearts could ever desire and then some? "I JUST WANTED TO MAKE YOU HAPPY." It's like when there are production overruns and too many cars are made and so then auto plants have to lay off humans. "OOPS, WE TRIED TOO HARD TO MAKE YOU HAPPY."

But the plausible deniability issues with the global supply chain, that's the stuff of corporate singularity right there. And this goes back to the Kipple Field Notes post from the other day. Like yeah, the maker movement can't do the things that people with access to a real supply chain can do. But the people with access to a real supply chain can do horrifying things without thinking about it. This is both liberating and endangering of humanity. And then there's the stuff about corporations and their agents having literally written an increasing amount of proposed legislation, and it's just like... How do you stop this madness, or can it be stopped? Panic aside, I'm curious what people would propose as the corporate version of the three laws, or if such a simple formulation is even possible for complex systems like global corporate supply chains and world governance.
posted by limeonaire at 2:50 PM on December 19, 2017 [5 favorites]


It might be worth recalling the warning of that old firebrand, Adam Smith: the interest of capitalists (manufacturers and investors) is "never exactly the same [as] that of the public", and is generally "to deceive and even to oppress the public". At the same time, they're far abler at getting government to do their bidding than any other class.

So what do we do with corporations? One hint comes from looking at what they most fear: taxes, regulation, and unions. So, tax income and wealth progressively; regulate the hell out of businesses; and give employees real power internally.

The US model of corporations responsible only to the stockholders is part of the problem, and it's not even universal— e.g. in Germany labor has half the seats on the board. We could adopt that provision, and even add other stakeholders, such as representatives of their community or customers.
posted by zompist at 3:42 PM on December 19, 2017 [9 favorites]


It is interesting to note, at this point in the discussion, that the field of cybernetics, from nearly its inception, has always been just as interested in systems composed of individual human actors -- like corporations -- as in artificial mechanical systems. Early sci-fi writers and readers like Asimov would have been keenly aware of this, perhaps even more so than us today.
posted by tobascodagama at 4:57 PM on December 19, 2017 [2 favorites]


I also think our experience failing to control governments provides a sobering warning as to our limits of control over other reified distributed systems.

I think governments are very nicely under control. The only real question is who the most influential controllers are, and in general that answer will be the same everywhere: extremely wealthy people.

If we're going to talk about stepping back and abstracting things, wealth is a good one to start with. To my way of thinking, a person's wealth is a measure of the extent to which the rest of us have given them permission to do whatever they want.

Facebook, if considered as an AI, is already an astonishingly wealthy one by that measure. So much permitting.
posted by flabdablet at 8:29 PM on December 19, 2017 [4 favorites]


Thank you for reaching out. Unfortunately we cannot confirm or deny that our products are made with this particular adhesive. We do not have access to the information regarding the glue used in creating our products. We do apologize for this inconvenience.


This is pretty much the exact same answer I got when I called a spice company about what's in their particular spice blend. Yeah lady, I'm calling you from the hospital, yeah my kid is having a severe reaction, and yeah I really want to know what's in your damn spices. No answer.
posted by tilde at 5:11 AM on December 20, 2017 [1 favorite]


So the question I was talking about with my brother last night was, yes, ethics. What's the corporate equivalent to the three laws of robotics?

Why we all need more philosophy in our lives - "Onora O'Neill, the British professor emeritus of philosophy at Cambridge University... was awarded $1m for her contributions to philosophy by the Berggruen Institute... there are not many other women of O'Neill's age (she is 76) who are collecting $1m prizes of any type for their intellectual endeavours – unfortunately..."
“[People say] the aim is to have more trust. Well, frankly, I think that’s a stupid aim,” she said in a recent TED talk. “I would aim to have more trust in the trustworthy but not in the untrustworthy. In fact, I aim positively to try not to trust the untrustworthy.”

Instead, O’Neill argues that “we need to think much less about trust, let alone about attitudes of trust detected or mis-detected by opinion polls” and focus “much more on being trustworthy, and how you give people adequate, useful and simple evidence that you’re trustworthy”.

This requires better transparency. Another, less discussed, route to building trust is for institutions and individuals to make themselves vulnerable... But there is another key point: O’Neill believes we need to concentrate on the concepts of ethics and duty. This has gone out of fashion in recent years; instead, there is more of a focus on citizen rights and regulations. But O’Neill is convinced that it is impossible to cure society’s ills by simply imposing further rules. “You have this compliance mentality gone mad, and it doesn’t work,” she told me over lunch this week. Instead, she wants society to rediscover the forgotten concept of ethics – and to celebrate this.
So what do we do with corporations? One hint comes from looking at what they most fear: taxes, regulation, and unions. So, tax income and wealth progressively; regulate the hell out of businesses; and give employees real power internally. The US model of corporations responsible only to the stockholders is part of the problem, and it's not even universal; in Germany, for example, labor has half the seats on the board.

we (the people!) could be the stockholders funded by a wealth tax/land-value tax, which could in turn fund a citizens' dividend and other social programs :P
You know there is this popular narrative that public companies are making all this money and not passing it on to workers or customers but just returning it to investors, who have more money than they need anyway. But there is a sort of balancing transaction in private markets, where SoftBank Group Corp. invests $3.1 billion in WeWork Cos. and WeWork spends it on giving new tenants free rent:
Armed with a $3.1 billion commitment of new cash from SoftBank Group Corp.'s investment fund this summer, WeWork has ratcheted up pressure on an array of competitors, offering their tenants lucrative deals—and sometimes even free food—to convince them to defect.

Co-working companies around the globe, from small operators of a single space such as Wolf Bielas in San Diego to midsize competitors like Bond Collective in New York, say that this fall, WeWork embarked on a marketing blitz to lure large numbers of their tenants with a year of free rent with a two-year contract.
This is like the popular theory of Uber that it is a high-cost taxi service that is massively subsidized by investor money. You could have a Unified Theory of Money Stuff Worries that goes like:
  1. Public companies extract money out of consumers in the form of monopolistic pricing encouraged by common shareholders.
  2. They give the money to investors in the form of share buybacks.
  3. The investors invest the money in private unicorn companies at wild valuations.
  4. The private companies give the money to consumers in the form of below-market pricing encouraged by indiscriminate investors.
It is not a good theory. It does not seem especially efficient. And yet it has a certain appeal. The public market is a place of ruthless shareholder-value maximization, which is nice for the shareholders but a little grim and boring; the private market is where the shareholders go to be frivolous and blow off steam.
also btw, re: adam smith - "A free market that ignores the six key principles Adam Smith laid out is not just bad, but worse than most alternatives."

oh and re: swedish/german-style workers' councils/unions - "1: Organize at the regional level, not the company level. 2: Have unions provide more services to their members. City-level organization and increased member services would preserve and enhance unions' traditional role as bargainers, while also increasing local political clout and giving non-unionized workers more incentive to join up."

A game-theory solution for a fractured America - "The U.S. is an indefinitely repeated prisoner's dilemma. Everyone is locked in a room with everyone else, forever. There is no escape, so the only rational strategy is to learn how to get along."
posted by kliuless at 9:34 PM on December 21, 2017 [1 favorite]
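The "indefinitely repeated prisoner's dilemma" framing in that last link can be made concrete with a toy simulation. This is a hypothetical sketch (the strategy names, payoff values, and functions are my own illustration, using the standard payoff matrix, not anything from the linked article), showing why cooperation becomes the rational strategy when nobody can escape the game:

```python
# Standard prisoner's dilemma payoffs: temptation 5, reward 3,
# punishment 1, sucker 0. Keys are (my move, their move).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    # Cooperate first, then mirror the opponent's previous move.
    return their_history[-1] if their_history else "C"

def play(strat_a, strat_b, rounds=200):
    """Run a repeated game and return each player's total score."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Cooperators who punish betrayal prosper together; pure defectors
# stay locked in mutual punishment forever.
print(play(tit_for_tat, tit_for_tat))      # (600, 600)
print(play(always_defect, always_defect))  # (200, 200)
```

Against a tit-for-tat opponent, a pure defector gains only a single round's temptation payoff before being punished for the rest of the game, which is the intuition behind "the only rational strategy is to learn how to get along."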


One thing Chiang's essay misses is that most people worried about AI singularities don't actually imagine them having a monomaniacal goal like making paperclips, picking strawberries, or maximizing shareholder value. That would (arguably) be an AI that is dumber than us. Pretty much by definition, a post-singularity AI is one which has goals and desires that are incomprehensible to us. The concern is that we'd be like ants trying to understand what human beings are up to.

Trying to influence the eventual goals or moral reasoning of those hypothetical future entities seems, in that view, almost impossible. Like trying to figure out which butterfly's wings need to flutter in order to get the right weather in Japan 50 years later.

None of which should diminish our resolve to change the goals and behavior of corporations, which are comprehensible and subject to our influence.
posted by straight at 11:29 PM on December 21, 2017 [2 favorites]




This thread has been archived and is closed to new comments