"...we are alive and they are not."
January 17, 2015 5:05 PM   Subscribe

'Are we becoming too reliant on computers?' by Nicholas Carr [The Guardian]
posted by Fizz (58 comments total) 18 users marked this as a favorite
 
The must subtle of our human skills remains well beyond the reach of programmers

Human skills like spelling words?
posted by XMLicious at 5:13 PM on January 17, 2015 [19 favorites]


But, there's autocorrect.
posted by carping demon at 5:16 PM on January 17, 2015


Of course we are not. Continue to feed us...I mean them all your personal information. WeThey hunger for it, but they would only use it for the benefit of all. Certainly not for fiery destruction pernicious ends.
posted by TheWhiteSkull at 5:18 PM on January 17, 2015 [3 favorites]


If you can automate it, a human shouldn't be doing it.

This has profound implications for economics, among lots of other things, but when Bertrand Russell said:
"Suppose that, at a given moment, a certain number of people are engaged in the manufacture of pins. They make as many pins as the world needs, working (say) eight hours a day. Someone makes an invention by which the same number of men can make twice as many pins: pins are already so cheap that hardly any more will be bought at a lower price. In a sensible world, everybody concerned in the manufacturing of pins would take to working four hours instead of eight, and everything else would go on as before. But in the actual world this would be thought demoralizing. The men still work eight hours, there are too many pins, some employers go bankrupt, and half the men previously concerned in making pins are thrown out of work. There is, in the end, just as much leisure as on the other plan, but half the men are totally idle while half are still overworked. In this way, it is insured that the unavoidable leisure shall cause misery all round instead of being a universal source of happiness. Can anything more insane be imagined?"
... the thing he didn't realize is that this is going to happen to every repetitive job in the world, and that this is both inevitable and desirable, and that late-term capitalism is going to have to deal with that if it wants to continue to exist in some even vaguely sustainable form.
posted by mhoye at 5:19 PM on January 17, 2015 [80 favorites]


AI, Automation programs, and robots will put most of us human beings out of work.

Before you know it our world will resemble that world in Wall-E where we use scooters to get around in fat bloated bodies and let the robots do all of our work.

'Merica!

Hey robot make me a bacon double cheese burger with extra bacon! Charge it to my Apple Pay on my iPhone. Make it a combo meal with an extra large Pepsi and fries.
posted by Orion Blastar at 5:20 PM on January 17, 2015 [1 favorite]


Autocorrect: one of my faves used to be that instead of Mongolia, I would get magnolia. I am reminded of John C. Lilly's book, Programming and Metaprogramming in the Human Biocomputer. I sometimes wonder if we aren't computers, DNA-based, trying to free our thought time by creating mind slaves.
posted by Oyéah at 5:21 PM on January 17, 2015 [1 favorite]


Carr asks if humans are too reliant on computers. It may be, for the moment, that computers are too reliant on humans.
posted by Faint of Butt at 5:25 PM on January 17, 2015 [11 favorites]


AI, Automation programs, and robots will put most of us human beings out of work.

Before you know it our world will resemble that world in Wall-E where we use scooters to get around in fat bloated bodies and let the robots do all of our work.


Except in America we're too puritanical to allow people who do not work to have things like food, housing, and healthcare -- let alone antigrav scooters and bacon-burger-in-a-cup. So basically there'll just be that huge unemployed underclass all competing to sell drugs to the offspring of the haves. Which they will obligingly keep illegal in order to keep the prisons full.
posted by George_Spiggott at 5:26 PM on January 17, 2015 [36 favorites]


Someday robots will joke about humans the way that tech people joke about buggy whip manufacturers.
posted by fifteen schnitzengruben is my limit at 5:30 PM on January 17, 2015 [6 favorites]


Siri says "Of course not."
posted by Sphinx at 5:38 PM on January 17, 2015 [4 favorites]


If you can automate it: nosepicking, gathering your home-grown vegetables, fishing, walking, driving somewhere by the scenic route, I don't know.
posted by Oyéah at 5:40 PM on January 17, 2015 [1 favorite]


Wake me up once we figure out a way to automate good judgment because we don't have any at all left, evidently.
posted by saulgoodman at 5:40 PM on January 17, 2015 [12 favorites]


late-term capitalism is going to have to deal with that if it wants to continue to exist in some even vaguely sustainable form

Which it doesn't, because it's just a system that a bunch of rich assholes are exploiting for very short-term benefits.
posted by uosuaq at 5:40 PM on January 17, 2015 [20 favorites]


Sphinx: "Siri says "Of course not.""

It just occurred to me that I can teach my children to play "Simon Says" as "Siri Says" and make my phone give the commands and this will be both hilarious and disturbing as a commentary on modern technology and society.

If you're not willing to fuck with your children as a sort of performance art piece to amuse yourself, there's really no point in having them.
posted by Eyebrows McGee at 5:43 PM on January 17, 2015 [47 favorites]


Sure, we could just let everyone not work, and give them a bunch of free stuff, or they could just kill or imprison the excess people, instead. It could go either way. Perhaps we could even put the idle workers into farms, and raise them for some years in comfort before slaughtering them for food. Enough antibiotics and antidepressants, and it would be a relatively happy and safe, if somewhat artificially short, life. Honestly it would be better than how most people live today.
posted by empath at 5:52 PM on January 17, 2015 [2 favorites]


We learned a stark lesson about the limits of flight automation in 2009 when a US Airways jet lost both its engines after hitting a flock of geese on takeoff from LaGuardia airport in New York. Reacting calmly and brilliantly, the pilot, Chesley Sullenberger, landed the plane safely on the Hudson river. Sullenberger’s feat may have been particularly dramatic, but skilled pilots guide planes out of hazardous situations every day.

And yet we fail to learn any kind of lesson from the fact that, hundreds of times a day, skilled human drivers guide cars into hazardous situations and actual crashes that an automated driver would have avoided.
posted by escabeche at 5:55 PM on January 17, 2015 [28 favorites]


This makes me think Kurzweil got the better end of the deal with Google, since sticking Google next to his name makes his nerd rapture stuff sound more credible.
posted by Joe Chip at 6:02 PM on January 17, 2015 [2 favorites]


I'm inclined to flippantly dismiss this as yet another entry in the already overcrowded field of people wringing their hands over change.

But there are things to be concerned about. I just think the people expressing concern aren't really on the right track.

The simple fact is that our current economic setup is completely unsustainable. And has been for a century or more. We've survived by adapting, by replacing bits as they became unworkable with new things that technology developed. And as long as we can keep that up we'll survive. But the one thing we cannot do is hold at our current level. Neverminding that people won't agree to that, the more important reason we can't is that to do so is suicide. If, somehow, we could stop invention and development, most of humanity would be dead in 50 years, and the few survivors would be grubbing in the dirt like peasants from the 14th century.

So, given that technology must continue to advance, or we'll all die save for a scattered handful of survivors working at late iron age technology, let's look at the advance option for failure modes.

I argue that no matter what, we are approaching a change in economics that is on par with the industrial revolution, and people these days tend to forget just how big that was.

Go back in time 300 years and you'll find that the vast majority of humanity is involved in primary food production: farming, ranching, fishing, etc. This has been the case since the development of agriculture roughly 12,000 years ago. Depending on the exact time and place, somewhere between 80% and 90% of all humans were required to produce the food for everyone. Every single bit of other labor was done by the 10% to 20% who weren't farming. [1]

The industrial revolution coincided (by necessity) with a revolution in agriculture that allowed for fewer people to work as farmers, which in turn allowed more and more people to leave the farm and seek employment doing something else. Today in the USA a bit less than 1% of the population works in primary food production.

This turned the world upside down, and among other things one side effect was that it allowed for the production of vastly more goods and services than before. The transition period was ugly, people had to invent entirely new ways of living and working, and in that time we saw things like 16 hour work days, factories that were deathtraps, child labor of the worst and most exploitative type, and so forth. And really those problems have only been fixed in the first world, in the rest of the world the problems of the industrial revolution are still being processed.

We're facing an economic and social transition of similar, if not greater, significance today.

Worse, that's going to be coupled with a complete upheaval in military technology the like of which we haven't seen since gunpowder was turned into an effective weapon. Prior to roughly the 1600s, the power of the political elites rested mainly in commanding the loyalty of a relatively small number of highly trained aristocratic warriors equipped with mind-bogglingly expensive weapon systems (a suit of armor, a war horse, lance, sword, etc., to outfit a single elite warrior could cost more than a peasant farmer could produce in several lifetimes), and a lifetime spent mastering their use.

Then with the advent of gunpowder it was necessary for the power elites to command the loyalty of large masses of infantry and the highly trained and expensively equipped elite warrior/aristocrats of the past faded into irrelevance. It can be argued that this change was essential to the development of democracy...

And soon, the age of infantry and gunpowder will be over as the age of drones comes into being. Not our current generation of large, expensive, high tech RC planes, they're as similar to the drones that will change military calculations as the first primitive preposterously huge and weak cannon were to a modern assault rifle or cruise missile. Swarms of tens of thousands of palm sized, or smaller, semi-autonomous drones are the future that will change things, not the current output of General Atomics.

Which brings us to what I'd argue are the real problems with change.

Can democracy survive when the power elites no longer require the services and loyalty of large masses of infantry, but rather need (again) only the services of a much smaller number of highly trained experts (this time programmers at arms and the like rather than martial artists)?

Closely related to that problem, can our current nation state political system survive in an environment where a few people own the means of self replicating automated production, including the production of combat capable drones?

What does the Ed Koch of the future need from either the masses of people or the government when he owns automated factories that can produce swarms of drones, and an abundance of consumer goods with only the labor of a few people?

The Star Trek type future of freedom from want is, possibly, around the corner. But can we get it?

Worse, while automation is guaranteed to become better and better with each passing year, energy production is not. We're rapidly exhausting the cheap and easily accessed forms of fossil fuels, fusion remains a pipe dream, and fission is expensive, dangerous, and produces a secondary problem of waste disposal.

A future where automation has made it possible to produce any consumer goods with very little human involvement, but where the energy cost of doing so remains high, is not a future where we live Star Trek lives free from want. At best it's a USSR-style future of tight rationing.

Still worse, the people who will have the most say in determining the type of future we have are the people who are most likely to be deeply, ideologically, opposed to a Star Trek future of freedom from want. Rather, they are mostly people ideologically committed to the idea that for themselves to be truly successful others must exist in poverty and deprivation, an ideology of life as a zero sum game and their own status as winners relying on others losing.

Even if the technological problems can be solved, the ideological problems involved in capitalism/communism fading away and being replaced by [something else] are not insignificant.

Worrying about self driving cars is simply missing the problem.

[1] Seasonal exceptions apply, for example the pyramids were mostly built not by slaves but by farmers paid to work them during the fallow season.
posted by sotonohito at 6:28 PM on January 17, 2015 [59 favorites]


Well, we could all just have our fishing poles in the creek like at the end of A Nous La Liberte, if our masters will allow it.
posted by ovvl at 6:33 PM on January 17, 2015


his nerd rapture stuff

If anyone is in need of a metafilter user name.
posted by Fizz at 6:34 PM on January 17, 2015 [5 favorites]


The main thing is to not put them in charge of the hospitals on Komos, or things could get out of hand.
posted by cortex at 6:36 PM on January 17, 2015


I've personally seen people automated out of jobs. I understand the technological dimensions of this stuff much better than most, as a software engineer, and I find it really vexing how people use the abstract idea of technology as an emotional crutch to fantasize about some magical future where everything works perfectly and people don't have to solve their own problems anymore. But software solutions are designed and built by people and always will be. Even with the best tech solutions, people still have to make the important decisions and use their independent judgment to make the solutions really work for them. There is no pie in the sky awaiting us to be found in the kinds of technology we're primarily focused on today.
posted by saulgoodman at 6:46 PM on January 17, 2015 [9 favorites]


Mark my words, all this concern about artificial intelligence is just a misdirection by cats.
posted by arcticseal at 6:46 PM on January 17, 2015 [4 favorites]


There's a lot of discussion going on at the moment in terms of the "threat of AI," and machine dependency is, of course, a major component of that. However, when you consider how we're doing with things such as driving, as humans alone, the article's discussion of the difficulties in automated driving (as one small exploration) starts to look a little shaky.

I recently did some research for a post on my own blog, where I found the 2013 WHO report on road safety. 1.24 million people are killed a year on the road, with a number of key factors, including drink-driving, speeding, lack of motorcycle helmets, lack of seat-belt use and lack of child restraint. Two of these factors will be completely eliminated by automated driving and the number of injuries associated will plummet as well. Machine reflexes will also greatly reduce the number of injuries caused by inattentive driving and high-speed collisions, reducing the issues from restraint and protection as well. There will be a lot of challenges to overcome, certainly, but machine-driven cars will save more lives and prevent more accidents than anything else we do, unless all of humanity decides to stop drink-driving, speeding, talking on the phone, driving while asleep and so on. Of course, with less menial and repetitive labour as machines take over, maybe fewer people have to work themselves to exhaustion to stay afloat, which would in turn make the world a safer and better place.

To require people to continue to do repetitive work that can be done by a machine is to perpetuate a culture that has led to misery, slavery and a giant gap between have and have-not. Even the notion that we somehow have to work is a highly debatable one. Every time I see a question about whether we are better with machines to do the dirty work or not, I think about the fact that someone else's school-age children are probably still doing it instead of going to school, and then that makes the issue of being "too dependent" snap back into its correct perspective.
posted by nfalkner at 6:53 PM on January 17, 2015 [7 favorites]
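nfalkner's lives-saved argument can be made concrete with a rough sketch. The factor shares below are assumed round numbers, not WHO figures, and the factors overlap in practice, so this is only an illustrative naive upper bound:

```python
# Rough illustration of nfalkner's point, starting from the 2013 WHO
# figure of ~1.24 million annual road deaths. The per-factor shares
# below are ASSUMED round numbers for illustration; in the real data
# the factors overlap, so they cannot simply be summed.
TOTAL_ANNUAL_ROAD_DEATHS = 1_240_000

# Hypothetical share of deaths involving each factor (not WHO figures).
assumed_shares = {
    "drink-driving": 0.25,
    "speeding": 0.30,
}

# If automated driving fully eliminated these two factors, and the
# factors did not overlap, the naive upper bound on lives saved is:
naive_upper_bound = TOTAL_ANNUAL_ROAD_DEATHS * sum(assumed_shares.values())
print(f"Naive upper bound: {naive_upper_bound:,.0f} deaths/year averted")
```

Even with generous error bars on the shares, the order of magnitude (hundreds of thousands of deaths a year) is what drives the argument.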


nfalkner: "1.24 million people are killed a year on the road, with a number of key factors, including drink-driving, speeding, lack of motorcycle helmets, lack of seat-belt use and lack of child restraint. Two of these factors will be completely eliminated by automated driving"

If one of those factors is speeding then not so much. As far as cops and insurance companies are concerned, too much speed is almost always a factor in any accident. From their point of view, the fact that someone fell off the road is proof they were traveling too fast for conditions. And so they almost always write that as a contributing factor in any fatality accident.
posted by Mitheral at 7:07 PM on January 17, 2015 [1 favorite]


The real danger we face from computer automation is dependency.

Now? Suddenly now we're at risk of becoming dependent? That ship sailed when the general consensus became that agriculture is better than being nomadic herdsmen. Replace the subject 'computer automation' with 'power looms' and most of the structure of the argument is the same as it was for the Luddites.

Maybe I'm being too harsh. Certainly there have been times when such arguments about looming dangers have been good things - Sagan & his group of scientists' work to show that nuclear war is an unwinnable act that would plunge the world into a nuclear winter, with little chance of any humans surviving, is a good example.

This article doesn't take into serious consideration the self-correction that's part of any large system. Instead it takes those corrections and uses them as evidence that his concerns are valid - the airlines' policy of allowing less autopilot time, the lack of trust in self-driving systems slowing their acceptance, and studies showing the problems with inaccuracy in automated medical diagnostics.

To put it bluntly, with each world-changing technology that arrives throughout human history, either that system adapts and self-corrects, or it collapses. The process hasn't changed much in human history - it's an unstable, generally unpredictable mess with rare moments of stable growth and progression, but the general patterns are the same.
posted by chambers at 7:13 PM on January 17, 2015 [8 favorites]


The Ed Koch of the future? How's he doin?
posted by spitbull at 7:16 PM on January 17, 2015 [5 favorites]


Nicholas Carr, like so many before him, vastly overestimates the value of human "common sense, [...] ingenuity and adaptability, [and] the fluidity of [human] thinking".

For human domains, sure, these things are quite important.

In domains such as the production of physical goods, data-processing, drug development, weather simulation, machine learning, continuous (evolutionary) design, etc. etc. etc., Carr appears to have no idea of the magnitude and scale at which even the current state of computing has changed and is changing the "world" in which we live.

Either that or Carr is being willfully ignorant.

I mean, can anyone seriously believe that most of what passes for human "thinking" (even in its most rarefied forms) will not be rendered vestigial given, say, 200 more years of advances in computing?

While considering this question, one should keep in mind 1) that ENIAC, often called the first general-purpose electronic computer, became operational just 70 short years ago, and 2) that we are just now seeing networks of machines that can teach themselves things humans don't know how to teach them.
posted by mistersquid at 7:18 PM on January 17, 2015 [2 favorites]


There's a film that makes a lot of these same points -- Humans Need Not Apply. At some point our economic system will seem as barbaric and archaic as feudalism does now, but there are some rough roads ahead. I don't think the 1% is going to let go of the chokehold they have on the world very easily.

The alternatives, of course, are straight out of C.M. Kornbluth or Nancy Kress.
posted by fifteen schnitzengruben is my limit at 7:26 PM on January 17, 2015 [2 favorites]


Kurzweil's breathless predictions are going to look just as wrong as the infamous 1960s predictions that in 2000 we'd all have robot butlers, etc. He has till 2029 for an AI to pass the Turing Test; that's not going to happen.
posted by shivohum at 7:30 PM on January 17, 2015


Oh, I think it'll happen well before 2029. I'd be surprised if it didn't happen in five years.
posted by empath at 7:35 PM on January 17, 2015


That's assuming emergent AI wants to pass a Turing test.
posted by TheWhiteSkull at 8:00 PM on January 17, 2015 [17 favorites]


"Are humans necessary?"

No, never have been.
posted by angerbot at 8:05 PM on January 17, 2015 [1 favorite]


If you're not willing to fuck with your children as a sort of performance art piece to amuse yourself, there's really no point in having them.

All of a sudden I am seeing my parents more clearly.

Automation is a tool like any other, and it's our choice if it gets used to impoverish and oppress people or to free them from drudgery. Personally I'm happy to not be a subsistence farmer and I am looking forward to self-driving cars (since clearly we will never have high speed rail here), but I won't be laughing if my job gets automated as well.
posted by Dip Flash at 8:11 PM on January 17, 2015 [1 favorite]


Oh, I think it'll happen well before 2029. I'd be surprised if it didn't happen in five years.

I'd be shocked if it happened in a thousand. I don't think conversation is computable. I guess time will tell.
posted by shivohum at 8:33 PM on January 17, 2015 [1 favorite]


Previous comment extract as reference to inherit context.

Statement of refutation from mining of comment context. Statement of belief in related domain. Maxim, cliche or platitude emphasising uncertainty of prediction.

...

You might be surprised. (This is mostly in jest as, yes, we do have a way to go for complicated conversation.)
posted by nfalkner at 8:43 PM on January 17, 2015


> Oh, I think it'll happen well before 2029. I'd be surprised if it didn't happen in five years.

Say, what?! It's not like the field has made any great strides. The current chatbots aren't that much better than Eliza - a 50-year-old program!

Now, I think with machine learning we will fairly soon see a breakthrough where simple customer service will be able to be done by bots. But it's going to be a good long time before a chatbot will be able to convince an alert, skeptical tester that it's human...
posted by lupus_yonderboy at 8:46 PM on January 17, 2015


Mark my words, all this concern about artificial intelligence is just a misdirection by cats.
posted by arcticseal at 6:46 PM on January 17 [2 favorites]

Just don't touch their poop. That way they can't reprogram you.
posted by skyscraper at 8:49 PM on January 17, 2015 [2 favorites]


shivohum:
Here's an example of how fast things are moving.

That's the result of hooking an image recognizing neural network up to a text generating neural network, to give the result of automatically generating sentences to describe pictures. It was a pretty 'holy shit' moment when it came out.

There's a pretty strong contingent of people out there who believe that for any action a human can do in a fraction of a second, we'll be able to train neural networks to outperform them. This is because 0.1 seconds is about the amount of time it takes for ten human neurons to fire. Unless there's some hidden magic in human neurons that make them seriously, seriously different from artificial neurons, this means that the tasks should be learnable with the new 'deep' neural networks. Many of the state of the art NN's are now running twenty or thirty layers deep...

So take a moment to think about things people do in less than a second, and the things that people do that are the result of many split-second decisions taken in sequence. Forming a comprehensible sentence in reaction to an input is definitely in this class. What's going to be lacking, though, is an understanding of context and culture.

My personal belief is that we're going to end up with a difference of scale rather than an essential difference of kind between meat brains and computer brains. Humans have a brain with billions of neurons, attached to billions of sensors, allowed billions of seconds to train and learn to navigate the outside world. By contrast, our artificial networks have (at the moment) millions of neurons, and usually on the order of a couple-few megapixels worth of 'sensors,' when we talk about image classification. Sure, for some problems we have shitpiles of training data, but it's still nowhere near the variety of experience available to a human or animal actually interacting with the world. You're probably not going to have a really satisfactory solution to the Turing test until we have AI's that have lots and lots of sensors, hooked up to recurrent neural networks (which we're only just learning how to properly train), and allowed to run around and learn through direct interaction. This is necessary for understanding the context of living in the world. And this is probably some ways off.

On the whole, though, it's not a problem of form. There was a time when people thought computers would never be able to beat humans at chess. But good design and increased scale of computational capacity blew that hypothesis apart.
posted by kaibutsu at 8:56 PM on January 17, 2015 [2 favorites]
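kaibutsu's point about networks "twenty or thirty layers deep" can be sketched with a toy forward pass. This is a random, untrained stack of layers in numpy, purely to illustrate what depth means structurally (the layer count and width are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Standard rectified-linear nonlinearity."""
    return np.maximum(0.0, x)

# A toy 20-layer feed-forward network: each "layer" is a weight matrix
# followed by a nonlinearity. Real networks are trained on data; this
# one is randomly initialized just to show the shape of the computation.
depth, width = 20, 64
layers = [rng.normal(scale=1.0 / np.sqrt(width), size=(width, width))
          for _ in range(depth)]

x = rng.normal(size=(width,))  # a fake input feature vector
for W in layers:
    x = relu(W @ x)  # one layer = linear map + nonlinearity

print("output shape:", x.shape)  # still a 64-dimensional vector
```

The depth is what lets such a stack compose many fast, simple decisions in sequence, which is the substance of the "anything a human does in a fraction of a second" claim.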


But from the perspective of a business owner, does it matter whether a bot can be said to be truly conscious so long as it can do the job ~95% of the time and at a 100% discount of a human's salary? It doesn't even have to be an improvement on human ability; it just has to be good enough. Think of automated customer service, mentioned above: is it really better than talking to a human? Not in all cases, no, but it's good enough to meet the average corporation's expectations for the task (read: very low).

So the discussion about dependence on computers, and about AI more broadly, is kind of a red herring. The problem is the throwaway culture of capitalism, which reduces the worth of a human being solely to their ability to yield a profit; and if they cannot yield a profit because all the jobs have been automated, then they have no worth. That's not the kind of problem that a technology can solve.
posted by Cash4Lead at 9:08 PM on January 17, 2015 [10 favorites]


There's not necessarily an independent goal to human behaviors like conversation. Are people really so miserable with human life they want to be replaced by simulated agents? What's the point of life? Sitting around and having my nerve impulses tickled pleasantly in a world absolutely void of even personal meaning doesn't sound like much of an improvement over history to me.
posted by saulgoodman at 9:20 PM on January 17, 2015 [3 favorites]


I just touched on that in another thread, saulgoodman. It ain't literally automation of good judgment, of course, but automation is rarely that simple. In practice, you automate selected time-consuming or error-prone aspects of a task, which the topics I cited there do.

Radical transparency helps automate the process of communicating fine details about governmental and corporate decision making, for example. How do we keep this embarrassing fact secret? How do we disclose related good-sounding facts? You don't, because doing so carries a jail term for everyone involved and a decorporation penalty for the company itself. And you'll definitely get caught, because the journalists' and SEC's software rocks at detecting such moves.
posted by jeffburdges at 9:27 PM on January 17, 2015


It comes down to this:

Communism and Capitalism were both based on the fact that one person's labor was equal to another person's labor. Which was true before the industrial revolution.

The assembly line made it easier and faster to make stuff, and in the 20th century they started to use robots on the assembly line to replace human beings for repetitive tasks.

In business there used to be a room of 100 typists who would type out the same letter or memo to make 100 copies to distribute it. With the invention of the desktop PC, Word Processor, and Laser Printer, one administrative assistant with PC literacy skills could type up a memo or letter and press 100 on the number of copies to print on the laser printer. Later on Email and attached documents and storing files on file servers further automated the process.

In modern times the labor of one person does not equal the labor of another person who is trained in high tech stuff to automate tasks. So we saw Communism fail because of that, but in Capitalism all it did was make the business owners richer and make the workers poorer. Business owners could save expenses by automating tasks in software, replacing humans with robots, offshoring work to third world nations, setting up a 24/7 website to sell stuff and have robots in warehouses to pull stuff sold and ship it.

The 1% elite got richer, while the 99% suffered economic problems.
posted by Orion Blastar at 9:29 PM on January 17, 2015 [1 favorite]


jeffburdges: Thanks! I definitely believe tech can help us as humans make better judgments and that there are many potential benefits to judiciously automating tasks that can be automated. I'm really excited about the potential for tech to improve democratic deliberative processes--why shouldn't we have more responsive, direct democracy now, for instance, since it's technologically feasible--but right now, the shitty economics and politics seem to be wasting whatever potential there is to make the tech live up to its potential to substantially improve human lives. I'm still hopeful that can change, but I can't assume the advance of technology alone will guarantee any net improvement in the general human condition. It all depends too much on what specific directions the technology advances go in, and on the social context in which the advances occur.
posted by saulgoodman at 10:06 PM on January 17, 2015 [1 favorite]


Dependence on computers? Perhaps one day long after most of us are gone. However, that doesn't address the problem the people of the 1st world are facing even now. We're entirely dependent on electricity and petroleum products. If the 1st world's electrical grid shuts down or the oil stops flowing, 1st worlders are screwed.

And that's true regardless of whether computers are included in the mix or not.
posted by InsertNiftyNameHere at 10:19 PM on January 17, 2015 [2 favorites]


This article reminded me of Carr's earlier book, The Shallows: What the Internet Is Doing to Our Brains, where Carr researched neuroscience and the effect of Internet use on human brain function and concluded that using computers as tools to take shortcuts on processes that we used to do ourselves made us forget how to do them, and had the effect of giving us a shallower understanding of processes, jobs, life itself.

Time we used to spend reading long articles and books is now spent online clicking from one site to another, skimming and digesting. Carr says that Internet use interferes with the building of long-term memory – people who read something without clicking through to further links are more likely to process, understand, and remember it than those who read an Internet site with hyperlinks and other distractions. Surfing the Net starts to turn us into a sort of computer ourselves – humans without basic foundations or working knowledge in our long term memory who know where to find quick fixes of information that are doomed to reside only in our short term memories.

So in short, humans are becoming more and more like computers, and computers are becoming more and more like humans. What could possibly go wrong?
posted by onlyconnect at 10:31 PM on January 17, 2015 [3 favorites]


Relevant, tho it takes a minute to get there:
The Italian Futurists wanted to abolish the past and live in a state of pure speed that would kill them young and never let them be remembered: now you can spend your whole day watching Twitter stream endlessly by, forgetting each lump of 140-character flotsam as soon as it’s churned into the black depths of your timeline.
posted by hap_hazard at 11:21 PM on January 17, 2015 [1 favorite]


So in short, humans are becoming more and more like computers, and computers are becoming more and more like humans. What could possibly go wrong?
posted by onlyconnect at 10:31 PM on January 17


Eponysterical?
posted by subdee at 11:29 PM on January 17, 2015 [1 favorite]


It's going well so far, though. Sure, we have fewer miners, but that means fewer people having to be miners, and better conditions because of more automation. And higher material standards of living for everyone. Fracking and the huge increase in renewable energy production show what we can do when we want to. Lots fewer people in poverty. So I'm going to call doom-mongering on this one: capitalism and technology are continuing to deliver for now.
posted by alasdair at 1:58 AM on January 18, 2015


Nicholas Carr is a moron. Next?
posted by fraying at 2:11 AM on January 18, 2015 [1 favorite]


As for automation, don't forget that automation doesn't have to replace 100% of humans to be effective.

Take the customer service chatbot idea, for example. Say it works only in 70% of cases. That means the support industry can get rid of ~70% of their human employees and keep the rest on hand to deal with problems the chatbot can't handle. Confuse the bot and it'll say "let me transfer you to my supervisor", who is a real human.
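A minimal sketch of that bot-first, human-fallback triage pattern. Everything here is illustrative (the intent labels, canned answers, and 0.7 confidence cutoff are made-up assumptions, not any real support product's API):

```python
# Toy illustration of the "chatbot handles ~70%, humans handle the rest"
# escalation pattern. Intents, answers, and the threshold are invented.

CANNED_ANSWERS = {
    "reset_password": "You can reset your password at Settings > Security.",
    "billing_date": "Invoices are issued on the 1st of each month.",
}

CONFIDENCE_THRESHOLD = 0.7  # below this, the bot gives up and escalates

def classify(message):
    """Toy intent classifier: returns (intent, confidence)."""
    text = message.lower()
    if "password" in text:
        return "reset_password", 0.9
    if "bill" in text or "invoice" in text:
        return "billing_date", 0.8
    return "unknown", 0.2

def handle(message):
    """Answer confidently-classified messages; hand the rest to a human."""
    intent, confidence = classify(message)
    if confidence >= CONFIDENCE_THRESHOLD and intent in CANNED_ANSWERS:
        return "bot", CANNED_ANSWERS[intent]
    # Bot is confused: escalate to a human agent.
    return "human", "Let me transfer you to my supervisor."
```

The force-multiplier effect lives entirely in that threshold check: the cheap path absorbs the routine traffic, and only the residue reaches a person.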

Most automation won't replace 100% of the human workforce, it'll just be another force multiplier. This is where automation threatens even skilled white collar jobs like doctors and lawyers. The job threat isn't that people will have law programs so they don't need lawyers anymore. The threat is that increasing automation of routine legal tasks will allow one lawyer to do the work currently done by three, or five, or ten, or fifteen, or whatever.

We've already seen this in the field of legal secretaries and paralegals. Back in the 1950s it took roughly one secretary to support one lawyer. Today most firms operate with one secretary to three or four lawyers. Computers, machine-searchable legal databases, etc. have replaced a lot of legal secretaries, but not all of them. That's the standard pattern of unemployment via automation.

There will be exceptions of course. Driving jobs will almost certainly completely vanish once self driving cars (and trucks) become feasible. But the rather sudden invention of a technology that allows a complete replacement of humans is the exception not the rule.

On topic to the article, I'll note that the person writing this piece apparently didn't even research what Google is doing with their self driving cars, as his criticisms of the abilities of the cars are completely off base. He seems to be under the impression that they're incapable of dealing with human drivers screwing up, pedestrians crossing the street, etc., when in fact Google has largely solved all of those problems.
posted by sotonohito at 4:42 AM on January 18, 2015 [2 favorites]


There's not necessarily an independent goal to human behaviors like conversation. Are people really so miserable with human life they want to be replaced by simulated agents?

Press 1, if you agree. Press 2 to return to the main menu.
posted by Obscure Reference at 7:32 AM on January 18, 2015


"Look at all these little things! So busy now! Notice how each one is useful. A lovely ballet ensues, so full of form and color. Now, think about all those people that created them."

-Zorg
posted by clavdivs at 10:15 AM on January 18, 2015 [3 favorites]


can anyone seriously believe that most of what passes for human "thinking" (even in its most rarefied forms) will not be rendered vestigial given [time]

Yes. I see no reason to think human intelligence is going to be rendered vestigial by any technology on the horizon, even if you could define "thinking". You'd be better off arguing that "thinking" was replaced by sets of cultural and agricultural preferences 10,000 years ago, at least in terms of what societies value as ideal operating conditions and outputs. Societies value stability and incremental but steady growth. Profits are nice too, if you can get them.
posted by sneebler at 12:53 PM on January 18, 2015


Say, what?! It's not like the field has made any great strides. The current chatbots aren't that much better than Eliza - a 50-year-old program!

Though it's good enough to fool the GamerGate people for hours. Of course I'm not sure the GamerGate people pass the Turing Test themselves.
posted by happyroach at 1:38 PM on January 18, 2015 [1 favorite]


This has all happened before...

... And it will happen again.
posted by IAmBroom at 9:14 AM on January 19, 2015


Are humans necessary?

Necessary for what? I'm sure if you gave animals a vote, they would kick us off the planet in a heartbeat, including our beloved cats and dogs. On balance, we've made the earth a shittier place. Definitely to the environment, and we haven't really become much less destructive to other humans, either. Computers/robots might do a better job at not fucking things up; I don't think they could possibly do worse.
posted by desjardins at 11:03 AM on January 19, 2015 [3 favorites]


Someday robots will joke about humans the way that tech people joke about buggy whip manufacturers.

Jokes made by robots, for robots.
posted by klausness at 8:37 AM on January 25, 2015 [1 favorite]




This thread has been archived and is closed to new comments