Elon Musk: Evangelist of the Brain-Computer Interface
January 1, 2018 8:02 AM

At the World Government Summit in Dubai in February, Elon Musk said that people would need to become cyborgs to be relevant in an artificial intelligence age. Musk wants to use brain-computer interfaces (BCI) in a bi-directional capacity, so that plugging in could make us smarter, improve our memory, help with decision-making and eventually provide an extension of the human mind.

“[Musk] wants to directly tap into the brain to read out thoughts, effectively bypassing low-bandwidth mechanisms such as speaking or texting to convey the thoughts. This is pie-in-the-sky stuff, but Musk has the credibility to talk about these things,” said BCI expert Professor Pedram Mohseni of Case Western Reserve University, Ohio.
posted by A. Davey (133 comments total) 20 users marked this as a favorite
 
elon musk is the glorious combination of pt barnum, ron popeil, and soapy smith that late stage capitalism deserves
posted by entropicamericana at 8:10 AM on January 1, 2018 [69 favorites]


"Human beings must have economic value to have any value at all," continued Musk. "More specifically, they must be economically valuable to me."
posted by Grimp0teuthis at 8:20 AM on January 1, 2018 [37 favorites]


Of course everyone would like to get implants connected to their brains that will have an obsolescence cycle similar to a cellphone, who doesn't want brain surgery every two years to not become disconnected from society?
posted by sukeban at 8:24 AM on January 1, 2018 [27 favorites]


I like him but he doesn't half talk some nonsense sometimes. He should stick to realisable goals like Tesla.
posted by GallonOfAlan at 8:24 AM on January 1, 2018 [1 favorite]


This is kind of incoherent. If AI replaces human labor, why would it have any use for people who believe that they are "relevant" because they are wearing a neuro-helmet?
posted by thelonius at 8:26 AM on January 1, 2018 [5 favorites]


I like him but he doesn't half talk some nonsense sometimes. He should stick to realisable goals like Tesla.

presumably talking sci-fi nonsense is easier and more lucrative than managing people and meeting production goals
posted by entropicamericana at 8:33 AM on January 1, 2018 [31 favorites]


Dunno. I think this is yet another reasonable challenge Musk is going after. The difference is, we generally all see a problem as unsolvable, and Musk just rewrites the problem to make it solvable.

What do I mean? The Boring Company took a multi-billion-dollar construction problem of tunnel size and redefined it: a smaller tunnel diameter with system-controlled speed (a sled), eliminating man-made accidents.

This in turn also builds real estate space for underground parking, and makes use of autonomous driving tech and electric vehicle tech his company has already developed.

So yes, he is full of hot air to a degree, but the reality is he sees a problem and has a vision to solve for it unconventionally.

I would rather live in Elon Musk's future over Trump's any day of the week.
posted by Nanukthedog at 8:35 AM on January 1, 2018 [14 favorites]


Elon Musk = John DeLorean 2.0
posted by larrybob at 8:40 AM on January 1, 2018 [8 favorites]


Given the existing bug-ridden quality of both hardware and software today, and the incessant "upgrading" of same leading to more problems, I would liken plugging your brain into a computer to putting a fork into a toaster, neurologically speaking.
posted by njohnson23 at 8:41 AM on January 1, 2018 [36 favorites]


The Warrior’s bland acronym, MMI, obscures the true horror of
this monstrosity. Its inventors promise a new era of genius, but
meanwhile unscrupulous power brokers use its forcible installation
to violate the sanctity of unwilling human minds. They are
creating their own private army of demons.

—Commissioner Pravin Lal,
“Report on Human Rights”
posted by johngoren at 8:45 AM on January 1, 2018 [9 favorites]


He can put himself in his rocket and fuck off to the moon.
posted by Artw at 8:47 AM on January 1, 2018 [6 favorites]


As someone who works in quality assurance in IT, and who has also managed RPA (robotic process automation) projects... yeah, no thanks. I'll stick with this low-bandwidth squishy thingamajig that has survived millions of years over "state-of-the-art" technology that craps out after two or three turns around the analog nuclear furnace that keeps us alive.
posted by fraula at 8:48 AM on January 1, 2018 [17 favorites]


Given the existing bug-ridden quality of both hardware and software today, and the incessant "upgrading" of same leading to more problems, I would liken plugging your brain into a computer to putting a fork into a toaster, neurologically speaking.

Given the current state of the IoT, imagine having your implants with the same level of interoperability and security as your internet-connected toaster, and a similar ease of complaining to makers who vanished into thin air two years ago and stopped shipping firmware updates.
posted by sukeban at 8:49 AM on January 1, 2018 [5 favorites]


Imagine having your brain hijacked to mine bitcoin.
posted by sukeban at 8:51 AM on January 1, 2018 [45 favorites]


Of course most of us will have to get the ad-supported brain chips that'll play an Uber jingle directly into your nervous system every twenty minutes, but I have it on good authority that it's a small price to pay for whatever it is we're supposed to be getting out of it.
posted by Sing Or Swim at 8:57 AM on January 1, 2018 [36 favorites]


In the fight for survival that comes with the destruction of our biosphere, we're going to need every advantage we can get.
posted by MrVisible at 8:57 AM on January 1, 2018 [2 favorites]


One of the undiscussed-but-plainly-obvious ideas in Iain Banks's Culture novels is that the Minds don't really have much use for the humans. Crew and Passengers are like fleas on a dog to a Ship, and Hub Minds look after Humanity because, well, what else is there for them to do? So the Minds and Drones do all the real work while Humanity enjoys an endless three-day weekend.

Perhaps hidden in the comments above is the realization that a post-scarcity economy is possible, but it will render most of us, perhaps all of us, redundant. Something to puzzle over.
posted by SPrintF at 8:59 AM on January 1, 2018 [8 favorites]


People say we don't need Black Mirror, this is why we need Black Mirror.
posted by Artw at 9:00 AM on January 1, 2018 [10 favorites]


I would liken plugging your brain into a computer to putting a fork into a toaster, neurologically speaking.

Or like that fridge or whatever it was, someone'd hack me & I'd start screaming PornHub comments while in line at the Post Office.
posted by Alvy Ampersand at 9:01 AM on January 1, 2018 [13 favorites]


I have it on good authority that it's a small price to pay for whatever it is we're supposed to be getting out of it.

Relevance! You want to be relevant, don't you?
posted by thelonius at 9:04 AM on January 1, 2018 [4 favorites]


Resistance is futile.
posted by Jonathan Livengood at 9:06 AM on January 1, 2018 [3 favorites]


Whiskey Tango Foxtrot, sign me up, I guess. If it's the only way to hack the Gibson...
posted by Samizdata at 9:11 AM on January 1, 2018


Is this the same Elon Musk who "reasoned" that people don't like public transit because there might be a serial killer on their bus/subway car? Yes, yes it is.
posted by plastic_animals at 9:23 AM on January 1, 2018 [18 favorites]


This also seems very relevant.

If AI = capitalism, do you really want capitalism in your head? Capitalism gives priority to the consolidation of ownership above all else, who is going to own that part of you?
posted by Artw at 9:27 AM on January 1, 2018 [10 favorites]


Perhaps hidden in the comments above is the realization that a post-scarcity economy is possible, but it will render most of us, perhaps all of us, redundant. Something to puzzle over.

Yeah, sure, but having a scarcity economy in a setting where a post-scarcity one is possible doesn't make people not-redundant. It just gives people unpleasant makework to do and then punishes them for not doing it even though it's totally unnecessary.
posted by GCU Sweet and Full of Grace at 9:31 AM on January 1, 2018 [21 favorites]


The future’s so bright, I gotta graft VR shades directly to my brain stem.
posted by The Card Cheat at 9:32 AM on January 1, 2018 [8 favorites]


If AI = capitalism, do you really want capitalism in your head? Capitalism gives priority to the consolidation of ownership above all else, who is going to own that part of you?

People have really internalized the platitude that "technology is just a tool and it's up to us to choose how to use it". But it is not so simple. Technology develops within a system that is committed to maximizing efficiency and control, and it tends to advance those ends.
posted by thelonius at 9:33 AM on January 1, 2018 [21 favorites]


Humans need to be plugged in so we can beam ads right into their brains - Would you like a premium BCI experience? Click here for an ad-free hour.
posted by rmd1023 at 9:38 AM on January 1, 2018 [1 favorite]


New year resolution: Stop making dents in the universe.
posted by Dumsnill at 9:45 AM on January 1, 2018 [5 favorites]


musk is...
ozymandias
presteign
ahab
tyrell

mefi fave-winner gets a free brain-buddy implant:
elon, ray k, nakamoto, and jaron lanier walk into a bar...
posted by j_curiouser at 10:03 AM on January 1, 2018 [2 favorites]


who doesn't want brain surgery every two years to not become disconnected from society?

Oh, don't be silly. The brain surgery will only be required when the form factor changes, so nothing at all like cell phones - those things never change size. But if they do shrink, you'll get some space back in your skull! You could keep your car keys in it or something.

Yes, the BCI will exist, but it still won't be compatible with your car, front door lock, or tv remote. Your TV will still have an IR remote (and a second one for the cable box - one won't be able to change the channel, the other the volume). Your thermostat will work with it, but by that time all thermostats will be echo dots and no longer be capable of controlling the heat in your house. Your refrigerator will order milk whenever you think about it, leading to a runaway cognitive feedback loop.

And that's how the BCI kills you. Too much milk.
posted by mrgoat at 10:05 AM on January 1, 2018 [16 favorites]


"Human beings must have economic value to have any value at all," continued Musk. "More specifically, they must be economically valuable to me."

Is there a citation for this? I'm not seeing it in the article.
posted by Going To Maine at 10:15 AM on January 1, 2018 [1 favorite]


Look harder.
posted by thelonius at 10:19 AM on January 1, 2018 [12 favorites]


So it turns out Elon Musk is a serial port killer.
posted by srboisvert at 10:37 AM on January 1, 2018 [5 favorites]


Musk oil is the new snake oil.
posted by davebush at 10:39 AM on January 1, 2018 [1 favorite]


Ugh, dude, you promised you'd be making 500K Model 3s a year and last month you made 5K. Meanwhile, transportation now accounts for more CO2 per year than electricity generation. FOCUS ON MAKING YOUR DAMN ELECTRIC CARS PLEASE.
posted by gwint at 10:40 AM on January 1, 2018 [9 favorites]


Look harder

Ok, he didn't say it.
posted by Going To Maine at 10:43 AM on January 1, 2018 [1 favorite]


He's a rich person in 2018. He's not directly saying he's going to eat the brains of the poors but he's totally going to eat the brains of the poors.
posted by Artw at 10:46 AM on January 1, 2018 [3 favorites]


The Overton window on rich people has shifted to the point where not actively trying to murder/enslave us all is the new philanthropy.
posted by Artw at 10:55 AM on January 1, 2018 [39 favorites]


This seems to be closely related.
posted by NoMich at 11:03 AM on January 1, 2018 [8 favorites]


wot if ya mum was a mobile?
posted by indubitable at 11:04 AM on January 1, 2018 [3 favorites]


I'll believe Elon Musk can solve AI when he can solve male-pattern baldness.
posted by jonp72 at 11:16 AM on January 1, 2018


“Your mom’s a cyborg.”
posted by octobersurprise at 11:17 AM on January 1, 2018 [1 favorite]


So I RTFA, and... I don’t get all the hate? For all the discussions we’ve had about UBI, Capitalism is still the prevailing economic model, and with automation and AI eliminating tons of jobs over the next few decades, BCI does seem like a likely next step to keep a larger number of humans in the workforce.
posted by KGMoney at 11:19 AM on January 1, 2018 [3 favorites]


I'd start screaming PornHub comments while in line at the Post Office.

You can typically get this at the Post Office without brain surgery
posted by Ray Walston, Luck Dragon at 11:27 AM on January 1, 2018 [11 favorites]


I don't think it's hate so much as annoyance with rich people having an expensive solution to a difficult problem.
posted by Dumsnill at 11:27 AM on January 1, 2018 [1 favorite]


I don’t get all the hate?

Because it’s at best utopian nonsense and at worst a human atrocity to suggest that people need to become cyborgs to “remain relevant.”

Not that I have anything against body mods. I support everyone’s right to get a RAM upgrade in their dick, if they want.
posted by octobersurprise at 11:33 AM on January 1, 2018 [25 favorites]


You all just a bunch of luddites. I for one am looking forward to my new iBrain.
posted by cjorgensen at 11:43 AM on January 1, 2018 [3 favorites]


I don't think it's hate so much as annoyance with rich people having an expensive solution to a difficult problem.

speaking for myself: no, it's hate
posted by entropicamericana at 11:55 AM on January 1, 2018 [14 favorites]


Well ok, it's hate.
posted by Dumsnill at 12:01 PM on January 1, 2018 [3 favorites]


oh my god fuck elon musk

i didn't even read the article, just... fuck elon musk
posted by sockermom at 12:08 PM on January 1, 2018 [16 favorites]


More and more, it seems like the only thing the Unabomber forgot to consider is that we might blow ourselves up before we complete our enslavement to technology.
posted by Edgewise at 12:10 PM on January 1, 2018 [1 favorite]


and with automation and AI eliminating tons of jobs over the next few decades, BCI does seem like a likely next step to keep a larger number of humans in the workforce.

Well, yeah...except for the tiny little detail of them not actually being human any more...
posted by sexyrobot at 12:12 PM on January 1, 2018


Dunno. I think this is yet another reasonable challenge Musk is going after. The difference is, we generally all see a problem as unsolvable, and Musk just rewrites the problem to make it solvable.

No, the thing is, he's defining the problem according to his own irrational prejudices.

There are obvious solutions to the problems posed by individual automobiles and autonomous automobiles do not solve those problems, they merely push them down the road. By now it's well known that Musk has issues with sharing space with icky people ("serial killers," even!) so this seems to be driven by his personal pursuit of wealth and his profoundly anti-social personality. As far as the serial killer thing goes, you'd think a renowned genius would have a more quantitative insight into the risks that travelers face (and a more enlightened response to people who point it out than, "Idiot.")

I would rather live in Elon Musk's future over Trump's any day of the week.

The good news for you is that these two futures are complementary.
posted by klanawa at 12:13 PM on January 1, 2018 [14 favorites]


So yes, he is full of hot air to a degree, but the reality is he sees a problem and has a vision to solve for it unconventionally.

I would rather live in Elon Musk's future over Trump's any day of the week.


Ok, I have now calmed down. The thing about this is... these are both pretty much the same future. Musk is your garden-variety cis white tech bro who thinks that the technological future is a wonderful utopia where all of the problems that other cis white tech bros experience are solved,* without any regard for the real (and dare I say, more important) problems that marginalized people face. And because he does not think about those problems or even give them any regard, those problems are only magnified and become exponentially worse.
posted by sockermom at 12:13 PM on January 1, 2018 [27 favorites]


There is a limit to how long I want to stay relevant.
posted by Dumsnill at 12:14 PM on January 1, 2018 [1 favorite]


It's incoherent because relevance is like a sorites paradox, where the vagueness of the word relevance confuses the whole argument. In the present "relevance" means "survival", in that it means "relevant to some employer/market/metabolic requirement". When automation makes survival trivial and universal, which is what we're talking about when we talk about humans "losing relevance", then what does relevance come to mean, and why do we want it?
posted by Horkus at 12:15 PM on January 1, 2018 [5 favorites]


Your wife wants to know why your mind-controlled Tesla always takes you to Kate's house. We were supposed to be picking up the kids.
posted by adept256 at 12:20 PM on January 1, 2018 [2 favorites]


I mean, why should humans make drastic and potentially very unsafe changes to our bodies in order to accommodate a human designed and implemented economic system? Wouldn't it make more sense to shift the systems so they respond to human needs rather than the other way around? Oh wait, our democracy has been hideously corrupted/the current system is actually about prioritizing the profit of the wealthy and of corporations over the well-being and even the lives of everyone else.

So, while Musk's argument that humans must get cybernetic implants might make economic sense, from any ethical perspective that isn't Randian, it's hideous and absurd and highlights the profound way that the status quo demands human beings maim and contort themselves--physically, emotionally, and intellectually--to serve remorseless abstractions. And the subtle and relentless dynamic in which the countless other potential ways of organizing society, of molding economic systems to varying human-chosen priorities, are rendered invisible/are disappeared.
posted by overglow at 12:25 PM on January 1, 2018 [29 favorites]


Musk is so very Edison.
People praise him as a genius (seriously, some people do) but he's a rich guy who hires other people to do stuff.
He gets a ton of credit for absolute nonsense.

I think you could just read sci-fi novels and pick out the good bits and Musk will come along and claim to have invented them in a few years' time.
Hyperloop (to pick the most visible example if you're a transport engineer, which I am) is an age-old idea with well-understood pros and cons. But of course, after Musk decided he invented it people who don't know how transport works will mention it at parties all the time. Infuriating.
Exocortex stuff has been and is studied a great deal by a bunch of people (mesh) but you wait and see. In six months time it'll be "Oh look at this genius idea Elon Musk has had, what a genius!".

I guess there are worse things to do with your billions, but you know what would be better? Progressive taxation, industry regulations and significant investment in public science. That would be a better thing.
posted by Just this guy, y'know at 12:25 PM on January 1, 2018 [20 favorites]


Take self-driving cars. Think about the typical problems that self-driving cars are supposed to help solve: the main arguments I hear (and I teach a class where we devote a week to discussing self-driving cars, so I've heard a lot of arguments about this) are: (1) Self-driving cars will allow us to be more productive; rather than wasting time driving, we will be able to work while we transport ourselves from home to work; and (2) We won't have to worry about distracted/drunk/texting whatever drivers, so there will be fewer accidents.

It is interesting that it takes a while for people to get to the "oh, people who are visually impaired will be able to autonomously get around in these vehicles" argument. Recall what happened when Robert Moses planned New York City: "he was also an avowed racist who did everything he could to punish and exclude people of color who lived in New York, and the legacy of his architecture-level discrimination lives on in the city today." We create systems that reflect our own values, or lack of values. We create structures and institutions that proliferate our biases. Self-driving cars are going to proliferate our own biases--institutional and structural racism and sexism. When I hear people talk about self-driving cars and ethics, they always talk about the "trolley problem" as if this is the only ethics problem that comes from self-driving cars. There are so many other ethical issues to consider!

I have diverged a bit, because I am aware that this article isn't about self-driving cars; this is just an example of the way that we tend to think about problems that have technological "solutions" - we don't think about all of the problems, or even the most important problems, especially when an able-bodied, cis, white man like Musk is the one directing the conversation.

Well, yeah...except for the tiny little detail of them not actually being human any more...
Sure, I'll be the one to say it: We haven't been human since we discovered fire. We have never been modern. People and technology are inextricably linked entities: hybrids, cyborgs, whatever you want to call it. I'm sitting here talking to you, but you are not even an individual - I'm "talking" to and with and among a bunch of different actors: language, a series of tubes, "text" on a "screen," people reading Metafilter all over the world, web crawlers, search engines, etc. I'm talking to you through time.

So joke's on you, Musk: we're already cyborgs.
posted by sockermom at 12:29 PM on January 1, 2018 [23 favorites]


I think in Musk's context, "relevant" means "deserving of being allocated the resources necessary for survival". Automation making survival trivial doesn't mean it will be universal - some amount of people will still be asking the question, "What has that person done to deserve the resources allocated to them?"

Sure, in a truly post-scarcity society, the answer is "they exist" - because there's no competition. But how do we get from scarcity, to less scarcity, to equilibrium scarcity, to post-scarcity? How does a person remain "relevant" in the transition? That's the really hard problem.

Elon Musk's answer appears to be "jam computers in your brain and be something other than human". But that presupposes a whole lot of things about who would be able to, economically and socially, and it's not a pretty thought experiment, without even getting into the question of whether it is right to disadvantage people who might not want to.

If some piece of technology becomes necessary to, say, get a decent job, it's functioning much the same way some college degrees do now - a class marker that you're the sort of person who can pay for it. It does absolutely nothing to solve the actual problem of how do we allocate resources to people who need them.

We already have the post-scarcity problem right now, in that there is a massive amount of wealth (resources) tied up with people who think they deserve all of it and more, and aren't using it, while there is a tremendous amount of people who need those resources, but somehow aren't considered to "deserve" them, in the capitalist economic model.

Jamming cell phones in your brain isn't going to fix that any more than any other technology we've ever developed has.
posted by mrgoat at 12:36 PM on January 1, 2018 [14 favorites]


it's hideous and absurd and highlights the profound way that the status quo demands human beings maim and contort themselves--physically, emotionally, and intellectually--to serve remorseless abstractions

Yeah, “Rise of the Cybermen” isn’t an RFP.
posted by octobersurprise at 12:38 PM on January 1, 2018 [6 favorites]


I'm used to seeing this kind of thinking from the autonomous vehicle proponents (I'm embedded in that world; I study what drivers in autonomous vehicles can perceive, so I get lots of engineers telling me that the human doesn't matter, or that what they think the human can do matters more than what the human can actually do). Yay for engineer's disease.

Even understanding how to make a useful interface between the brain and an external device in the way Musk is imagining is a horribly difficult problem. The closest we've gotten for sensory input has been things like retinal implants, which have such paltry resolution that they're pretty much a joke on the scale of vision (e.g., your lowest photoreceptor density is in the thousands of photoreceptors per mm^2; the best implants have, at most, 100 photoreceptor-equivalents over the entire implant). We're better at going the other way, sending output from the brain to control a prosthetic limb, but control isn't sensation.
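
(A rough back-of-the-envelope comparison of those two figures; the photoreceptor density, implant footprint, and electrode count below are illustrative assumptions, not specs for any real device.)

# Illustrative comparison: photoreceptors over a patch of retina vs. a
# present-day implant's electrode count. All numbers are assumptions.
photoreceptors_per_mm2 = 3_000   # assumed low-end density ("thousands per mm^2")
implant_area_mm2 = 10            # assumed implant footprint
implant_electrodes = 100         # "at most, 100 photoreceptor-equivalents"

natural_pixels = photoreceptors_per_mm2 * implant_area_mm2
print(f"Photoreceptors over that patch: {natural_pixels:,}")
print(f"Implant electrodes: {implant_electrodes}")
print(f"Shortfall: roughly {natural_pixels // implant_electrodes}x")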

Musk seems to think that this is a problem like vehicular automation, where it's (mostly) a function of good enough computer vision algorithms, good enough sensor packages and fast enough computers, but where everything is fundamentally a problem of making different computers talk to each other. Unfortunately, while it's the current analogy, your brain is no more like a computer than the sleeping cat next to me is like a parked car. Making the kind of science fiction interface he's imagining is so far beyond our current understanding of the brain that it's not even a bad joke.

I wish we knew enough about the brain to even contemplate this kind of direct engagement with it, but we're decades of work, at a minimum, away from that. A century and more of neuroscience research, if it's taught us anything (and it damn well has), has taught us how little we really know.
posted by Making You Bored For Science at 12:45 PM on January 1, 2018 [23 favorites]


LOL.

No awareness of the implications of emergent antibiotic resistance for casual elective brain surgery ...
posted by cstross at 1:07 PM on January 1, 2018 [7 favorites]


What do I mean? The Boring Company took a multi-billion-dollar construction problem of tunnel size and redefined it: a smaller tunnel diameter with system-controlled speed (a sled), eliminating man-made accidents.

This is nonsense. There is no problem of “tunnel size”, because existing applications of tunnel boring machines (TBMs) are mostly for large-scale transit projects. Elon Musk redefined the problem all right, in that his company shrank the diameter of the tunnel because he’s envisioning single-occupancy vehicles travelling through them, not trains. And the reason that current tunnels are used for transit and not SOVs is because SOVs don’t scale.
posted by Automocar at 1:18 PM on January 1, 2018 [13 favorites]


I wish the more informed/intelligent people in society (and I do think of metafilter as one of the better proxies I have available to me for that set) would stop clutching pearls over some of the things that are obviously coming down the pipe (maybe not on Musk's timeline, but come on: it's coming at some point) and start thinking about how to do it in a way that works out for us. Musk isn't driving this stuff; he's one of the people trying to make sure that when it does happen (as it inevitably must unless we somehow magically stop progressing technologically as a global society), it happens in a way that is workable.

If you think there's any way to stop machine/brain interfaces from coming or that the early initial risks involved will provide any significant deterrence for the people seeking the benefits, you are nothing less than deluded.
posted by lastobelus at 1:39 PM on January 1, 2018 [3 favorites]


Elon Musk is a tedious self-marketer. His claim to fame is raising VC capital and employing lots and lots of smart people for less than they are worth and with a pooooooooooor record of gender equity.
posted by Existential Dread at 1:45 PM on January 1, 2018 [11 favorites]


Ever since I saw that episode of Nova where they showed a demo of the ability to take vision off of someone's optic nerve and display it on a screen, I've been waiting for this kind of stuff to develop. As a software person, I don't really trust anything about it other than its inevitability if we get to the point where you can have bi-directional interaction with the optic nerve. The Google Glass for your Brain type devices and apps are going to be extraordinarily compelling.
posted by feloniousmonk at 1:49 PM on January 1, 2018


Elon Musk is a tedious self-marketer.

I don't really care whether this is true or not. If it is, do you believe that somehow means brain computer interfaces don't need to be considered, because Elon Musk talked about them?
posted by lastobelus at 1:51 PM on January 1, 2018


If it is, do you believe that somehow means brain computer interfaces don't need to be considered, because Elon Musk talked about them?

Absolutely not. What I believe is that what Elon Musk has to say about them should be regarded with extreme suspicion. He is a very wealthy man in a position of extreme privilege, and his expertise is in raising money and making splashy claims. From the article:
He said that a “merger of biological intelligence and machine intelligence” would be necessary to ensure we stay economically valuable.
I have extreme problems with that statement. Economically valuable to who? To the oligarchs currently running our societies into the ground? We as humanity must define the value in our existence and in our economy. I shudder to think what economic value looks like if we are forced to submit to a private company's brain implants just to be able to feed ourselves.
“Musk’s goals of cognitive enhancement relate to healthy or able-bodied subjects, because he is afraid of AI and that computers will ultimately become more intelligent than the humans who made the computers,”
I'm not sure if Elon Musk really understands AI, if he's afraid of it. AI is advanced computing, but it's not cognition as we know it. Also, the unexamined ableism here is pretty gross. Neural interfaces are critical for people with bodily impairments or disabilities, but this comment appears to dismiss them out of hand in favor of whatever 'Google Glass for rich people brains' bs Elon and Zuck are peddling.
posted by Existential Dread at 2:10 PM on January 1, 2018 [15 favorites]


Hey, Elon?

You go first…
posted by Pinback at 2:23 PM on January 1, 2018 [1 favorite]


If it is, do you believe that somehow means brain computer interfaces don't need to be considered, because Elon Musk talked about them?

In the way Musky talks about them, yes, you can forget about them. It's all sci-fi nonsense of the worst sort, of a similar worth to all the giant O'Neill space colonies of the seventies we didn't end up building. Those were just the ultimate expression of white flight, a Californian suburb in the sky with no wildfires, earthquakes or Black people.

Musky's dream meanwhile is basically E. M. Forster's The Machine Stops, where everybody is jacked into the mainframe while zooming through hypertunnels in their own personal transportation pods. It's a fear of poor people in general rather than Black people especially, so perhaps that's an improvement?

Meanwhile, in the real world, brain computer interfaces are the stupidest thing you can think up in a world where your wifi connected tea kettle is mining bitcoins for a Russian crime syndicate while your light bulbs have stopped working because they can no longer talk to the servers of the company you bought them from.
posted by MartinWisse at 2:27 PM on January 1, 2018 [18 favorites]


As a recovering neuroscientist I can certainly say that brain-computer interfaces are so experimental that the idea that they would be superior to current brain-body (mouth, fingers, etc) body-machine interfaces is laughable. But the really risible part is that elective brain surgery is a no-big-deal thing that everyone is going to do. It's seriously a big deal, which is why brain implants have only been put into people with intractable neurological conditions (epilepsy, Parkinson's, etc). Be prepared for high death and disability rates if this becomes mandatory comrade. (Maybe that's a feature, not a bug?)

Or maybe the really risible part is the idea that a successful implementation of this nonsense would actually help people in the post-scarcity AI capitalism hellscape he's envisioning. Like you get this implant and now you can do what? Drive a taxi instead of an AI? Serve burgers at McDonalds instead of the vendbot? Comb through legal records to get material relevant to a given case? This nonsense doesn't address the problem, which is that AI replaces humans by doing the job as well or better but much cheaper, and no implant will make you cheaper than an AI.

Now, universal guaranteed income and healthcare would solve the problem nicely, but somehow that part of the Culture series doesn't appeal to Musk.
posted by Humanzee at 2:29 PM on January 1, 2018 [15 favorites]


things that are obviously coming down the pipe (maybe not on Musk's timeline, but come on: it's coming at some point)

It takes a remarkably naive view of science and technology to believe that some technological advance is just destined, like the working out of the Hegelian Spirit, to happen.
posted by octobersurprise at 2:37 PM on January 1, 2018 [7 favorites]


A few years ago I pranked people by claiming I was on the list for the GIB (Google implant beta) -- what was funny was that a few months after I'd been making that silly fake claim, one of the Google guys explicitly stated in a speech that there was no such thing in the works or even planned. I certainly had not the slightest influence but was keyed into the tech zeitgeist. And it was a joke; I'd never thought it would be possible within the next couple hundred years... but scanning this thread and seeing the preponderance of "man will never fly" comments, well, I'll be looking for that beta sign-up soon.
posted by sammyo at 3:17 PM on January 1, 2018


They laughed at Galileo and they laughed at Einstein, but they laughed at Bozo the Clown, too.
posted by octobersurprise at 3:51 PM on January 1, 2018 [6 favorites]


Musk moves by a vaulted privilege and makes considerable gains from doubting his knowledge is superior to a consensus of engineers from whom he takes a plan to upend past progress. That an electric car was feasible, but resisted by management motivated by nothing other than imminent bonuses and pension packages, was knowledge for decades. The home battery is an undisguised and useful expansion of a grid.

He has the heart of a good kid and wants to better the world, but doubting his knowledge and exposing his comprehension are not the same thing, at all. He's the generation to follow the likes of Dean Kamen who tinkered with their hands. Musk's generation? They're more prone to mistaking the map for the territory and overstating vision for past accounting, more prescriptive than descriptive, and there's no excuse for it, but the reasons are most plain to educators who have witnessed the devaluing of liberal arts, all of the Chicago school's accomplishment (to cite a representative example) that cannot escape that perspective and perspicacity are vulnerable and require audit by terms contrary to the perceived immediacy of advancement.

He collaborates, but plagued by an impatience to simultaneously render and run a razor's edge. Casting to the future is an intuition he explores and was likely always encouraged to do, but over an arc, bears sensibilities inversely proportional to reluctance and commitment to sustain demonstration. I'm sure he's exhausting to inform in terms of audit and edit. He's a surfer prone to avoid study of the reef.
posted by lazycomputerkids at 3:52 PM on January 1, 2018 [1 favorite]


If you think there's any way to stop machine/brain interfaces from coming or that the early initial risks involved will provide any significant deterrence for the people seeking the benefits, you are nothing less than deluded.

so what you're saying is "resistance is futile?"
posted by entropicamericana at 4:05 PM on January 1, 2018 [8 favorites]


As a recovering neuroscientist I can certainly say that brain-computer interfaces are so experimental that the idea that they would be superior to current brain-body (mouth, fingers, etc) body-machine interfaces is laughable. But the really risible part is that elective brain surgery is a no-big-deal thing that everyone is going to do.

All Musk is doing is regurgitating Cyberpunk 2020, Cybergeneration, and Shadowrun, where neural interfaces can be installed for a couple thousand dollars at your local mall. And back in the day when I pointed out that the most likely consequence of neural implants was not Humanity loss but encephalitis, people handwaved it away. I was also pointing out the likely consequences of linking one's brain to a corporate internet decades ago.

And now, now that it's Musk talking about it, NOW it's suddenly a highly difficult, problematic thing. Fuck.

I bet if it was Walter Jon Williams talking about the white male corporate techno-paradise, there'd be a lot more cheering and a lot less ranting about Capitalism. Or maybe not. There's a lot more toxic nostalgia for village life around here lately.
posted by happyroach at 4:28 PM on January 1, 2018 [4 favorites]


How much efficiency is actually expected from a direct BCI? The current interface is limited by a combination of cognition and manual dexterity - I think the record for actions-per-minute is 800+, set by a top Starcraft player. A lot of this is repetition and reflex, however.

Remove manual dexterity from the equation and you can play Starcraft a bit better I guess. What actual jobs are out there that would benefit from this? I don’t get it.
posted by um at 4:34 PM on January 1, 2018


In the far future, I guess the most significant enhancement would be to have vast stores of knowledge "in your head". Information search and filtering as a "recall reflex" is something that would, in fact, be an improvement to an existing capability. Telepathic-ish communication and perfect recall information storage would be part of that.
posted by smidgen at 5:00 PM on January 1, 2018


This may happen, but it will not happen for a very long time. Even the AI "experts" are talking out of their ass. We can't even recognize and generate speech as consistently and reliably as a human can in all the different environments humans operate in, and that's the showpiece for "deep learning" et al.
posted by smidgen at 5:03 PM on January 1, 2018 [1 favorite]


... I just want an ereader implant on the inside of my eyelids. Can I have that without the robot AI dystopia?
posted by ErisLordFreedom at 5:14 PM on January 1, 2018 [3 favorites]


There are companies working on putting displays on contact lenses right now, but there are a host of issues to work out.
posted by Existential Dread at 5:31 PM on January 1, 2018 [2 favorites]


I bet if it was Walter Jon Williams talking about the white male corporate techno-paradise, there'd be a lot more cheering and a lot less ranting about Capitalism.

Walter Jon Williams writes sci-fi novels. But if he really wanted me to believe that our choices were cyborgification or irrelevance, I’d be equally skeptical of him, too.
posted by octobersurprise at 5:48 PM on January 1, 2018 [1 favorite]


Isn’t information search just Googling, though? And telepathy is like a phone call? Like, being able to do those things without having to move my hands around I guess is an improvement - definitely an improvement for the physically impaired, which will be me when I’m decrepit.

The bottleneck isn’t the interface, it’s how fast people can understand stuff and turn that understanding into useful action. Computers are faster at certain tasks because we don’t expect them to understand what they’re doing or why.
posted by um at 5:49 PM on January 1, 2018 [4 favorites]


He's mainly cribbing on Banks if the past is anything to go by.
posted by Artw at 6:20 PM on January 1, 2018


Sure. No one would strap themselves into the 2000 kg chunks of metal that kill more people per year than all global wars just to go get their groceries either.

Except, they do.
posted by lastobelus at 7:24 PM on January 1, 2018 [2 favorites]


“People drive cars, therefore humans are destined to become cyborgs” is not the most compelling argument I’ve ever encountered.
posted by octobersurprise at 8:01 PM on January 1, 2018 [6 favorites]


It takes a remarkably naive view of science and technology to believe that some technological advance is just destined, like the working out of the Hegelian Spirit, to happen.

Assuming you don't have a religious/magical belief about how human brains work, and given what has already been experimentally demonstrated, what is there about human-brain/computer interfaces that isn't inevitable given enough iterations of engineering refinement? It's complex -- very complex. So is putting 20 billion electronic devices on a postage stamp.

In a single-digit number of years, we'll be putting 40 billion electronic devices on a postage stamp. We may or may not get to 80 billion, but then again we haven't really started using 3 dimensions yet. The swimming pool has a visible amount of water in it* -- why can't people see it?

*refers to an article a couple of years ago that made an excellent analogy for mankind's approach to the thermal computing density necessary to perform AI: filling a swimming pool by putting a drop of water in one day, 2 drops the next, etc. It takes 39 days to fill an Olympic swimming pool. On day 20 the bottom of the pool doesn't even look wet.
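
(For anyone who wants to poke at the arithmetic behind that analogy, here is a minimal sketch; the drop and pool volumes are assumptions, so the exact day count shifts depending on what you plug in.)

# Doubling-drops analogy: 1 drop on day 1, 2 on day 2, doubling daily.
# Drop volume (~0.05 mL) and pool volume (2,500,000 L) are assumptions.
DROP_L = 0.05e-3       # litres per drop
POOL_L = 2_500_000     # nominal Olympic pool volume in litres

total, day = 0.0, 0
while total < POOL_L:
    day += 1
    total += DROP_L * 2 ** (day - 1)   # drops added on this day
    if day == 20:
        print(f"Day 20: {total:.1f} L ({100 * total / POOL_L:.5f}% full)")
print(f"Pool overflows on day {day}")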

My personal estimation is that beating Go is a "the bottom of the pool is wet" moment.

What does that have to do with B/CI? As AI becomes capable of doing meaningful things there are going to be people who want to be able to also do those things. They're going to really, really want to be able to do those things.

It's obviously going to be very difficult.

But what case do you have that it's "warp drive" or "time travel" difficulty (ie, requires a breakthrough that may not even be physically possible) as opposed to 20 billion transistors on a postage stamp difficulty (requires lots and lots of expensive engineering iteration)?
posted by lastobelus at 8:06 PM on January 1, 2018 [2 favorites]


So, people will buy a brain implant that allows them to play Go competitively against a Go AI? Are the people actually playing Go in that instance, or are they just actuators moving tiles on a board according to the dictates of the implant?
posted by um at 8:31 PM on January 1, 2018 [1 favorite]


I'm not implying playing Go is a highly desirable AI augmentation. I'm arguing it's a somewhat reliable harbinger of such.
posted by lastobelus at 8:36 PM on January 1, 2018 [2 favorites]


From the Rolling Stone profile:

I explain that needing someone so badly that you feel like nothing without them is textbook codependence.

Musk disagrees. Strongly. "It's not true," he replies petulantly. "I will never be happy without having someone. Going to sleep alone kills me." He hesitates, shakes his head, falters, continues. "It's not like I don't know what that feels like: Being in a big empty house, and the footsteps echoing through the hallway, no one there – and no one on the pillow next to you. Fuck. How do you make yourself happy in a situation like that?"


I'm trying to imagine a Muskian future that combines his extreme loneliness with the prospect of cyborgs.
posted by mecran01 at 9:37 PM on January 1, 2018


In a single-digit number of years, we'll be putting 40 billion electronic devices on a postage stamp. We may or may not get to 80 billion, but then again we haven't really started using 3 dimensions yet.

You know, people like to trot out figures like that, but one tiny area of advancement doesn't really mean jack shit in the larger context. I mean gee, you can cram more transistors on a chip. Whoop-de-doo.

Meanwhile, you are using interface systems that are fundamentally unchanged from 30 years ago. Yeah, that's right, look at the history of mice and touch interfaces. Programming is similarly stalled on important applications: Word and Excel, for example, are forced to add useless, counterproductive features because they were finished ages ago. Windows 10 is basically Windows XP, and Apple Whatever is the same shit as it was when I was in college.

We keep adding bells and whistles and things have gotten smaller, but as fundamental breakthroughs go, progress is, if anything, slowing down and becoming more and more incremental.

So joy, 80 billion electronic devices on a chip. You'll still be using a mouse.
posted by happyroach at 11:31 PM on January 1, 2018 [3 favorites]


Yeah, seconding the "we're already cyborgs". It is all about where you and your consciousness are; actual physical hookups between meat and machine are silly. Is your meat hooked up to your consciousness? How so? We can (and do) hook up machines to enhance your consciousness in just the same manner...

Even when you get beyond the AI/thinking/cyborg part and into the mechanical/physical/cyborg, bionics and direct integration are not needed. Drone pilots, undersea remote sub operators, and anybody who has played Counterstrike or Quake know that you can easily be elsewhere even while lugging around your unreformed, loose, floppy meat.
posted by Meatbomb at 11:51 PM on January 1, 2018 [3 favorites]


He has the heart of a good kid and wants to better the world

cf. FOCUS ON MAKING YOUR DAMN ELECTRIC CARS, he's working on a couple of things that are probably worthwhile - but all this jumping around between flashy futurist visions in fields he half understands is just exasperating
posted by atoxyl at 12:45 AM on January 2, 2018 [1 favorite]


> feloniousmonk:
"Ever since I saw that episode of Nova where they showed a demo of the ability to take vision off of someone's optic nerve and display it on a screen, I've been waiting for this kind of stuff to develop. As a software person, I don't really trust anything about it other than its inevitability if we get to the point where you can have bi-directional interaction with the optic nerve. The Google Glass for your Brain type devices and apps are going to be extraordinarily compelling."

I just think it would be awesome to have a pair of "shades" with a tunable, stereo-optical camera array that can dump straight to my implant. WHY should I have to buy expensive glasses EVERY YEAR? And being able to kick it into telephoto or macro range at will would be AWESOME!
posted by Samizdata at 1:49 AM on January 2, 2018


The permanently visible ads and having to pay the subscription to keep it running would suck though.
posted by Artw at 6:36 AM on January 2, 2018


things are going great
and they're only getting better
i'm doing all right
getting good grades
the future's so bright
i gotta wear shades with a tunable, stereo-optical camera array that can dump straight to my implant
posted by entropicamericana at 6:44 AM on January 2, 2018 [4 favorites]


I used to fantasize about a BCI back when I was young and playing Shadowrun RPG and watching the original Ghost in the Shell and reading Neuromancer too. As time has marched on, I realized two things:

1. Nobody's gonna pay the outlandish prices required to get brain surgery and a massively complicated piece of tech implanted in their heads when they can do the same thing with their tablet or smartphone for a fraction of the cost, with way less hassle (and surgery).

2. Shadowrun, GiTS and Neuromancer were DYSTOPIAN!!! The world they were describing, despite all the 'neat' tech, was a hellscape for everyone involved, even the infotech cowboy heroes that seemed soo cool back then.
posted by some loser at 7:00 AM on January 2, 2018 [1 favorite]


what is there about human-brain/computer interfaces that isn't inevitable given enough iterations of engineering refinement?

If there was ever a perfect capsule summary of "engineer's disease" that sentence takes the prize. "If you assume that I'm right, how can I be wrong?" It's precisely this automatic assumption of all those iterations of engineering refinement and the ignorance or disavowal of any other variables apart from them that constitutes such scientific and technological naïveté.

But what case do you have that it's "warp drive" or "time travel" difficulty (ie, requires a breakthrough that may not even be physically possible)

At this point, I suspect that mind-machine interface may be more akin to projects like cheap and easy fusion power, large orbital habitats, or moon colonies—all projects touted as imminent over the last 50 years, none of which have materialized for a variety of technological and socio-political reasons. Any reply that they would have materialized if only the resources had been devoted to them, is a reply which rather makes my point: technological advances are not "inevitable" and there are variables besides "iterations of engineering refinement."

Even more to the point is the state of US public transportation and broadband, two things which could already be vastly improved by available technology, but which are instead, for reasons of human politics and social prejudices, inadequate and likely to remain inadequate or worse for the foreseeable future.

So I have no idea if mind-machine interface is possible or likely. It may be. But I'm extremely skeptical that it is in any way "inevitable." And I suspect any one who tells me it is of trying to sell me a bridge.
posted by octobersurprise at 7:25 AM on January 2, 2018 [9 favorites]


I don’t know much about the mind-machine interface, but it seems like farming, mass human migrations, geo-political strife, and rampant economic inequality, are all more imminent issues currently.
posted by gucci mane at 7:48 AM on January 2, 2018 [3 favorites]


engineer's disease

Also note (in TFA) the unsupported assumption that language is nothing but an I/O module, and one with "low bandwidth", for accessing independently existing thinking. That seems to me to be an assumption that is far outstripping what is known in neuroscience.
posted by thelonius at 8:09 AM on January 2, 2018 [4 favorites]


I'm okay with people needing to become cyborgs; well, evolution happens eventually. In fact, we'll take so long to develop strong AI that we should be exploring parallel clusters of human minds, if only to speed things up. I highly doubt parallelized humans could ever be a truly equitable phenomenon, so fine.

I do take serious issue with the sorry state of our (a) information technology and (b) mechanism design, though. If we do brain-computer interfaces like our existing technology, then all the "excluded" will merely be safely irrelevant while the "advanced" cyborgs will become prey animals whose entire existence gets enslaved to the "slow AI" predators we already have. Instead we need to replace those predators with more productive functions than "paperclip maximizing".
posted by jeffburdges at 9:17 AM on January 2, 2018


Musk just rewrites the problem to make it solvable.

What do I mean? The Boring Company took a multi-billion-dollar construction problem of tunnel size and redefined it: a smaller tunnel diameter with system-controlled speed (a sled), eliminating man-made accidents.


The fact that you deem the problem "solved" before a single line of working track is laid, much less a single paying passenger transported, pretty much sums up everything that's wrong with Musk fanboys (and much of what's wrong with Musk himself).
posted by praemunire at 9:32 AM on January 2, 2018 [9 favorites]


I've been dreaming of AR for memory augmentation, object recognition, etc, since I was a teenager. But actually plugging directly into my brain? No thanks.

Basically, I'd rather have a perfected Hololens than a computer plugged directly into my brain. It's not like the BCI really gets you that much relative to other command interfaces that bypass much of the processing delay in your peripheral nervous system.
posted by wierdo at 10:15 AM on January 2, 2018


If they haven't figured out the human nervous system well enough to just read off the top of our heads, and discuss via native energies, then they have no business going inside our heads.

Elon Musk will get rich when he realizes that his cars need a battery that weighs no more than a briefcase, that comes in at night and plugs into the regular 110 by the front door, or just inside the garage door, with a two-plug 110 double adapter. Then people who park on the street, and people in general, would have no reason not to use these cars. Then travelers could recharge in motel and hotel rooms, and so on and so on and so on. He is just thinking for our new upper-middle-class overlords, who are in the process of being destroyed by the current government.
posted by Oyéah at 10:29 AM on January 2, 2018 [2 favorites]


Two links on self-driving cars related to safety and racism:
Can Self-Driving Cars Protect Black People From Police Violence?

Barack Obama - 'Automated vehicles have the potential to save tens of thousands of lives each year.'

And, more directly related, WaitButWhy's post on the brain-computer interface, which addresses, I would guess, roughly 90% of the objections and counter-arguments in this thread:
Neuralink and the Brain’s Magical Future
posted by beatThedealer at 11:07 AM on January 2, 2018


guess again.
posted by some loser at 11:26 AM on January 2, 2018


Have you read the blog post? In full? It is a fairly long post, but there are very few things in this thread not addressed by Tim Urban's post.
posted by beatThedealer at 11:34 AM on January 2, 2018 [1 favorite]


Absolutely against letting someone who expresses themselves via Reddit cartoons put anything in anyone's brain.
posted by Artw at 12:16 PM on January 2, 2018 [5 favorites]


Why do people persist in assuming that saying something is probably inevitable is the same as saying it is easy? If an intelligent twelve year old starts playing chess, decides it's what they like best, has a source of funding, and dedicates the rest of their life to playing chess, it's pretty inevitable that they will eventually become a very good chess player, quite possibly among the top chess players.

Why is that the same as saying it's easy for a twelve year old to become a top chess player? It clearly isn't the same.
posted by lastobelus at 1:14 PM on January 2, 2018


Tim Urban's post contains an excellent example of an antithesis of engineer's disease: I'll call it "no biggie" disease. Gutenberg's invention -- which completely and utterly changed history -- seems like no biggie to even the least engineer-ish people. I'm pretty sure a lot of people have "no biggie" disease about AlphaGo.
posted by lastobelus at 1:21 PM on January 2, 2018


When I look at the various factors & social forces, to me it seems that they add up to the inevitability of both AI (not necessarily sentience, I think equating AI with sentience is juvenile stupidity -- I'm not even sure I believe sentience is a real thing!) and B/CI.

Now I don't know how long it will take, and I don't know how much engineering iteration will get directed at B/CI (but it should be clear to everyone that a shit-ton will be (and already is being) directed at AI).

But if you say you are confident that not enough engineering iteration will get directed at B/CI for it to become significant, or that that engineering iteration will fail because human brains are magic so interfacing with them will turn out to require impossible innovation, you are the one selling a bridge my friend.
posted by lastobelus at 1:32 PM on January 2, 2018


Maybe it'll take 50 or 100 more years for AI & B/CI to become something more than experimental curiosities. I very much doubt it, though. I think it'll take something more on the order of the time and effort it took to put a supercomputer in the pocket of every average Joe.
posted by lastobelus at 1:36 PM on January 2, 2018


What social media and algorithms have done in the last 5 years is horrible enough without speeding it up.
posted by Artw at 2:01 PM on January 2, 2018 [2 favorites]


The reason inevitability is problematic is because these discussions are deterministic, and technology is not deterministic. We are all actors in a giant network, people and tech alike. We are all on the same level. This is called "actor-network theory" and it's a central guiding philosophy in the study of sociotechnical systems, the sociology of science, and science and technology studies. There are literally whole fields of study devoted to discussing, in large part, the problems with technological determinism. Feenberg describes some of the problems with technological determinism better than I can in his article Subversive Rationalization: Technology, Power, and Democracy, which may interest some of the determinists in the "room."
posted by sockermom at 3:02 PM on January 2, 2018 [6 favorites]


I'm not even sure I believe sentience is a real thing!

Sometimes I get that feeling when I read the comments, too.
posted by octobersurprise at 3:37 PM on January 2, 2018 [1 favorite]


The reason inevitability is problematic is because these discussions are deterministic, and technology is not deterministic

I feel this rubric is needed more by those dismissing the idea that brain-machine interfaces will acquire some sort of significance in some sort of timespan meaningful to the people conversing in this thread than it is by me. Of course technology is not deterministic. Also, we could get hit by a meteor before anything gets developed. The twelve-year-old wannabe chess player in my analogy above might find out their brain simply can't grok chess at the world-class level, no matter how much of their life they dedicate to it. But it's not really likely.

We've entered the AI age. Probably. I think history will probably say AlphaGo + usable machine translation marked the start of the AI age. TensorFlow is just over two years old on GitHub and has 41,000 forks and 26,500 commits. Its GitHub ranking is 3rd for C++ and 5th for Python. That's a shit-ton of engineering iteration happening. If you think that's going to fade away into nothing, you either believe the human brain and/or intelligence is magic or you're trying to sell me a bridge. And if AI advances rapidly, I find it difficult to believe that brain-machine interfaces won't follow to some degree.

Also:

* we don't need to understand the brain (fully, or even well) to have brain-machine interfaces. We've already demonstrated that they are possible with the amount of understanding we currently have.

* we don't necessarily need neural lace or electrodes for significant brain-machine interfaces. How much engineering iteration has gone into EEG caps? We can fit 20 billion transistors in 4 square centimeters; currently available EEG caps have a few hundred sensors spread over 200-300 square centimeters -- there's a little room there (a rough back-of-the-envelope comparison is sketched below). How much viable interface bandwidth is there? We'll find out when we've tried exploiting it all. No encephalitis risk required.

* there was not a lot of point in aggressively pursuing brain-machine interfaces until now. As people pointed out, given how we currently communicate with computers, there was little gain to be had. AI is going to change that. The sorts of things we can communicate to our machines are going to expand rapidly. So will the attendant gains available for doing it more efficiently.
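A quick back-of-the-envelope Python sketch of that density comparison; the figures are the rough numbers quoted above (treated as loose assumptions, not measurements):

# Back-of-the-envelope density comparison; all figures are rough assumptions
# taken from the comment above, not measurements.
transistors = 20e9            # ~20 billion transistors
chip_area_cm2 = 4.0           # on ~4 square centimeters of silicon
transistor_density = transistors / chip_area_cm2      # ~5e9 per cm^2

eeg_sensors = 300             # "a few hundred" sensors on a cap
cap_area_cm2 = 250.0          # spread over roughly 200-300 square centimeters
sensor_density = eeg_sensors / cap_area_cm2            # ~1.2 per cm^2

print(f"transistor density: {transistor_density:.1e} per cm^2")
print(f"EEG sensor density: {sensor_density:.2f} per cm^2")
print(f"ratio: ~{transistor_density / sensor_density:.1e}x")  # ~4e9x, at least on paper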
posted by lastobelus at 4:21 PM on January 2, 2018


Yeah, we've entered the AI age. That's how we know it's shit.
posted by Artw at 4:30 PM on January 2, 2018 [1 favorite]


Of course technology is not deterministic.
Then... stop saying a bunch of technologically deterministic stuff? Have you read any of the articles that people have provided to try to help you and others flesh out your understanding? This thread is starting to remind me of my undergraduate seminars, where I scream about technological determinism for 70 minutes twice a week and then half the students turn in papers that start with "Since the dawn of time, technology has shaped mankind" and end with "And thus technology inevitably progresses forward."
posted by sockermom at 4:51 PM on January 2, 2018 [5 favorites]


We've entered the AI age. Probably.

New motto for 2018.
posted by octobersurprise at 5:22 PM on January 2, 2018 [2 favorites]


Technologically deterministic: in ten to twenty years we'll have neural laces that let us control computers directly. --> not something I've ever said

Not technologically deterministic: given what's been demonstrated experimentally and the current level of engineering interest, it seems very likely that AI and, as an adjunct, brain-machine interfaces will become culturally/economically significant in a reasonable time span (i.e., over the next generation, say) --> something in which I am quite confident. I don't know what the innovations will be, or what schedule they'll appear on, I'm just confident there will be innovations and that they will be significant. Cuz that's what happens when this much attention gets given to something.

Technologically deterministic: brains are hard, therefore there is no way we'll solve AI/BMI to a significant degree over the time span I care about --> what I hear people saying, and what sounds to me like trying to sell a particular bill of goods.
posted by lastobelus at 5:46 PM on January 2, 2018


I don’t want to wake up some morning to find that my brain refuses to pair with my car stereo.
posted by bonobothegreat at 7:12 PM on January 2, 2018 [1 favorite]


I don't know what the innovations will be, or what schedule they'll appear on, I'm just confident there will be innovations and that they will be significant.

Fusion is only 10 years away!
posted by Existential Dread at 9:15 AM on January 3, 2018 [4 favorites]


That’s Mr. Fusion to you!
posted by octobersurprise at 10:46 AM on January 3, 2018


Cargo dirigibles and space elevators.
posted by Artw at 11:38 AM on January 3, 2018 [1 favorite]


Flying cars, jet packs, Sealab 2020.
posted by octobersurprise at 12:00 PM on January 3, 2018 [2 favorites]


Isn’t information search just Googling, though? And telepathy is like a phone call?

Use your imagination, for god's sake.

Not any more than googling is like sorting through papers at your local university library or calling every relevant business in the phone book. Not any more than a phone call is anything like watching someone speak, show you diagrams, and draw things in real time over the internet.

Again, to be very clear, I'm not talking about having computer-assisted brains "soon" for any measure of soon -- but I do think something like it is inevitable (if society itself doesn't collapse). Maybe it won't be a brain interface as we imagine it -- maybe it'll be hooked up in a non-intrusive way -- certainly we should be able to pick up electrical signals going in *both* directions (see VR/AR interfaces, which is where I think a lot of this will be happening).

I really hate the repeated analogies to '50s sci-fi, because I feel like people who do that have just a complete failure of imagination. Yes, there are no flying cars and moon colonies in exactly the way the pulp novels and popular science depicted them. But there are good reasons why we don't have them in exactly that way, and they aren't the same reasons as each other, not to mention that the reasons will be different from why we won't have brain implants in exactly the way we conceive of them now. But it's not like we don't have any of these things at all...

We don't have flying cars, possibly for good reasons, but we do have flying buses. We don't have moon colonies, but we have at least 3 entities trying to make them a reality -- and, yes, we've landed on the moon and come back. Just like we don't have a universal knowledge base -- not everything is on the internet -- but it's damn close and getting better all the time.
posted by smidgen at 2:27 PM on January 3, 2018


Anyone who's unable to notice the difference in size, scope, and (despite y'all's aversion to the word) inevitability between the engineering efforts being put towards flying cars and those being put towards AI deserves the big surprise that's coming for them sometime in the next decade or two.
posted by lastobelus at 4:38 PM on January 3, 2018


It's literally a difference of several orders of magnitude.
posted by lastobelus at 4:40 PM on January 3, 2018


Anyone who's unable to notice the difference in size, scope, and (despite y'all's aversion to the word) inevitability between the engineering efforts being put towards flying cars and those being put towards AI deserves the big surprise that's coming for them ...

If I have offended thee O Rothko's Basilisk ...

I can't be bothered to go back and be absolutely certain, but I don't think anyone's categorically claimed that a mind-machine interface will never happen. Certainly I wouldn't be that person, because that would be dumb. I am skeptical of the hype surrounding this idea, and I'm skeptical of all of the hype surrounding Musk, but I'm agnostic on the possibility of any such actual technology itself at some indeterminate time in the future because, like, who knows, man? Maybe it'll happen, maybe it won't. I do find it remarkable, tho, that there are people who are so certain that it will happen that even the expression of a little skepticism elicits a "just wait till the AIs get home, you'll get yours."
posted by octobersurprise at 11:42 AM on January 4, 2018 [3 favorites]




This thread has been archived and is closed to new comments