Viv
April 30, 2015 5:59 PM

Siri talked only to a few limited functions, like the map, the datebook, and Google. All the imitators, from the outright copies like Google Now and Microsoft's Cortana to a host of more-focused applications with names like Amazon Echo, Samsung S Voice, Evi, and Maluuba, followed the same principle. The problem was you had to code everything. You had to tell the computer what to think. Linking a single function to Siri took months of expensive computer science. You had to anticipate all the possibilities and account for nearly infinite outcomes. If you tried to open that up to the world, other people would just come along and write new rules and everything would get snarled in the inevitable conflicts of competing agendas—just like life. Even the famous supercomputers that beat Kasparov and won Jeopardy! follow those principles. That was the "pain point," the place where everything stops: There were too many rules.
So what if they just wrote rules on how to solve rules?
The idea was audacious. They would be creating a DNA, not a biology, forcing the program to think for itself.
John H. Richardson for Esquire
posted by p3on (67 comments total) 25 users marked this as a favorite
 
Uh, the idea was not audacious.

Maybe 60 years ago, but not now.
posted by Tell Me No Lies at 6:09 PM on April 30, 2015 [22 favorites]


ctrl+f wolfram alpha
zero results
*sad trombone*

Intelligent agents have been hyped before. Even Siri is less innovative than you might think, and not new.

Still, more advanced AI taking it to another level always sounds cool, and a little scary.
posted by snuffleupagus at 6:13 PM on April 30, 2015 [6 favorites]


Reminds me of Wolfram Alpha. We'll see how well it works when it goes live.
posted by humanfont at 6:19 PM on April 30, 2015


There's no "just" about writing a program to write programs. Although similar approaches have succeeded before: Eurisko won a naval RPG tournament twice because its creator, Douglas Lenat, programmed not just heuristics about how to play the game but heuristics on how to design heuristics.
posted by Rangi at 6:20 PM on April 30, 2015 [4 favorites]


How does the quid pro quo work for this kind of breathless startup reporting? Are the reporters promised options or are the VCs steering them with later consulting offers?
posted by benzenedream at 6:23 PM on April 30, 2015 [20 favorites]


...So what if they just wrote rules on how to solve rules?

...He could see the beauty of the rule behind the rules, not a model but a metamodel. They had to define the problem in such a way that it could be solved without solving the problem

...Then the answer came to them—the glimmer of an answer, an elegant subplan that was like another little piece of DNA: Find the solution, it said, and stop there. "Intent representation," they called it. By latching the program to a goal, they gave it a kind of freedom.

The writer had been working for two weeks without success. Suddenly the solution occurred to him. It was as though a friend he hadn't seen since he was ten years old appeared at the breakfast table one morning to tell him exactly how to proceed: jot down some lightly paradoxical bullshit and refuse to elaborate.

After that, the article pretty much wrote itself.
posted by Iridic at 6:26 PM on April 30, 2015 [84 favorites]


An adventurer given to jumping out of planes and grueling five-hour sessions of martial arts

Soon he took responsibility for the computer architecture that made their ideas possible. But he also had a rule-breaking streak—maybe it was all those weekends he spent picking rocks out of his family's horse pasture, or the time his father shot him in the ass with a BB gun to illustrate the dangers of carrying a weapon in such a careless fashion. He admits, with some embarrassment, now thirty-one and the father of a young daughter, that he got kicked out of summer school for hacking the high school computer system to send topless shots to all the printers.

It's one of the eternal mysteries of tech writing that these are details that are meant to give us confidence that these people are likely to be successful at developing new ideas in AI.
posted by escabeche at 6:28 PM on April 30, 2015 [15 favorites]


Nice to see the idea of a basic income being praised by an aspirational publication like Esquire.
posted by infinitewindow at 6:33 PM on April 30, 2015 [1 favorite]


the article is typical tech breathlessness… adam cheyer is legit though.
posted by joeblough at 6:41 PM on April 30, 2015


How does the quid pro quo work for this kind of breathless startup reporting?

Well, they used to just have heuristics about how to pay the reporters, but now they have heuristics on how to design heuristics on how to pay the reporters.
posted by happyroach at 6:41 PM on April 30, 2015 [7 favorites]


these are details that are meant to give us confidence that these people are likely to be successful at developing new ideas in AI.

The people have character! They Think Different!
posted by Tell Me No Lies at 6:44 PM on April 30, 2015


It's one of the eternal mysteries of tech writing that these are details that are meant to give us confidence that these people are likely to be successful at developing new ideas in AI.

Nonsense, I have perfect faith in this person based on the fact that he is obviously Hiro Protagonist.
posted by Pope Guilty at 6:49 PM on April 30, 2015 [11 favorites]


Here I am, brain the size of a planet, and they ask me to find an awesome merlot.
posted by charlie don't surf at 6:49 PM on April 30, 2015 [43 favorites]


Can we just set up this entire life sublimation, thought evasion, life simulation, out on some semi-goldilocks zone rock, and use it as a decoy, if some other life forms come by? I know we will be in trouble for presenting as a coo coo clock, but at least it might buy us some time. By the time they use up all their fuel money on varieties of sandwiches and easy robot sucking machines, they might just find some alternate route home, in desperation and miss us entirely.
posted by Oyéah at 6:51 PM on April 30, 2015 [2 favorites]


This did get me thinking fondly of the early eighties, when Expert Systems were introduced to the business world as artificial intelligence that was useful because it didn't attempt to be general but rather focused deeply on a particular set of tasks and data.

I'm vaguely curious how many times that pendulum has swung, but google isn't telling me so screw it.
posted by Tell Me No Lies at 6:54 PM on April 30, 2015 [3 favorites]


Can we just set up this entire life sublimation, thought evasion, life simulation, out on some semi-goldilocks zone rock, and use it as a decoy, if some other life forms come by? I know we will be in trouble for presenting as a coo coo clock, but at least it might buy us some time. By the time they use up all their fuel money on varieties of sandwiches and easy robot sucking machines, they might just find some alternate route home, in desperation and miss us entirely.

And you think that we aren't this sublimation why?
posted by Tell Me No Lies at 6:55 PM on April 30, 2015 [6 favorites]


They think they're about six months from a beta test and a year from a public launch, hopefully with a two-year head start on their giant competitors

Seems like a lot of hype for something that you can't try out. Expert systems are not new, and natural language interfaces have been steadily improving. I have an Amazon Echo. When it works right, it's amazing; when it doesn't, it's annoying. The proof of the technology is in the implementation and the results. I'm sure researchers in this area are rolling their eyes hard at this article.
posted by demiurge at 7:15 PM on April 30, 2015


"Why aren't we this sublimation already?"

Oh, are you saying we are covering for those vent worms, or are we a distraction for what's under the ice on Ganymede? Where do the tacos come in?
posted by Oyéah at 7:16 PM on April 30, 2015


Why stop there? Why not write a computer program to solve the problem of writing a computer program to solve problems to answer voice queries? Otherwise you're only going to waste a bunch of time creating new metamodels every time you need your program to solve a different class of problems. Just write the metametamodel in the first place, dummy!

That sounds wasteful, though. What if you want to create a whole bunch of startups that solve innovative and previously unassailably difficult problems? All you need to do is create a metametametamodel to describe that process, and you're away!

Hmmm. Sounds like a turtles all the way up solution.
posted by Jon Mitchell at 7:19 PM on April 30, 2015 [9 favorites]


It's difficult to find the words to express just how much I hated this article.

It is astonishingly fawning and credulous without ever really having any real insight into or proper context regarding the subject matter. Self-regarding and shot through with self-pity about the impact of tech on journalism. It's not tech that's killing journalism, it's shitty journalism like this. It's astounding that the writer doesn't seem to want to apply any analytical rigour to some of the bigger questions raised, such as tech displacing traditional middle-class jobs, but instead uncritically parrots back argument from authority from vested interests like investors. This is like a movie critic asking a producer if a film is any good.

Oh, and the repeated and sycophantic references to the pop culture quirks of the interviewees are inane and pathetic. These guys might like the Terminator, Lord of the Rings and martial arts, but throwing in these little pop culture soupçons doesn't make the subject matter easier to relate to; it dumbs down and trivialises complex concepts, referred to almost without exception solely by scary-sounding couplets but never actually explained or engaged with - an approach which is simultaneously facile, patronising and incurious. Atomic functional units and program synthesis, natural language, and program analysis. What are these things? They sound interesting. Certainly they sound more interesting than the antics of a bunch of drunk guys in a bar attempting to solve the problems thrown up by abundance and privilege. And let's not pretend that these guys are applying this awesome intellectual horsepower to trying to make the world a better place; they're out to make a buck. Not that there's anything wrong with that per se, but it would behove the writer to take with at least a pinch of salt the grandiosity of the claims made.

Brigham came up with the beautiful idea, which makes its own perfect sense. Cheyer was always the visionary.

This isn't journalism, this is PR. This is advertising copy aimed at establishing a cult of the personality around these three founders. And we wonder why tech firms are so defined by their top people such as Zuck, Jobs, Kalanick and the like. It's (in part) because we're unable to have any kind of public discourse about what their companies do without making it personally all about them. It's the same thing as discussing the personalities of politicians and not the policies they espouse - look at the quality of public discourse about politics in the US and the UK. This is that same paradigm, just applied to tech. Ugh.
posted by dmt at 7:20 PM on April 30, 2015 [49 favorites]


Eurisko won a naval RPG tournament twice

Nit picking -- the Traveller tournament was a spaceship battle game, not naval, not RPG. The game rules were much more like Trafalgar than Ender's Game, though, with two lines facing off against each other and some ships in reserve.

My brain is not large enough to make the connection, but this reminds me of procedural generation in reverse, sort of -- not using algorithms to build things, but taking things and building the algorithms for them.
posted by PandaMomentum at 7:25 PM on April 30, 2015 [5 favorites]


This isn't journalism, this is PR. This is advertising copy aimed at establishing a cult of the personality around these three founders.

Absolutely. I used to lap this sort of thing up in Wired as a nerdy teen, but reading similar articles now I realise that they bear about as much relation to reality as the choose-your-own-adventure books about Ninjas, which were my aspirational literature of choice at age 11 or so.
posted by Jon Mitchell at 7:25 PM on April 30, 2015 [3 favorites]


First they came for the problems, and I said nothing because I was a computer program written to write computer programs to solve problems.
posted by Mr.Encyclopedia at 7:41 PM on April 30, 2015 [8 favorites]


Hmmm. Sounds like a turtles all the way up solution.

Not really. "Answer this voice query" is an object-level problem, and "Figure out how to answer voice queries" is a meta-problem, but "Figure out how to solve meta-problems" is still just a meta-problem, not a meta-meta-problem. Like how in object-oriented programming languages where everything is an object, you only need objects and types, but not meta-types. (42 is an object of type "int", and "int" is an object of type "type", and "type" is an object of type "type", and so on.)

Part of why I'm skeptical about Viv (apart from the ad copy tone of the whole article) is that an effective meta-problem solver would be a true Artificial General Intelligence, and the only barrier to it passing the Turing Test or proving whether P=NP or taking over the world would be giving it the goal to do so. That's still a long way away, and the necessary insights to getting there could be better used to win multiple Turing Awards than found another mobile app startup.

(PandaMomentum, thanks for the correction about Traveller.)
posted by Rangi at 7:41 PM on April 30, 2015 [2 favorites]


You can have Turing completeness without any reified types. You can also have type-classes and functors and monads as various levels above types. Every language in wide usage errs on the side of being able to express absurdities (as opposed to being unable to express certain tricky truths). Object orientation is orthogonal to all of this (see also designs like the Meta Object Protocol in common lisp - a reified system for constructing object systems).

The 80/20 rule is well known, but in terms of levels of meta-programming, every layer (code that writes code, code that recognizes whether goals are met, code that can implement novel algorithms, code that can implement a new language, code that can recognize a problem and set novel goals...) gets harder to implement and harder for humans to facilitate. And we don't really have the first 20% done yet. Our best image recognition still decides a photo of a car is an ostrich if you alter the right individual pixel.
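
(Of that ladder of layers, only the first one is trivially easy -- a toy Python example of "code that writes code", just to fix ideas; everything further up the list is where it gets hard:

src = "def square(x):\n    return x * x\n"   # program text built at runtime
namespace = {}
exec(src, namespace)                          # compile and run the generated code
print(namespace["square"](7))                 # 49

Recognizing whether a generated program actually met its goal, let alone setting novel goals, is a different order of problem.)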
posted by idiopath at 8:27 PM on April 30, 2015 [4 favorites]


The article seems pretty typical in terms of trying to make very specific, advanced technology understandable and compelling for a general audience. You just have to accept that many important details are elided and simplified in the interest of story.

Siri reminds me of when iRobot released the Roomba in 2002. The roboticists I worked with were impressed not because it was super smart and innovative, but because it used innovative-enough technology in the form of a bulletproof consumer product (at least relative to research robots) and did it for a kind of insane price point of only $200. From the description of the Wildfire system linked above, it sounds like it may not have achieved its goals, and those goals were significantly more modest than what Siri has achieved. Siri is the real-world, relatively bulletproof AI/NLP/ASR system that became good enough to be used by millions of people. It's an amazing accomplishment -- and I say that as someone who has studied and designed speech/natural language interfaces.

(The idea that we'd automatically have a complete general AI if we get an effective meta-problem solver doesn't really follow -- it's a little like saying if we could just teach computers to learn they'd become intelligent. We do, and they aren't. Eurisko was in some sense a meta-problem solver; it was able to reason about, create and improve the heuristics it used to solve problems, but... there's just a lot more to general intelligence than that. Eurisko had to be programmed by Doug Lenat for each specific domain: Traveller space combat, circuit design, mathematics, Lisp programming. There is also some question over just how much AM and Eurisko accomplished vs. what was added by Lenat's interpretation and guidance.)
posted by jjwiseman at 8:41 PM on April 30, 2015 [7 favorites]


I think I need to flesh that out more.

You can compute anything computable, using a language without any first class entity that we would call a "type". Of course it gets very clumsy, and for many problems you will provably need to implement what is effectively a type (though it cannot be expressed as such in your language). Similarly, you can compute anything computable with types but no types-of-types. But at certain levels of complexity, once again, this gets clumsy, and you effectively build a system of type-classes or functors, it's just that your language can't recognize them as such, and can't help you in implementing or using them (see for example how often the Maybe monad gets reinvented, in a poor and ad-hoc manner, in languages that will never support monads).
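
(To make the Maybe point concrete, a hypothetical Python sketch of the pattern -- the names and data are made up, but the shape will be familiar:

def find_user(user_id, db):
    return db.get(user_id)        # returns a dict or None

def user_email(user_id, db):
    user = find_user(user_id, db)
    if user is None:              # the ad-hoc "Nothing" branch, re-written at every call site
        return None
    return user.get("email")      # another implicit Maybe, one level deeper

db = {1: {"email": "a@example.com"}, 2: {}}
print(user_email(1, db))  # a@example.com
print(user_email(3, db))  # None -- failure threads through purely by convention

A language with a real Maybe/Option type tracks that "might be missing" structure for you instead of every caller re-implementing it by hand.)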

To bring this back to the thing Rangi was analogizing about, there are also meta-programming tasks, and meta-meta-programming tasks, etc. And just because our language or computational model (or even our tiny human minds) cannot express these concepts, does not make them easier to implement correctly. On the contrary, this makes working with such things all the more error-prone.
posted by idiopath at 8:41 PM on April 30, 2015 [5 favorites]


the name Viv suggests to me that using it will require memorizing a huge list of key combos.
posted by Pope Guilty at 8:42 PM on April 30, 2015 [9 favorites]


Nah, it will just have many different modes, but within a mode, whatever you desire will be granted with a single character of input.
posted by vogon_poet at 8:52 PM on April 30, 2015 [6 favorites]


Vivvr: will follow any instruction as long it's 24 syllables or less.
posted by snuffleupagus at 9:01 PM on April 30, 2015


ViViViViViViv is Viv but with a retro-Commodore 64 aesthetic.
posted by Pope Guilty at 9:26 PM on April 30, 2015 [1 favorite]


I enjoyed the part where Richardson said technologies such as this may destroy our economy, then went straight back to fawning over Viv before he said anything important about the effects of advanced tech on capitalism.
posted by brecc at 9:28 PM on April 30, 2015 [2 favorites]


This is advertising copy aimed at establishing a cult of the personality around these three founders. And we wonder why tech firms are so defined by their top people such as Zuck, Jobs, Kalanick and the like. It's (in part) because we're unable to have any kind of public discourse about what their companies do without making it personally all about them.
It's an interesting exercise to think back and remember that those people - and most of the other big names like Gates, Page, Brin, etc - were, by all contemporary accounts, boring little tits (but obviously with business/tech savvy, &/or vision, &/or drive).

The public 'cult of personality' came later - after they'd actually produced, or at least shown they were on to something.
posted by Pinback at 9:49 PM on April 30, 2015 [2 favorites]


Iridic: "The writer had been working for two weeks without success. Suddenly the solution occurred to him. It was as though a friend he hadn't seen since he was ten years old appeared at the breakfast table one morning to tell him exactly how to proceed: jot down some lightly paradoxical bullshit and refuse to elaborate.

After that, the article pretty much wrote itself."

Totally. Once I got to that graf that purported to "explain" the nut of Viv, I could not possibly stomach reading further. I mean, that was really the best explanation the author could come up with? That was pathetic.
posted by Conrad Cornelius o'Donald o'Dell at 9:58 PM on April 30, 2015


MetaFilter: It's difficult to find the words to express just how much I hated this article.
posted by ostranenie at 9:59 PM on April 30, 2015 [9 favorites]


The concept is interesting — and, as a bunch of other mefites have noted, similar in spirit to the kind of stuff Wolfram has been doing for a long time. I'm curious if there's a difference between Wolfram Alpha's "make the world semantically parse-able" approach and the breathless "She can solve problems!" stuff ascribed to Viv.

It is worth noting, though, that the successful demonstrations of Viv the reporter recounted all involved well-modeled, API-accessible, highly structured data that other parties have already done the hard work of assembling and mapping. If that's the extent of it, they're solving fundamental IA problems rather than AI problems.
posted by verb at 10:12 PM on April 30, 2015 [3 favorites]


User: Viv, what is the meaning of life?

Viv: SOLAR. FREAKIN'. ROADWAYS.
posted by HeroZero at 10:14 PM on April 30, 2015


idiopath, that makes a lot of sense. A human writing in a language with only values and functions (which are themselves values) would have an easier time working with functors, monads, etc. But even if you're not already using a language that supports those, your code style might end up with a (verbose, error-prone, incomplete) implementation of them (Greenspun's tenth rule).

The same thing might be possible with an AI: if it has powerful enough meta-heuristics, then its human programmers don't need to supply it with meta-meta-heuristics if it can figure them out for itself. (Although whatever it figures out might be over-complicated and buggy, so practically speaking we should give it comprehensible meta-meta-heuristics anyway.)

(A real-world example: in school we learned problem-specific solutions like "to solve ax²+bx+c=0, use the formula x=(−b±√[b²−4ac])/(2a)" and general meta-solutions like "Use analogies" or "Draw a diagram." You can apply meta-solutions to themselves to create more, like analogize "diagram" to "Lego model" and get "Build a Lego model" (which will help solve 3D problems better than drawings could).)
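
The problem-specific end of that spectrum is, of course, the trivial part to mechanize. A throwaway Python rendering of the quoted formula (mechanizing "use analogies" is the hard part):

import cmath

def roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0, straight from the formula above."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(roots(1, -3, 2))  # ((2+0j), (1+0j)) -- i.e. x = 2 and x = 1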
posted by Rangi at 10:41 PM on April 30, 2015 [1 favorite]


Oh, I see how this article got written now:
I was surprised but not completely flabbergasted by the phone call I received a few weeks ago. A representative of Arista Networks, a networking company I've written about recently, phoned to inform me that the company's chief executive wanted to offer me "friends and family" shares in Arista's upcoming initial public offering. The offer was explicit, down to the number of shares I'd have the opportunity to purchase at the IPO price. The caller specifically wanted me to understand this offer came directly from CEO Jayshree Ullal.

I declined. I briefly explained that it was impossible for me to accept the gift that was being offered. I also told the (clearly uncomfortable) Arista rep, with whom I've dealt for stories for Fortune, that it is a horrible idea to be making these shares available to me. That's because the company must be similarly propositioning other business partners who, like me, are neither a friend of the company nor family members of its employees.
Startups Are Trying To Bribe Reporters With Equity
posted by benzenedream at 11:36 PM on April 30, 2015 [11 favorites]


How does the quid pro quo work for this kind of breathless startup reporting?

It could go something like this: I write the story about me, with photos, graphics, and quotes. I give the package to the reporter, promising that it is all exclusive to him. No direct cut and paste from the press release, no reusing photos, all exclusive quotes. I organize it in the tone of his publication. I give it to him, ready to file, at the lunch I'm buying him. He hands it in to his editor and gets the $800 or whatever, without spending any or much time on it.
posted by StickyCarpet at 1:01 AM on May 1, 2015 [2 favorites]


"I think the business models will change," Kittlaus says.

And this may be the understatement of the Internet age.


LOL, how kute. "This time it's different."
posted by chavenet at 2:39 AM on May 1, 2015


This is how a man should write AI software.
posted by thelonius at 2:54 AM on May 1, 2015


Wait, Skynet wins by finding us all the best seat on an airplane?
posted by Nanukthedog at 3:17 AM on May 1, 2015 [1 favorite]


That all sounds very interesting, but what I'd really like to see is a technological breakthrough that will make me not feel like a complete jackass when talking to inanimate objects.
posted by usonian at 3:50 AM on May 1, 2015 [2 favorites]


One of these days I will sit down and actually work out how many decades it's been since the first confident pronouncement that strong AI was about a decade away.

It's the world of seamless convenience, all your desires satisfied with a minimum of fuss.

It occurred to me the other day that the increasingly hectic pace of modern existence is a direct consequence of our relentless collective pursuit of convenience at all costs. Things that are convenient don't take as long, which means we expect to be able to do more things in a day, and the aggregate effect of that increasing expectation is that we become required to do more things in a day, which pretty much cancels out any benefit we're getting from the fact that each one of those things is (at least theoretically) more convenient.

I remain completely unconvinced that I actually need to be this busy.
posted by flabdablet at 3:57 AM on May 1, 2015 [7 favorites]


So the Roomba lawn mower could have sensors that told you if you needed more lime, connect you to a supplier, and have the lime delivered?

"Absolutely."

And John's RoombaMower Co. gets a cut?

"Exactly."

And if its program for satellite guidance is great, RoombaMower Co. can sell the software to every aspiring robo-lawn-mower company in the universe?

"Absolutely," Kittlaus says.

RoombaMower will conquer the world!

"That's actually a really good idea," Kittlaus says. "I think you should quit your job and do it."


I've got a better idea. First thing I'm gonna do when my fridge gets Viv is tell it to mash itself up with the Random Startup Generator. Then all these really good ideas will just get implemented automatically, without needing anybody to lift a finger to make them happen. Because DNA!

Viv has the electrolytes that plants crave
posted by flabdablet at 4:11 AM on May 1, 2015 [2 favorites]


They have been about to replace programmers with algorithms for the last fifty years.
posted by sonic meat machine at 4:14 AM on May 1, 2015 [5 favorites]


The Last One is the first of the genre that I actually encountered in the wild.
posted by flabdablet at 4:25 AM on May 1, 2015


Kittlaus

How delightfully Dickensian.
posted by snuffleupagus at 4:26 AM on May 1, 2015


What's amazing is how many best seats there are on an airplane once you use AI to find them.
posted by ardgedee at 4:47 AM on May 1, 2015 [2 favorites]


They have been about to replace programmers with algorithms for the last fifty years.

The funny thing is, the business folks who want to do this usually only have the vaguest idea of the problems they want to have these algorithms solve. Sure they could maybe give a brief description, "Make x better," but they are likely entirely unable to break down either the problem they have or the solution they want into the small discrete steps necessary to put in place an automated solution to solve it.

Because if they could, then they would be programmers.
posted by leotrotsky at 5:23 AM on May 1, 2015 [6 favorites]


Sure they could maybe give a brief description, "Make x better," but they are likely entirely unable to break down either the problem they have or the solution they want into the small discrete steps necessary to put in place an automated solution to solve it.

But don't you see? That's the whole beauty of having a DNA, not a biology, forcing the program to think for itself!

Are you truly so lacking in vision that you can't see the market for a fridge with a green V that you can press and say "Viv, please cure cancer" and the thing just works it out?

I want to give these people all my money, like, yesterday.
posted by flabdablet at 8:28 AM on May 1, 2015


MetaFilter: It's difficult to find the words to express just how much I hated this article.


Did you try googling for them? :-)


Just a note, not that long ago automatic translation was deemed far, far in the future. Many early AI attempts failed utterly and embarrassingly. https://translate.google.com/ is far from perfect but it's better than I can do even with hours poring through a dictionary.

Strong AI will probably creep into existence; the automated phone systems will become virtually indistinguishable from a bored phone center employee and even preferable. The convenience of auto delivery a day before you need toilet paper will be enough for many to embrace giving up data. Do you remember payphones? The Singularity may just be so convenient we happily become digi-eloi.
posted by sammyo at 9:04 AM on May 1, 2015


Behind all the breathless hype, this seems like it really is just a better Siri; it has the same natural-language capabilities to be an interface with apps and Internet services, but also some kind of active learning ability to figure out how the user wants it to deal with a new app or API. It's believable to me that this could work well a lot of the time, especially with expert guidance constantly being fed in from HQ.
posted by vogon_poet at 10:19 AM on May 1, 2015 [1 favorite]


This whole thing looks like a sales pitch to the "New Microsoft." Then the "Holo" viewing technology the new Microsoft is creating looks like a sales pitch to security agencies. Imagine: put on your ubiquitous holo glasses and watch crisp footage from surveillance posts anywhere, while doing anything else. They mentioned making games play all over the room you are in, so what if the room was big enough to put any "game" in it?

"Information" is advertising anymore. It seems that speech and capital potential are twinned.
posted by Oyéah at 11:32 AM on May 1, 2015


expert guidance constantly being fed in from HQ

will rapidly become every bit as worthless and annoying as every other form of paid advertising.
posted by flabdablet at 12:01 PM on May 1, 2015


A real-world conversation I had with my phone just now:

"Siri, are you intelligent?"

"I couldn't even begin to think about knowing how to answer that question."

"That's you and me both, Siri."

"Who, me?"
posted by slappy_pinchbottom at 12:14 PM on May 1, 2015 [2 favorites]


Skynet is here
posted by Ironmouth at 12:54 PM on May 1, 2015


@interfluidity · 7:34 PM - Apr 28: "capitalism will be complete when implantable devices offer in-app purchases."

fwiw, i was reading about how siri runs on apache mesos and then came across apache spark and this reddit AMA with its creator matei zaharia:
Initially, we designed Spark to demonstrate how to build a new, specialized framework on Mesos (specifically one that did machine learning). The idea was that by writing just a few hundred lines of code, and using all the scheduling and communication primitives in Mesos, you could get something that ran 100x faster than generic frameworks for a particular workload. After we did this though, we quickly moved to a goal that was in some ways the opposite, which was to design a multi-purpose framework that was a generalization of MapReduce, as opposed to something you'd only run alongside it. The reason is that we saw we could do a lot of other workloads beyond machine learning and that it was much easier for users to have a single API and engine to combine them in...

One of our goals in Spark is to have a unified engine where you can combine SQL with other types of processing (e.g. custom functions in Scala / Python or machine learning libraries), so we can't use a separate engine just for SQL. However, Spark SQL uses a lot of the optimizations in modern analytical databases and Impala, such as columnar storage, code generation, etc...

There might definitely be competing ML libraries, but more generally we're also happy to include more algorithms in MLlib itself (which has the benefit that your algorithm will be maintained and updated with the rest of Spark). The problem is that there are a lot of algorithms and ways to implement the same algorithm, so it's unlikely that we'll be able to cover everything. Right now we only put in algorithms that are very commonly used and whose parallel implementations are well-understood, so that we're sure we can maintain this algorithm for a long time into the future.

Edited to add: the ML pipeline API is also designed to let people plug in new algorithms, so I hope that many third-party libraries use this API to plug into existing MLlib apps...

The big data space is indeed one with lots of software projects, which can make it confusing to pick the tool(s) to use. I personally do think efforts will consolidate and I hope many will consolidate around Spark :). But one of the reasons why there were so many projects is that people built new computing engines for each problem they wanted to parallelize (e.g. graph problems, batch processing, SQL, etc). In Spark we sought to instead design a general engine that unifies these computing models and captures the common pieces that you'd have to rebuild for each one (e.g. fault tolerance, locality-aware scheduling, etc). We were fairly successful, in that we can generally match the performance of specialized engines but we save a lot of programming effort by reusing 90% of our code across these. So I do think that unified / common engines will reduce the amount of fragmentation here...

And I'd also like to see DataFrames from Spark SQL be even more of a first-class concept because they are a much better way to represent data than raw Java/Python objects (they let us do a lot more compression, etc by understanding the format of the data)...

The main benefits of Spark over the current layers are 1) unified programming model (you don't need to stitch together many different APIs and languages, you can call everything as functions in one language) and 2) performance optimizations that you can get from seeing the code for a complete pipeline and optimizing across the different processing functions. People also like 1) because it means much fewer different tools to learn...

Another area we are working on is defining APIs that lead to better automatic optimizations and more compact storage of data. The new DataFrame API is a nice step towards this because it's still pretty general (you can easily use it to put together a computation) but it tells Spark more about the structure of your data and of the operations you want to run on it, so that Spark can do better optimizations (similar to the optimizations we do for SQL).
that i thought sounded pretty interesting!
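
(if the DataFrame bit is abstract, here's roughly what it looks like from python -- toy data and a made-up pass/fail column, using the spark 1.3-era API the AMA is describing, not anything from the article:

from pyspark import SparkContext
from pyspark.sql import SQLContext, Row
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

sc = SparkContext("local", "demo")
sqlContext = SQLContext(sc)

df = sqlContext.createDataFrame([Row(name="ada", score=91), Row(name="bob", score=48)])

grade = udf(lambda s: "pass" if s >= 50 else "fail", StringType())  # arbitrary python function

df.filter(df.score > 40).select(df.name, grade(df.score).alias("grade")).show()

the SQL-ish operators and the custom lambda go through the same engine and optimizer, which is the unification zaharia is talking about)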
posted by kliuless at 2:36 PM on May 1, 2015 [1 favorite]


That all sounds very interesting, but what I'd really like to see is a technological breakthrough that will make me not feel like a complete jackass when talking to inanimate objects.

We can put little waving arms on them if it will help.
posted by Tell Me No Lies at 6:07 PM on May 1, 2015 [2 favorites]


There is a very large difference between machine learning, as it exists, and artificial intelligence, as it is popularly conceived. Machine learning is the application of (human-supplied) heuristics to data in order to deliver potentially surprising insights. For example, if an analyst looks at data and says "hmm, let's try fitting it with this formula," and then flags the result as interesting, the machine learning library can take all of its data sets and look for other, similar correlations. AI—the popular idea of it—and automatic programming will not be possible until it is the computer that can say "hmm, this curve is interesting! Let's apply that to eighteen thousand datasets simultaneously!"
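
In code, the division of labor looks something like this (fake data, and the crucial model choice is the human-supplied part -- a sketch of the workflow described above, not anything Viv ships):

import numpy as np
from scipy.optimize import curve_fit

def human_supplied_model(x, a, b):     # the analyst's "hmm, let's try fitting it with this formula"
    return a * np.exp(b * x)

x = np.linspace(0, 4, 50)
y = 2.0 * np.exp(0.5 * x) + np.random.normal(0, 0.2, x.size)   # noisy synthetic data

params, _ = curve_fit(human_supplied_model, x, y)
print(params)   # roughly [2.0, 0.5]; the machine fits the curve, it never proposes one itself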

Unfortunately (or fortunately, depending on how you look at it), programming and problem solving are as much about aesthetics at this stage as they are about repeatable heuristics.
posted by sonic meat machine at 8:16 PM on May 1, 2015 [1 favorite]


i guess what i'm thinking where we're going is that when you have potentially thousands of developers working on what could be a qualitatively better open-source version of watson/siri/google now/wolfram alpha/cortana/viv running on publicly available networked supercomputers then what you'd have isn't an artificial intelligence but a superhuman hivemind... altho whether it'd be 'intelligent' or not is another question :P (and then if you had brain-to-brain interfaces you'd have a ramez naam trilogy!)

however keep in mind that: "It's always about who gets to write the laws."
posted by kliuless at 6:59 AM on May 2, 2015


Speaking of an open-source Siri, there's Sirius (github): Meet Sirius, the open-source Siri clone that runs on Ubuntu.

It's a neat illustration of how one might build a Siri-type system, gathering the basic components from other open source projects: Kaldi (or Pocketsphinx or Sphinx) for speech recognition, OpenCV to extract SURF features for object recognition in images, using the LAPOS part-of-speech tagger with OpenEphyra to answer questions. And using some of the best open source code for doing these tasks, it is essentially a toy system that is 10-20x slower than Siri. Which is fine, it's only been around for a year, but still points out just what a huge amount of hard work goes into creating something as robust as a production-ready (to Apple's standards) Siri.
posted by jjwiseman at 9:19 AM on May 2, 2015 [1 favorite]


By forcing all of us into that same seat, Skynet compresses us into an easily-digestible slurry.
posted by rum-soaked space hobo at 11:46 AM on May 2, 2015


what I'd really like to see is a technological breakthrough that will make me not feel like a complete jackass when talking to inanimate objects.
posted by usonian


I'm the same way. I think using something like this when talking to the computer might make me feel less ridiculous.
posted by StickyCarpet at 1:13 PM on May 2, 2015




I am going to laugh so hard if the first genuinely intelligent computer program turns out to be one written in PHP.
posted by flabdablet at 4:44 AM on May 6, 2015 [3 favorites]




This thread has been archived and is closed to new comments