Computer science professor Jordan Boyd-Graber is currently working on a National Science Foundation grant, "Bayesian Thinking on Your Feet: Embedding Generative Models in Reinforcement Learning for Sequentially Revealed Data." At first glance, this might not sound like fun, but in the paper "Besting the Quiz Master," Boyd-Graber showed how machine learning could be used to create a quiz bowl version of the Terminator that can take all human comers. This weekend, that proposed machine finally played a nerve-wracking 200-200 tie game against a team of four Jeopardy! champions (Kristin Sausville of lone-contestant Final Jeopardy fame, Teachers Tournament winner Colby Burnett, professional poker player Alex Jacob, and underdog Tournament of Champions winner Ben Ingram).
Siri talked only to a few limited functions, like the map, the datebook, and Google. All the imitators, from the outright copies like Google Now and Microsoft's Cortana to a host of more-focused applications with names like Amazon Echo, Samsung S Voice, Evi, and Maluuba, followed the same principle. The problem was you had to code everything. You had to tell the computer what to think. Linking a single function to Siri took months of expensive computer science. You had to anticipate all the possibilities and account for nearly infinite outcomes. If you tried to open that up to the world, other people would just come along and write new rules and everything would get snarled in the inevitable conflicts of competing agendas—just like life. Even the famous supercomputers that beat Kasparov and won Jeopardy! follow those principles. That was the "pain point," the place where everything stops: There were too many rules.
The idea was audacious. They would be creating a DNA, not a biology, forcing the program to think for itself. (John H. Richardson for Esquire)
History Lab has "focused on digitizing, structuring and visualizing large sets of declassified US government documents. This is a starting point for showcasing how computational techniques can aid historical research." Can big-data analysis show what kinds of information the government is keeping classified? [more inside]
The Deep Mind of Demis Hassabis - "The big thing is what we call transfer learning. You've mastered one domain of things, how do you abstract that into something that's almost like a library of knowledge that you can now usefully apply in a new domain? That's the key to general knowledge. At the moment, we are good at processing perceptual information and then picking an action based on that. But when it goes to the next level, the concept level, nobody has been able to do that." (previously: 1,2) [more inside]
Essays and longer texts written in English can provide interesting insights into the linguistic background of the writer, and even into the history of other languages, including dying ones, when evaluated by a new computer program developed by a team of computer scientists at MIT and Israel's Technion. As told on NPR, the discovery came about by accident, when the program classified a writer as Russian rather than Polish because of grammatical similarities between the two languages. The researchers realized this meant the program could re-create language families, and could be applied to people who no longer speak their heritage language, allowing some categorization of dying languages. More from MIT, and a link to the paper (PDF, from the 2014 Meeting of the Association for Computational Linguistics).
A robot with a broken leg learns to walk again.
A heat map of your preferences over the beer space. Developer Kevin Jamieson writes, "Beer Mapper is a practical implementation of my Active Ranking work on an Apple iPad. The application presents a pair of beers, one pair at a time, from a list of beers that you have indicated you know or have access to and then asks you to select which one you prefer. After you have provided a number of answers, the application shows you a heat map of your preferences over the 'beer space.'" [more inside]
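The idea behind Active Ranking is an "ideal point" model: the user is assumed to have an ideal point somewhere in an embedding of the beers, and in any pair they prefer whichever beer lies closer to that point. A minimal sketch of that model, with made-up 2D "beer space" coordinates and a brute-force grid search standing in for the paper's adaptive query selection:

```python
import numpy as np

# Hypothetical 2D "beer space" coordinates (illustrative only).
beers = {
    "pilsner": np.array([0.0, 0.0]),
    "ipa": np.array([3.0, 1.0]),
    "stout": np.array([1.0, 4.0]),
    "sour": np.array([4.0, 4.0]),
}

# Pairwise answers as (winner, loser): the user preferred the first beer.
answers = [("ipa", "pilsner"), ("ipa", "stout"), ("sour", "stout")]

def violations(point, answers, beers):
    """Count comparisons inconsistent with 'prefer the closer beer'."""
    bad = 0
    for winner, loser in answers:
        if np.linalg.norm(point - beers[winner]) > np.linalg.norm(point - beers[loser]):
            bad += 1
    return bad

# Brute-force search for the ideal point over a grid; the real system
# instead picks each next query adaptively, so it needs far fewer
# comparisons than ranking every pair.
grid = [np.array([x, y]) for x in np.linspace(0, 5, 51) for y in np.linspace(0, 5, 51)]
ideal = min(grid, key=lambda p: violations(p, answers, beers))

# The "heat map" is essentially distance to the ideal point:
# closer beers are predicted to be preferred.
for name, pos in sorted(beers.items(), key=lambda kv: np.linalg.norm(ideal - kv[1])):
    print(name, round(float(np.linalg.norm(ideal - pos)), 2))
```

With these toy answers the estimated ideal point lands near the IPA, so hoppy beers would show up "hot" on the map.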
"The discovery advances UC Berkeley’s mission to make sense of big data and to use new technology to document and maintain endangered languages as critical resources for preserving cultures and knowledge. [...] it can also provide clues to how languages might change years from now."
I See What You Did There: Software Uses Video to Infer Game Rules and Achieve Victory Conditions. A French computer scientist has constructed a system that successfully divines the rules to simple games just by using video input of human players at work.
How do robots see the world? This is an experiment in found machine-vision footage, exploring the aesthetics of the robot eye. [SLVimeo]
In the recent MIT symposium "Brains, Minds and Machines," Chomsky criticized the use of purely statistical methods to understand linguistic behavior. Google's Director of Research, Peter Norvig responds. (via) [more inside]
A visualization of all the nouns in the English language arranged by semantic meaning. [NSFW words included!] [more inside]
Introduced to Western pop culture by the Beatles in their song Norwegian Wood, the sitar has featured prominently in North Indian classical music for centuries. Princeton-based computer scientist Ajay Kapur updates the instrument with his ESitar, an audio and video controller that uses gesture input (PDF) and machine learning algorithms to let the computer join Ajay in his sitar performances. Undergraduate engineering students at the University of Pennsylvania work from the other direction, building RAVI-bot, an award-winning, self-playing robotic sitar (YouTube) programmed to generate music from classical raga scales and melodies all on its own. For those in the Philadelphia area, be sure to check out a live performance of RAVI-bot at the local Klein Art Gallery.