2020: A Year Full of Amazing AI Papers - A Review
December 25, 2020 1:58 PM

A curated list of the latest breakthroughs in AI by release date, each with a clear video explanation, a link to a more in-depth article, and (when available) code. (Also available on Medium.) Even with everything that happened in the world this year, we still had the chance to see a lot of amazing research come out, especially in the field of artificial intelligence. Many important aspects were highlighted this year, for example ethical questions and important biases. Artificial intelligence, and our understanding of the human brain and its link to AI, is constantly evolving, showing promising applications in the near future.
Here are the most interesting research papers of the year, in case you missed any of them. In short, it is a curated list of the latest breakthroughs in AI and Data Science by release date, with a clear video explanation, a link to a more in-depth article, and code (if applicable).

The complete reference to each paper is listed at the end of this repository.

The Full List
Paper references

posted by infinite intimation (13 comments total) 35 users marked this as a favorite
 
Is the fact that so many of these are about images reflective of where AI research is right now, or is this just what the curator is interested in, or ... ? (My intelligence, such as it is, is very much home-grown; I don't know much at all about AI.)
posted by zenzenobia at 2:24 PM on December 25, 2020 [1 favorite]


I'm guessing it's driven by the medium -- all of these seem to have YouTube links, and working in an image or graphical domain makes the results more interesting to laymen. Same way that SIGGRAPH's demo reel is widely anticipated beyond those researching in the field -- the results are easy to understand just by looking at them, even when the math underpinning them is unfathomable.
posted by pwnguin at 2:28 PM on December 25, 2020


I am not an expert at all, just interested in the way these tools seem to be exploding.

I think generally it’s about training computers to ‘develop’ methods not only of sensing (we’ve had webcams for decades), but of interpreting and analyzing sensory input, be it optical or audio. The one that detects objects in a dash-cam video is essentially a basic evolutionary ancestor of some sort of autonomous driving tool. So while they seem focused on more ‘entertainment/media’ uses today, these same tools might be used in much more complex stacks of processes moving forward.
I do think a lot of this sort of AI research is focused around things we can see or listen to simply because those are the sensors that are best refined. These GANs seem to work by analyzing a MASSIVE database of footage/images/audio etc., and then “adversarially” asking, “is this that?” (hot dog / not hot dog). There’s a rough sketch of what I mean by “adversarially” at the end of this comment.

Currently our ability to collect and catalog databases of, for example, chemo-analytical data (an artificial nose/taster) is not widely available or easily accessed (most are proprietary databases/tools used only by large organizations). It’s simply much more possible to collect massive databases of celebrities in every possible light and angle and train systems to manipulate them and get them to say things, or to colorize/interpolate frames in low-frame-rate old-time videos.
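
For what it’s worth, here is a very rough PyTorch sketch of that “adversarial” loop: a generator produces fakes, a discriminator tries to call them out (“is this that?”), and each side improves against the other. The toy data, layer sizes, and step count are made up for illustration and don’t come from any particular paper.

```python
# Minimal sketch of GAN-style adversarial training (toy data, made-up sizes).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 28 * 28), nn.Tanh())
D = nn.Sequential(nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.rand(32, 28 * 28) * 2 - 1        # stand-in for a batch of real images
    fake = G(torch.randn(32, 64))                 # generator's forgeries from random noise

    # Discriminator step: learn to label real as 1 and fake as 0
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: learn to make the discriminator call its fakes "real"
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```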
posted by infinite intimation at 2:37 PM on December 25, 2020 [1 favorite]


It's also that these sorts of low-level applications are really where AI research is at; progress on general AI has stalled.
Science Magazine: Core progress in AI has stalled in some fields
Scientific American: Will Artificial Intelligence Ever Live Up to Its Hype?
Nature.Com: Why general artificial intelligence will not be realized
Sicara: 3 Reasons Why We Are Far From Achieving Artificial General Intelligence
posted by happyroach at 3:16 PM on December 25, 2020 [3 favorites]


As someone who works in this space I find the general inaccessibility and tone of most AI articles almost off-putting. Like, I get they’re academic papers, but I feel as if a lot of it is to impress VCs or corporate executives into throwing money at it. Like instead of saying “markup language” they’d rather say “Hungarian Grammar Notation” to look smart. Maybe I just don’t have the smarts for it, or maybe I went to school before AI was a Thing and don’t really grasp some of the terminology, but I kind of feel as if this is all what we used to call statistical analysis, to a large degree. There’s also a huge degree of snobbery from the AI researchers toward the actual engineering side of things, or at least that’s been my experience, and maybe that’s why I’m put off by the tone of it.
posted by geoff. at 5:57 PM on December 25, 2020 [3 favorites]


I feel as if a lot of it is to impress VCs or corporate executives into throwing money at it.

"Not hot dog."
posted by gimonca at 6:48 AM on December 26, 2020 [2 favorites]


Nirv’na sings Christmas 🎄 songs. Created through computer synthesis.
posted by infinite intimation at 10:37 AM on December 26, 2020


A Christmas 🎄 carell.

Channel 4 under fire for deepfake Queen's Christmas message. “There are few things more hurtful than someone telling you they prefer the company of Canadians...”
posted by infinite intimation at 10:45 AM on December 26, 2020


To Cartoonize Our Gentlemen
posted by thelonius at 10:48 AM on December 26, 2020


To me, none of this is AI. Where is the 'I'?
posted by GallonOfAlan at 1:25 PM on December 26, 2020 [2 favorites]


That photo colorizer is kinda fun and free to use.
posted by RobotVoodooPower at 1:33 PM on December 26, 2020 [1 favorite]


To me, none of this is AI. Where is the 'I'?

I mean, to some extent this is the central debate around "AI" -- what is "intelligence", really? There's a semi-joke I've heard: you call it "ML" (machine learning) when you're talking to other engineers, and "AI" when you're talking to investors. This sorta captures the fact that, to some degree, the "I" in AI is hype and bullshit.

But... it's not just hype. There's more to it than that. In the last few years, there really has been a dramatic expansion of what's possible to do algorithmically, into the realm of what used to be only possible to do by hand.

An example from my day job: we have a database of ratings, each with a numeric (1-5) score and a free-form written component. We want to find the ratings where the numeric score disagrees with the written text -- i.e., where someone has given a low score but written positive words, or vice versa. Today, this is easy, almost trivial, to do using sentiment analysis, and it's highly accurate.
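
To make that concrete, here's roughly what such a check can look like with an off-the-shelf sentiment model (the Hugging Face transformers pipeline). The ratings data and the 4-star cutoff are made up for the example, not our actual setup.

```python
# Flag ratings where the written text and the numeric score disagree.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default pretrained sentiment model

ratings = [
    {"stars": 1, "text": "Absolutely loved it, would buy again."},
    {"stars": 5, "text": "Broke after two days, very disappointing."},
    {"stars": 4, "text": "Does what it says on the tin."},
]

for r in ratings:
    result = sentiment(r["text"])[0]            # e.g. {'label': 'POSITIVE', 'score': 0.99}
    text_is_positive = result["label"] == "POSITIVE"
    score_is_positive = r["stars"] >= 4         # assumed cutoff for a "good" numeric score
    if text_is_positive != score_is_positive:   # the words and the number disagree
        print(f'Mismatch: {r["stars"]} stars but text reads {result["label"]}: {r["text"]}')
```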

This feels quite different from the kind of software development I've done for most of my career, in two main ways:

a) it's performing a task that seemingly requires human-level language recognition (namely, reading a chunk of text and telling how positive or negative it is)
b) it's doing it in a way that's highly accurate and incredibly fast (it's as accurate as human classification, but can process millions of ratings in seconds)

For me, this is the "intelligence" part. This is why I'm comfortable calling it AI. Not because I believe it's "real" intelligence in the actually-has-consciousness sense, but because it's doing a thing that until very recently only humans could do, with (seemingly) surprisingly deep understanding.

One more example from these papers: the paper on transferring clothes between humans. The authors of this paper built an algorithm that, given a picture of Person A wearing some outfit, and Person B wearing something else, can generate a totally synthetic picture of Person B wearing Person A's clothes. (Watch the first bit of the video - it's pretty impressive.)

This wouldn't be particularly hard for a human, assuming they could draw accurately -- they'd look at both pictures and ... draw the synthesized person/clothes. But for a computer, this requires understanding:

- which parts of a picture are clothes, and which human (there's a rough sketch of just this piece after the list)
- body shapes and clothing, enough to "guess" what the parts of an outfit that are occluded might look like
- how bodies move such that the clothes from one body can be repositioned accurately onto another
- and probably more
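
For that first bullet, an off-the-shelf segmentation model already gets you surprisingly far. This is emphatically not the paper's method, just a hint at the kind of per-pixel "understanding" involved; the image filename is hypothetical.

```python
# Separate "person" pixels from everything else with a pretrained segmentation model.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.segmentation.deeplabv3_resnet50(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("person_a.jpg").convert("RGB")    # hypothetical input photo
batch = preprocess(img).unsqueeze(0)               # shape (1, 3, H, W)

with torch.no_grad():
    out = model(batch)["out"][0]                   # (21, H, W) per-pixel class scores

# Class index 15 is "person" in the Pascal VOC label set these weights use
person_mask = out.argmax(0) == 15
print(f"{person_mask.float().mean().item():.1%} of pixels were labelled as person")
```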

This is something different from what computers were capable of until just a little while ago. I'm not a philosopher, and not super interested in the "but what is intelligence really" part of this discussion. But I am a software developer, and there is something new about this. I roll my eyes at the ridiculous levels of AI hype, too, but I also don't have a better name that captures just how different this all is from what we had just a few years ago.
posted by jacobian at 4:00 PM on December 26, 2020 [7 favorites]


This is something different from what computers were capable of until just a little while ago. I'm not a philosopher, and not super interested in the "but what is intelligence really" part of this discussion. But I am a software developer, and there is something new about this. I roll my eyes at the ridiculous levels of AI hype, too, but I also don't have a better name that captures just how different this all is from what we had just a few years ago.
Seconding this — I'd add a couple of even simpler examples: how many of us search our photo collections using things like face recognition & clustering, or subject tags which were generated by computer? This is not human-level in many cases, but by now it's basically assumed by users because it works well enough to be useful, and relatively few people have the patience to perform manual assessment at the level needed to outperform it. Machine translation between languages is similarly expected by most people these days — built into the UI in most browsers, and something people just assume they'll be able to use when they make international travel plans (well, back when that was something we did), rather than memorizing a bunch of text to look for on menus.

There's an interesting debate about how much using the term “AI” both helps and hurts the field. It's generated tons of funding and some good successes, along with plenty of busts, but using the term “intelligence” really sets some big expectations, because most people do not read the term with an accurate understanding of how narrow many of these skills are or how nonsensical the failure modes can appear to a human reviewer. Earlier terms like “computer vision” had the helpful trait of not promising higher-level human-class understanding, but they definitely didn't attract the same kind of attention or money, either.
posted by adamsc at 9:15 AM on December 28, 2020 [1 favorite]

