Deep Alice
April 19, 2016 6:24 AM   Subscribe

Recently there was a post about using deep learning techniques to apply artistic styles from one image to another. Here is a similar technique applied to moving video from Disney's Alice in Wonderland, using a number of well-known paintings to modify the source. posted by codacorolla (15 comments total) 23 users marked this as a favorite
 
That broke my brain: I think some of the subsystems maxed out on CPU.

How the hell Alice got anime eyes in The Great Wave off Kanagawa mix is... perhaps I didn't really see that?

+1 for my theory that this is the future of surrealism and/or animation art. On the deep Google side, a friend told me that some of the more striking images come from stopping the algorithm at an interesting point and then handing the result over to artists to tune. Which is another path to explore.

Now, if you'll excuse me, I have to try and return to the world I was in before watching that. Not quite sure I'll make it.
posted by Devonian at 6:40 AM on April 19, 2016 [1 favorite]


I want this done with The Hallucinogenic Toreador.
posted by dozo at 6:42 AM on April 19, 2016 [1 favorite]


If you didn't watch all the way through, be sure to check out the Sol LeWitt section at about 1:45. That's the best, as far as I am concerned.
posted by Rock Steady at 6:44 AM on April 19, 2016


From what people were saying in the other thread, you need a beefy system to get modest times on still images. I wonder how long this took to process?
posted by codacorolla at 6:51 AM on April 19, 2016 [2 favorites]


I mean soon enough we'll have the processing power to do this in real time and feed it directly into our ubiquitous AR/VR units, right?
posted by griphus at 6:52 AM on April 19, 2016 [2 favorites]


I can't wait til someone figures out how to do this with music.
posted by STFUDonnie at 6:53 AM on April 19, 2016 [3 favorites]


STFUDonnie:
I can't wait til someone figures out how to do this with music.
Songsmith has you covered. (previously)
posted by Vendar at 7:00 AM on April 19, 2016


Recent post with advice for your first mushroom trip may be relevant here.
posted by Kabanos at 7:19 AM on April 19, 2016 [2 favorites]


I am an animator and I love this.

Some of these were more successful than others (and yeah, processing power is an issue) but I am genuinely surprised how great these look and also how painterly.

I am really excited for this technology to get to the point that an animator who was also a painter could potentially give their own animation the same quality their paintings have, without having to do it all frame-by-frame.
posted by matcha action at 7:19 AM on April 19, 2016 [1 favorite]


I've seen this gimmick enough times now that they're all starting to look the same. It's like when automated rotoscoping became cheap enough that suddenly everything was rotoscoped for six months. It's tedious.

I love the technique, I'm just waiting for someone to apply this thing in a way that involves artistry. Don't just apply a random selection of 20th century paintings on top of a random clip from a Disney film. Do something creative.

harumpf!
posted by Nelson at 7:55 AM on April 19, 2016 [5 favorites]


I don't think this looks very good. The original technique was done on static images, and I suspect that this video is constructed by using the technique on each frame individually. There is very little temporal cohesion.
posted by demiurge at 7:58 AM on April 19, 2016 [2 favorites]
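demiurge's point about temporal cohesion can be illustrated with a toy numpy sketch. This is a stand-in, not the actual pipeline: the real per-frame optimization is modeled here as adding run-specific noise, since each independent run starts from scratch and can land in a different local optimum. Identical input frames then produce visibly different outputs (flicker), while warm-starting each frame from the previous frame's result keeps consecutive outputs close.

```python
import numpy as np

def stylize(frame, rng, strength=0.3):
    """Toy stand-in for one independent style-transfer run.
    The real optimization restarts for every frame, so two runs on
    the same frame diverge; modeled as frame plus run-specific noise."""
    return frame + strength * rng.standard_normal(frame.shape)

frame = np.zeros((4, 4))  # two identical input frames in a row

# Independent runs: same input, different outputs -> flicker.
out_a = stylize(frame, np.random.default_rng(1))
out_b = stylize(frame, np.random.default_rng(2))
flicker = np.abs(out_a - out_b).mean()

# Warm start: perturb the previous frame's output only slightly.
out_c = out_a + 0.01 * np.random.default_rng(3).standard_normal(frame.shape)
coherent = np.abs(out_a - out_c).mean()

assert coherent < flicker  # warm-started frames stay far closer together
```

Later video style-transfer work addressed this more directly with optical-flow-based temporal losses, but the warm-start intuition is the simplest version of the fix.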


I'd like to see it with the 'painting' influence throttled down to barely detectable, and the 'film' source actual live footage. I guess I'm wondering if there is some achievable subtle/subconscious surrealism possible.
posted by j_curiouser at 8:32 AM on April 19, 2016


> I can't wait til someone figures out how to do this with music.

I've argued that that's what a vocoder does.
posted by benito.strauss at 9:29 AM on April 19, 2016 [2 favorites]


> I've seen this gimmick enough times now that they're all starting to look the same.

This is because no one is teaching the networks new styles. They are all just running the code stock on various images/videos.

It's like using a filter in Photoshop, but claiming extra skills and techniques were used. But you just ran a filter. And then they get press, get passed around, and get gawked at like they're special.

Even the apps that have packaged up the software for ease of use just use the same datasets without any way to train new ones. Like this one: https://71squared.com/deepdreamer

That's not to say you couldn't do the work to train new ones. It's just not simple, it requires time, power, and skill.
posted by OwlBoy at 1:10 PM on April 19, 2016
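For context on what the stock code actually optimizes: in the Gatys et al. style-transfer technique behind videos like this one, "style" is captured as a Gram-matrix statistic of a style image's feature maps under a fixed pretrained network, so swapping in a new style image requires no retraining; training a whole new network, as described above for the DeepDream-style apps, is the expensive path. Below is a minimal numpy sketch of that statistic, with toy random arrays standing in for VGG features (an assumption for illustration only). Note the Gram matrix is invariant to spatial shuffling, which is why it encodes texture and brushwork rather than composition.

```python
import numpy as np

def gram(features):
    """Gram matrix of a (channels, height*width) feature map: the style
    statistic from Gatys et al. It records which channels co-activate,
    discarding where in the image they activate."""
    c, n = features.shape
    return features @ features.T / n

def style_loss(f_generated, f_style):
    """Mean squared difference between the two Gram matrices."""
    return np.mean((gram(f_generated) - gram(f_style)) ** 2)

rng = np.random.default_rng(0)
style_feats = rng.standard_normal((8, 64))  # toy stand-in for VGG features

same = style_loss(style_feats, style_feats)           # identical: zero loss
shifted = style_loss(np.roll(style_feats, 7, axis=1), style_feats)
other = style_loss(rng.standard_normal((8, 64)), style_feats)

assert same == 0.0
assert shifted < 1e-9  # spatial shift barely changes the style statistic
assert other > shifted  # genuinely different features do change it
```

In the full method this loss is summed over several network layers and minimized by gradient descent on the generated image's pixels, alongside a content loss; the network weights themselves never change.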


I am really excited for this technology to get to the point that an animator who was also a painter could potentially give their own animation the same quality their paintings have, without having to do it all frame-by-frame.

That reminded me of this video, which might be another technique useful for the same purpose and applicable in combination.
posted by NMcCoy at 10:52 PM on April 19, 2016




This thread has been archived and is closed to new comments