Ostagram
April 2, 2016 12:06 AM

Ostagram (github) uses neural networks to combine two images, rendering the first (may be NSFW) in the style of the second, with awesome and terrifying results.
posted by dilaudid (70 comments total) 150 users marked this as a favorite
 
I clicked on the last link, and literally LOLed uncontrollably.
posted by i_am_joe's_spleen at 1:15 AM on April 2, 2016 [11 favorites]


I think this is amazing.

More info:
- the underlying software
- original paper (A Neural Algorithm of Artistic Style)
posted by ormon nekas at 1:16 AM on April 2, 2016 [9 favorites]


Some of those are spectacular. The tennis player with the grass is super creepy awesome. And the woman in the room with the bricks with the wave...really nice. (The wave with Cookie Monster not so much...)
posted by leahwrenn at 2:01 AM on April 2, 2016 [1 favorite]


Funny, I thought Cookie Monster with the wave was the better of the two. Very interesting, especially if you could get prints.
posted by doozer_ex_machina at 2:11 AM on April 2, 2016 [5 favorites]


Very impressive! there are many more examples here.
posted by misteraitch at 2:42 AM on April 2, 2016 [5 favorites]


I'm guessing that the name “Ostagram” comes from the Russian word ostranenie, for the artistic strategy of presenting the familiar in a strange style, rather than being a Scandinavian-language reference to having eaten too much cheese before going to sleep.
posted by acb at 4:35 AM on April 2, 2016 [8 favorites]


Mesmerizing! The singularity approaches...
posted by STFUDonnie at 4:37 AM on April 2, 2016


rather than being a Scandinavian-language reference to having eaten too much cheese before going to sleep.

Or maybe Kurdish?
posted by hal9k at 5:11 AM on April 2, 2016


I guess I expected artists to be one of the last groups rendered obsolete, rather than the first.

Welp. Off to the protein strippers, everyone!
posted by aramaic at 5:35 AM on April 2, 2016 [9 favorites]


Love that Cookie Monster one!

A similar site is Deepart.io, which is open to new registrations/submissions, unlike Ostagram as far as I can tell.

I had some fun with it earlier this week: result/source, r/s, r/s, r/s, r/s, r/s, r/s, r/s, r/s, r/s, r/s.
posted by Maladroid at 5:49 AM on April 2, 2016 [8 favorites]


So how do I do these as a regular computer user? Saw the linux instructions, but....Yeah.
posted by nevercalm at 6:00 AM on April 2, 2016


Excellent! See also: Pikazo.
posted by BeBoth at 6:00 AM on April 2, 2016 [2 favorites]


The terrifying link isn't terrifying. That looks like a Conan made of dicks, which is both hilarious and thematically appropriate.
posted by middleclasstool at 6:06 AM on April 2, 2016 [3 favorites]


Literally meatloaf.
posted by srboisvert at 6:07 AM on April 2, 2016 [2 favorites]


nevercalm: So how do I do these as a regular computer user? Saw the linux instructions, but....Yeah.

You could probably do this in a virtual machine (though I'd expect it to be *painfully* slow as it's killing my laptop right now in my testing, and that is running Ubuntu natively). Grab a desktop image of Ubuntu (or probably any Debian-based distribution, but Ubuntu is easy) and run that. The instructions are literally copy/paste lines to paste into a terminal and will Just Work.

You could also probably do this in something like Amazon Web Services which would give you more power but is more difficult to set up.
posted by fader at 6:52 AM on April 2, 2016 [1 favorite]


This is relevant to my interests.
posted by ostranenie at 6:56 AM on April 2, 2016 [8 favorites]


Here's Eakin's Gross Clinic crossed with Guernica.

I think it turned out pretty well! It seemed to recognize where the faces were and applied the cubist style to them.
posted by codacorolla at 7:48 AM on April 2, 2016


I guess I expected artists to be one of the last groups rendered obsolete, rather than the first.
Welp. Off to the protein strippers, everyone!


There is something sort of terrifyingly awesome about this, isn't there? We'll soon be seeing this all over the place, won't we? "I am become Kai Power Tools, destroyer of graphic design originality".
posted by Chitownfats at 8:05 AM on April 2, 2016 [5 favorites]


Conan got fingered.
posted by nubs at 8:14 AM on April 2, 2016


These are really extremely cool.
posted by Slarty Bartfast at 8:31 AM on April 2, 2016


I fucking love this!
posted by Abehammerb Lincoln at 8:31 AM on April 2, 2016


I don't know whether to laugh or cry about the fact that impressionism has been automated.
posted by Chocolate Pickle at 8:31 AM on April 2, 2016 [5 favorites]


This is one of the coolest things I've ever seen. I can't believe this is possible!
posted by showbiz_liz at 8:33 AM on April 2, 2016 [3 favorites]


Did they ever make an idiot proof desktop version of Google deep dreaming? I want to do this but I want to put like no effort into it. I'd even pay monies for it
posted by ian1977 at 8:45 AM on April 2, 2016 [1 favorite]


What's the quickest way to get this going on Windows?
posted by Gyan at 9:00 AM on April 2, 2016


We have reached Peak Mashup.
posted by Greg_Ace at 9:24 AM on April 2, 2016 [3 favorites]


where are the instructions?
posted by signal at 9:30 AM on April 2, 2016


I've just submitted a handful of images to that Deepart.io site linked above. There's about a 2 hour wait at the moment, not terrible.
posted by showbiz_liz at 9:31 AM on April 2, 2016


I tried two more:

Magritte's The Empire of Light mixed with The Creation of Adam. The algorithm builds the human figures out of the natural and structural forms, which means a God and Adam made out of clouds and trees.

Pope Innocent by Bacon mixed with Procession to Calvary by Bruegel. I'm really impressed with how it's understood the basic style of the Bacon portrait, and applied it to a pretty faithful rendition of Bruegel's.

I'm actually really impressed with this. A lot more so than Google's dog-engine, even.
posted by codacorolla at 9:35 AM on April 2, 2016 [4 favorites]


Does this remind anyone else of a vocoder? In typical vocoder usage, the large-scale structure comes from one source (e.g. human speech), while the small-scale structure comes from another (e.g. a musical instrument's tone).
posted by benito.strauss at 10:00 AM on April 2, 2016 [15 favorites]
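The vocoder analogy can be sketched in a few lines of numpy. This is a toy illustration of the analogy only, not how neural-style actually works: the slow amplitude envelope (large-scale structure) of one signal is imposed on the fine-grained texture (small-scale structure) of another. All signal names and parameters here are invented for the example.

```python
import numpy as np

# Toy vocoder-style combination: envelope of one signal, texture of another.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)

speech_like = np.sin(2 * np.pi * 3 * t)    # slow structure (the "content")
instrument = rng.standard_normal(t.size)   # fine structure (the "carrier")

# Large-scale envelope of the first signal: moving average of its magnitude.
win = np.ones(200) / 200
envelope = np.convolve(np.abs(speech_like), win, mode="same")

# Combined signal: shaped like the first, textured like the second.
combined = envelope * instrument
```

Real vocoders do this per frequency band rather than on the raw waveform, but the division of labor between the two sources is the same.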


What's the quickest way to get this going on Windows?

Well the first step would be this. The second step, i'd wait for someone to post a guide on setting this app up. I haven't gone through it yet, but when i do i'll post back. It's gonna involve dumping those files from github onto said USB stick, though.

The app itself looks not-super-simple to set up. It's some sort of client-server thing where you're running a background process and the frontend. I was hoping for something to the effect of "compile it and like ./ostagram -parameter -moreparameters file1.jpg file2.jpg" but noooope.

So yea, i'm gonna try the digital equivalent of a "hold my beer" to get this going with all the instructions being in russian in a few hours, and i'll post back.

So how do I do these as a regular computer user? Saw the linux instructions, but....Yeah.

Did you see english ones somewhere? Because i'm 110% game to do this right now, and i'd be willing to come back and post easier to follow starting-from-windows ones.
posted by emptythought at 10:16 AM on April 2, 2016 [5 favorites]


So for reference, the Ostagram bits (the things in Russian on github) are for the web interface. The command-line tool that it is running on the backend (and which is doing the actual work of synthesizing images) is here: https://github.com/jcjohnson/neural-style

This gives a command line that can be fed images without needing the web frontend (and without needing to be able to read Russian). The instructions included in the repository (INSTALL.md) are pretty straightforward (mostly "copy this into the terminal and hit enter") and worked for me in an Ubuntu 15.10 environment with no issue.
posted by fader at 10:38 AM on April 2, 2016 [6 favorites]
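For anyone who wants to see the shape of that command line before committing to a Linux setup, here is a minimal sketch, assembled in Python so each piece is visible. The flag names are the ones documented in the jcjohnson/neural-style README; the image file names are placeholders, and `-gpu -1` selects CPU mode.

```python
import shlex

# Flags per the jcjohnson/neural-style README; file names are placeholders.
cmd = [
    "th", "neural_style.lua",
    "-content_image", "photo.jpg",
    "-style_image", "great_wave.jpg",
    "-output_image", "out.png",
    "-image_size", "512",   # size of the output's long edge, in pixels
    "-gpu", "-1",           # -1 = CPU mode; 0 = first GPU
]
print(shlex.join(cmd))
# subprocess.run(cmd, check=True)  # requires Torch and the repo checked out
```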




Lots of people have been posting Docker images in case you don't want to poop dependencies all over your system.
posted by RobotVoodooPower at 10:53 AM on April 2, 2016 [3 favorites]


Oh, this turned out pretty cool - Dali plus the guys from The Flop House
posted by showbiz_liz at 11:13 AM on April 2, 2016 [2 favorites]


There's also a Twitter bot trained on famous artists that you can tweet images to.
posted by RobotVoodooPower at 11:28 AM on April 2, 2016


Deepart's about page:

> "University of Tübingen has a pending patent application for the Neural Art technology."

Well... shit. I was getting all excited after seeing openly accessible research papers and reports under permissive CC licenses and software under the MIT License.
posted by wwwwolf at 12:19 PM on April 2, 2016


This is fucking incredible. And weirdly disconcerting at the same time. I'm blown away.
posted by three easy payments and one complicated payment at 12:22 PM on April 2, 2016 [1 favorite]


Oh, how I love machine translation: "and add a bit of art, worth about ten million men's socks..."
posted by mmahaffie at 1:04 PM on April 2, 2016 [1 favorite]


Getting it to run on Windows seems to be difficult. The author recommends "virtualizing with something like VirtualBox".
posted by Gyan at 1:14 PM on April 2, 2016 [1 favorite]


As an update, the least-annoying method i've found to run it on windows is to install docker and then use the aforementioned docker image.

This is still not for the faint of heart. It's like a 30+ minute project with a fast internet connection.

It's the fewest steps in a row i've seen (since getting the dependencies set up on my ubuntu live USB started to get annoying), and it's still quite a few and fairly technical. It was also generally kind of laggy throughout on my core i5 half decent work laptop that's ~2 years old.
posted by emptythought at 2:49 PM on April 2, 2016 [1 favorite]


I don't know whether to laugh or cry about the fact that impressionism has been automated.
Depends. Are you a realist painter? If so, then I think it's entirely appropriate to wave a camera over your head, laugh maniacally, and scream, "You see?! You bastards see how it feels!?!"
posted by roystgnr at 7:07 PM on April 2, 2016 [9 favorites]


I don't know if I just got unlucky, something went wrong with my submission, or the queue just blew up in conjunction with this post, but it's been a whole lot more than the predicted 106 minutes since I submitted to deepart.io and I still haven't been turned into some kind of horrid trypophobic cortex-sierpinski hybrid.
posted by cortex at 7:11 PM on April 2, 2016


Some of these mashups are quite remarkable and I find that I react to them the way I would to artworks. I wonder what this means - perhaps this sort of conflation is what art is; that even representational art (if it's any good) is messing with our perceptions at some deep level.
posted by Joe in Australia at 7:29 PM on April 2, 2016 [6 favorites]


This is totally cool!

I installed it on my laptop and got it running, but I couldn't get the GPU accelerated stuff working, so it took about 3 hours to do one image. Reminds me of the old days of raytracing on my Amiga 500 and setting it going before I went to school in the morning only to find it still going when I got home in the afternoon. Can you imagine what this stuff will be like in 20 years when neural-based image manipulations and even animations can be done in real-time?

If anyone has difficulty with dependencies and stuff I recommend following the INSTALL.md instructions instead of the instructions on the main github page for it.

Also, if anyone was as confused as I was after reading the abstract and expecting a tl;dr, one thing that wasn't clear to me from it that is mentioned in the body of the paper is that they are using a neural network that has already been trained on "this is a picture of an X" type object recognition tasks and feeding these images into it, and part of the "oh wow isn't that interesting" is that training for that task produces a net that can perform this rather different one. I won't pretend to understand it properly but as far as I can tell the algorithm is basically "find an image that produces similar responses (detected features at particular locations) in the neural net as image A, but has similar correlations between features (image style) as image B".
posted by L.P. Hatecraft at 10:59 PM on April 2, 2016 [1 favorite]
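That one-line summary maps directly onto the two losses in the paper: a content loss that matches features at particular locations, and a style loss that matches correlations between feature channels (the Gram matrix). Here is a minimal numpy sketch, with random arrays standing in for real CNN feature maps; the shapes and weight are illustrative, not the actual network's.

```python
import numpy as np

# Random arrays stand in for CNN feature maps (channels x height x width).
rng = np.random.default_rng(1)
C, H, W = 16, 8, 8
feats_a = rng.standard_normal((C, H, W))   # features of the "content" image
feats_b = rng.standard_normal((C, H, W))   # features of the "style" image
feats_x = rng.standard_normal((C, H, W))   # features of the candidate image

def content_loss(f, target):
    # Match detected features at particular locations.
    return np.mean((f - target) ** 2)

def gram(f):
    # Correlations between feature channels, with locations discarded:
    # the paper's "style" representation.
    flat = f.reshape(f.shape[0], -1)
    return flat @ flat.T / flat.shape[1]

def style_loss(f, target):
    return np.mean((gram(f) - gram(target)) ** 2)

# The synthesized image is optimized to minimize a weighted sum of both.
total = content_loss(feats_x, feats_a) + 100.0 * style_loss(feats_x, feats_b)
```

Because the Gram matrix throws away spatial positions, the style loss can be satisfied by textures placed anywhere in the image, which is why the brushwork migrates freely while the content layout stays put.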


I'm now intensely curious about if anyone has tried similar techniques with audio. It could be a crazily powerful sound design tool if done well!
posted by Jon Mitchell at 12:07 AM on April 3, 2016 [4 favorites]


I wonder if that would be like auditory hallucinations? You could train it on people speaking, or singing, and feed it the rumbling of a storm, or the soft susurration of a summer swarm. Feed it images and sound!
And upside down in air were towers
Tolling reminiscent bells, that kept the hours
And voices singing out of empty cisterns and exhausted wells.

posted by Joe in Australia at 1:01 AM on April 3, 2016 [2 favorites]


Set up a minimal Ubuntu VM and installed neural-style to try. It dies unless the VM has at least 4GB of memory. Without a GPU the render took about 3 hours for a 512x289 image from a 1024x579 source.

I present "Munch's Lazy Dog"
posted by cmfletcher at 6:27 AM on April 3, 2016


I submitted two identical images just to see what happens.
posted by ian1977 at 7:48 AM on April 3, 2016


Do you want an image depicting your demise at the hands of newly awakened thinking machines? Cause that's how you get an image depicting your demise at the hands of newly awakened thinking machines!

but I'm curious so please post the results.
posted by cmfletcher at 8:08 AM on April 3, 2016


I will in 310 minutes.
posted by ian1977 at 8:08 AM on April 3, 2016




So I uploaded 2 identical images and the end result was that it didn't change the image at all.
posted by ian1977 at 8:57 AM on April 3, 2016 [1 favorite]


Also, is EVERY image on deepart.io a selfie????????
posted by ian1977 at 8:59 AM on April 3, 2016


Also, is EVERY image on deepart.io a selfie????????

They're not really posting every upload in the 'latest images' section, or else it would be chock full of all the Hannibal screencaps I've been tossing into it
posted by showbiz_liz at 9:01 AM on April 3, 2016 [4 favorites]


I'm just going to say that for anyone using Windows (or probably OSX) who is thinking of trying a VM approach with Docker, being a power user isn't really enough. I stumbled through a bunch of obstacles and succeeded in rendering one image overnight. Today I wanted to try again using my own images (not the included ones) and I've run into "not enough memory" errors. If you want to render an image above 600 pixels, which I did, you won't have enough memory with 8 GB, 5 allotted to the VM. Even then, it is possible (I haven't determined yet) that even modestly large source JPGs also cause it to fail regardless of output size.

And if you're content with 600 pixels, you'll encounter a bunch of Docker command line confusion (why does one "container" end up having numerous instances? I want one instance!). Just figuring out how to get the input and output files into the Docker Container, even if everything else succeeds, is a whole thing without a background in Linux/virtual machines/whatever. At the end of the day you end up having your computer run for hours to spit out a 600 pixel image if you met the challenge!

I think you have to run Linux natively for this to really be worth it. You'll have the resources of the full system behind you (memory, possibly graphics card).
posted by sylvanshine at 7:46 PM on April 3, 2016 [1 favorite]


Also, is EVERY image on deepart.io a selfie????????

Mine: Guy without a pearl earring

I tried using it in the docker container on an EC2 instance, but either something was wrong with it or it just can't be done in a t2.micro. Kept crashing with an error message that wasn't helpful.
posted by ctmf at 10:39 PM on April 3, 2016 [1 favorite]


Request: Someone try submitting an image together with the same image flipped L/R.
posted by Bugbread at 1:01 AM on April 4, 2016 [1 favorite]


aramaic: "I guess I expected artists to be one of the last groups rendered obsolete, rather than the first.

Welp. Off to the protein strippers, everyone!"
This reminds me of how in a lot of ways the original goal of electronic music was to eliminate the musician standing between the composer and music, but all it really succeeded in doing was to expand what it means to be a musician while providing newer stranger instruments. I guess in an underlying kind of way, the essence of what a musician does is to listen with style - actively shaping what they hear in order to produce a sound that is also pleasing or moving to others. In the same way, how could this make painting obsolete any more than photography did? Sure photography displaced quite a bit of painting, but only the kind of painting that would depict a person or a landscape or a taxonomy specimen like the New York phone book depicts New York, rather than the kind of painting that depicts New York like Nighthawks does. Thus, I'm not sure how this new technique could have any more revolutionary of an effect on producing paintings beyond democratizing the creation of art away from the luxury of affording the time to dedicate to acquiring the skills and coordination to paint.

Even if we could train a computer to search for, find, and juxtapose images in an arresting way all on its own, we would still need an artist to see them with style and shape what output is worthy of attention.
posted by Blasdelb at 5:10 AM on April 4, 2016 [4 favorites]


Bugbread, here you go:

Picasso

This was with only 500 iterations, and I think it would iterate back to the original quite closely...
posted by moonface at 8:00 AM on April 4, 2016 [1 favorite]


Thanks, moonface. Surprising, yet not surprising.
posted by Bugbread at 3:56 PM on April 4, 2016 [1 favorite]


I think you have to run Linux natively for this to really be worth it. You'll have the resources of the full system behind you (memory, possibly graphics card).

I don't know if this is true, though it depends on your system. I've got it running on Ubuntu in VirtualBox and I'm having loads of fun with it. I basically followed the instructions on the Github page to the letter, although I blew everything away and had to start over when I experimented with loading the Nvidia Cuda drivers, which pretty much broke everything. Is it a little slow? Sure, but I've got it set up to save every 100 iterations so that I can preview the image right away instead of waiting for 1000. By 200-300 iterations, I can usually tell if it's going in a direction I'm interested in or not, and that only takes a few minutes. I have half a mind to pull a bunch of these intermediate preview images into Photoshop and use them to generate pulsing animated GIFs that show the image willing itself in and out of being in a loop. I suspect that might look pretty rad in at least some cases.

Now this is a video-editing machine. I have a pretty speedy six-core processor (i7 5820 overclocked to 4.3 GHz), and I have 32 GB of RAM under the hood, so I give the VM half of that and it purrs along nicely. (To my surprise, VirtualBox seems to devote all six cores to the VM if I'm not doing anything else on my machine.) 256px is pretty fast and 512px is surprisingly tolerable. 1024px takes a couple of hours so you don't want to wait around for it, and I haven't tried anything above that yet. But I just write a sample bash script to cue up a few jobs and let it run overnight and I've got like a half-dozen pretty pictures to look at in the morning. Was it a hassle to set up? A bit. But I had to do it twice, and it didn't even take up all of my Sunday afternoon. And I am decidedly NOT a Linux guy, although my first Internet experiences were on a Unix host back in 1994 so I guess I'm ahead of the curve in that sense. (I know my way around emacs, but I still had to do a Google search to remember exactly how to write a bash script, if that gives you any idea of my tech level.)

Now, I do have a GTX 970 plugged in for games and Adobe video editing so I'd like to dual-boot Ubuntu to do this the right way eventually. But I do worry that could get out of hand and cause me real pain if I screw up somehow and can no longer get into Windows. And, honestly, the VM experience isn't bad.
posted by Mothlight at 6:54 PM on April 5, 2016 [1 favorite]
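The overnight-batch idea Mothlight describes can be sketched like this (in Python rather than bash, so the command lists are explicit). The file names are placeholders; the flags are the ones in the jcjohnson/neural-style README, with `-save_iter 100` producing the every-100-iterations preview images mentioned above.

```python
import shlex

# Queue several style images against one content image for an overnight run.
# File names are placeholders; flags per the neural-style README.
content = "portrait.jpg"
styles = ["starry_night.jpg", "great_wave.jpg", "guernica.jpg"]

jobs = []
for style in styles:
    out = style.rsplit(".", 1)[0] + "_result.png"
    jobs.append([
        "th", "neural_style.lua",
        "-content_image", content,
        "-style_image", style,
        "-output_image", out,
        "-image_size", "512",
        "-save_iter", "100",   # write preview images every 100 iterations
        "-gpu", "-1",          # CPU mode
    ])

for job in jobs:
    print(shlex.join(job))
```

Pipe the printed lines into `bash`, or swap the `print` for `subprocess.run(job, check=True)` to run the jobs one after another directly.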


No, it's not true so much if you have 4 times the memory of a standard gaming machine and a pretty darn good processor. I stand by my warning to us bog-standard entry-level power user types with average machines who might try to do this in a VM.

Meanwhile I just got inspired to try Ubuntu, and I'm stuck as usual with Linux stuff at around the point I thought would take 15 minutes: having no network with a bog-standard Gigabyte motherboard. Some things never change.

All I really want is to see a high-res image from neural-style. Has anyone come across one? Like a couple thousand pixels?
posted by sylvanshine at 9:04 PM on April 6, 2016


I've not tried anything larger than 512 pixels, but 1,000 iterations typically took c. 2.5 hours using CPU mode, on a work-issue Dell XPS with a 2.6 GHz Core i7 and 8GB of RAM (with 4GB allocated to Ubuntu on a VM).

It's hardly a video-editing rig, and the GPU is puny, and I still hit a couple of crashes, so I had to run at 256 pixels.

But I got it running with practically no Linux experience, so I'd say it was a worthwhile waste of a Sunday afternoon.
posted by moonface at 1:30 AM on April 7, 2016


It's sort of cool and creepy in the same way haha. Some of them look like art though!
posted by Krislarsson at 7:20 AM on April 7, 2016


Is anyone here running AMD?

So I've tried Ubuntu 15, Ubuntu 14, Fedora -- in all cases the install-deps script fails. On Ubuntu 15 one place it fails is ipython. On both it fails on:

Package python-software-properties is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
software-properties-common


To which a single line is dedicated -- I commented it out. Then I get


Package gnuplot is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Unable to locate package libqt4-core
E: Unable to locate package libqt4-gui
E: Unable to locate package libzmq3-dev
E: Package 'gnuplot' has no installation candidate
E: Unable to locate package gnuplot-x11


I know this isn't a support forum, but the thread is dead and I'd just like to say I'm not a goddamn idiot in response to my poorly received comment that this is heavy treading for average joes. My understanding of Linux is that it does this stuff to people all the time, and somehow I'm the only one with difficulty? One person's "worthwhile afternoon" becomes my "20 wasted hours". If anyone would be willing to help me offline I'd appreciate it.
posted by sylvanshine at 9:11 PM on April 7, 2016


Is anyone still playing with this? I am having fun combining deepart.io with deepdreamgenerator.com.

Check out Wind in the Willows of Earthly Delights
posted by ian1977 at 11:17 AM on April 9, 2016 [3 favorites]


Ian1977, I'm not going to say that that is the best thing ever, because I already used that expression for a plate of Chongqing chicken, but I have to say: It's right up there. I wish there were a HD version.
posted by Joe in Australia at 5:12 PM on April 9, 2016 [1 favorite]


Hi Joe, the one I linked was the medium res version. I sprung 20€ for it. Though I don't think you see the medium res version? Or maybe you do. The high res version is a staggering amount of money but so so tempting.


These are low res but still kinda cool...

https://deepart.io/result/556618/

https://deepart.io/result/556590/

https://deepart.io/result/556581/

https://deepart.io/result/556571/

https://deepart.io/result/555330/ (Garden of Earthly Delights in the style of SNAKE MOUNTAIN!!!)
posted by ian1977 at 5:47 PM on April 9, 2016


OK, I just got back from a conference and finally had a chance to fool around with running Ubuntu natively on my Windows machine to try and GPU-accelerate this stuff. I don't have anything miraculous to report, but I'm going to drop some notes in here for anyone who is still following this thread or happens upon it in Google later. To be clear, I'm simply following the instructions in the neural-style installation guide.

1) As far as sylvanshine's issues with the missing packages, I had the exact same problem with the latest version of Ubuntu. When I backed down to the same version I was using on my virtual machine (14.04 LTS), everything installed with no hassle. I was even able to get the Nvidia drivers running for CUDA acceleration, which I was really psyched about. So I would definitely recommend only trying this with Ubuntu 14.04 unless you have better troubleshooting chops than I do and can work through those dependency issues.

2) GPU acceleration is crazy good. I have a GeForce GTX 970 with 4 GB of video memory (more like 3, I guess, based on how Nvidia segmented the RAM) and neural-style just screams at the default settings. Seriously, a 512px image takes just a couple of minutes. I'm super-happy with that performance.

3) Image resolution using the GPU is throttled by RAM constraints. I had NOT realized that switching to GPU acceleration would mean I would have access to only my 4 precious GB of video RAM and not to the full 32 GB of system RAM. So pretty much any resolution above the default 512px results in an out-of-memory error.

The obvious solution is to use the GPU to prototype a lot of different style ideas quickly, then return to CPU mode to take advantage of the (much) higher available memory -- but neural-style does not seem to scale linearly, so images that look great at 512px can end up not as good at 1024px, and vice versa. (With 32 GB of system RAM, I have not successfully rendered any images at more than 1024px.) I've experimented a little but haven't figured out an easy way to use the neural-style scale settings to maintain results at different pixel resolutions. Meanwhile, rendering at high resolutions in native Linux feels even more painfully slow than it does in my VM. ¯\_(ツ)_/¯
posted by Mothlight at 8:09 AM on April 23, 2016 [2 favorites]




This thread has been archived and is closed to new comments