“Enhance... enhance... enhance”
June 22, 2011 10:13 AM

Lightfield cameras capture not just the intensity but also the direction of incoming light, with essentially infinite depth of field, meaning that pictures can be focused after the photo is taken, and low-light conditions do not require a flash. Lightfield images are also “3D” without the need for stereo lenses.
Lightfield (aka “plenoptic”) technology was developed in the ’90s: the first working prototype required dozens of separate cameras and a supercomputer. Professional plenoptic cameras have been available for the past year; the Lytro startup intends to release a consumer-ready shirt-pocket lightfield camera later this year. posted by Bora Horza Gobuchul (52 comments total) 40 users marked this as a favorite
 
I have seen the future of funny cat pictures.
posted by broken wheelchair at 10:17 AM on June 22, 2011 [3 favorites]


Low light? Focused after the fact? This is going to be used for sexy photo times for sure.
posted by dirtylittlecity at 10:22 AM on June 22, 2011


I can haz variable focus?
posted by Hairy Lobster at 10:23 AM on June 22, 2011 [1 favorite]


Oh, thank goodness. I was starting to get worried that my existing camera was good enough for the next few years and I wouldn't have any reason to spend hundreds of dollars on a new one.
posted by gurple at 10:24 AM on June 22, 2011 [18 favorites]


The examples appear to be rigged, or else you're only able to click on two or three pre-defined planes of focus. You could do the same yourself with a good SLR and frame bracketing, adjusting depth of field instead of exposure. I'd be more impressed if they put up the original data files and let you zoom in and focus properly instead of the "obvious" areas of interest.
The Lytro site itself is very thin, and I couldn't find any pricing. I suspect this will be a serious prosumer camera that intros at $2k a pop.
posted by Old'n'Busted at 10:25 AM on June 22, 2011


"Enhance 224 to 176. Enhance, stop. Move in, stop. Pull out, track right, stop. Center in, pull back. Stop. Track 45 right. Stop. Center and stop. Enhance 34 to 36. Pan right and pull back. Stop. Enhance 34 to 46. Pull back. Wait a minute, go right, stop. Enhance 57 to 19. Track 45 left. Stop. Enhance 15 to 23. Give me a hard copy right there."
posted by PROD_TPSL at 10:26 AM on June 22, 2011 [25 favorites]


"By substituting powerful software for many of the internal parts of regular cameras, light field processing introduces new capabilities that were never before possible." What does this mean for power consumption? Or is it all post-processing? As mentioned in this discussion of the Raytrix camera (the professional plenoptic camera available), it'll take a lot of promotions to get these ideas across to consumers, if they're trying to pitch a shirt-pocket camera that requires work outside of that camera.
posted by filthy light thief at 10:26 AM on June 22, 2011


I really hope that these are priced way out of my range - I don't need to be tempted by something JUST out of my price range. Plus that will force me to continue trying to improve my photography skills and take better pictures with what I have.
posted by Slack-a-gogo at 10:31 AM on June 22, 2011


The examples appear to be rigged, or else you're only able to click on two or three pre-defined planes of focus.

No, technology-wise it's totally legit; it just requires a lot of parallel computation. This is essentially the same principle an insect's compound eye works on.
posted by anigbrowl at 10:31 AM on June 22, 2011 [1 favorite]


This sort of recalculating the previous trajectories of incoming light waves is exactly like the hand-wavy explanation I came up with for Superman's telescopic/X-ray vision 20 years ago. Unfortunately, my patent just expired, which is probably why you're starting to see these now.
posted by straight at 10:35 AM on June 22, 2011 [3 favorites]


I very rarely comment in my own threads, but yes, PROD_TPSL, that was pretty much my thought... after a few years, we might finally be free of seeing this in television and movies.
posted by Bora Horza Gobuchul at 10:35 AM on June 22, 2011 [1 favorite]


Old, if you watch the video at Engadget (embedded ad), they clearly state it's a "competitively priced" consumer camera.
posted by CheeseDigestsAll at 10:37 AM on June 22, 2011


Enhance 224 to 176. Enhance, stop.

More like "Enhance cat head 2 times, stop. Now pan little widdums around so I can see all of his adorable head, stop. Now get his paws in the frame. Ok, I'm gonna need a mass email right here."
posted by Brandon Blatcher at 10:38 AM on June 22, 2011 [4 favorites]


There is still no substitute for real cats.
posted by KS at 10:41 AM on June 22, 2011


In the same way that HDR allows you to capture the whole range of light in an image, plenoptic captures the whole focal range. Pretty cool stuff. When they get this combined with an HDR sensor, we'll have cameras that you only need to point in the right direction and click. All other choices about lighting and focus can be made later.
posted by doctor_negative at 10:41 AM on June 22, 2011


Don't you mean: combine this with HDR and 3d cameras and we'll never need to leave the house.
posted by blue_beetle at 10:47 AM on June 22, 2011


The examples appear to be rigged...

I think they're pre-rendered. I suspect that downloading the original image data (tens of MB) and attempting to process it in Flash in a real-time, on-demand way would be ridiculously impractical.

If you expand the pictures to full size, you'll see they are, by camera-hype sample standards, low-res, noisy, and not very sharp. Which I think is in line with what we'd expect from a compound-lens single-sensor design.
posted by Western Infidels at 10:51 AM on June 22, 2011 [1 favorite]


When I see something like this, I tend to imagine a bunch of geeks working up technologies and software so they don't ever have to waste time actually learning how to use a camera. So they almost never have to leave their beloved workstations.
posted by Thorzdad at 10:53 AM on June 22, 2011 [1 favorite]


Much like the 360-degree photos, it will depend on the software that presents the images. Some implementations might be good, many no doubt will suck.
posted by Goofyy at 10:53 AM on June 22, 2011


I'd be more impressed if they put up the original data files and let you zoom in and focus properly instead of the "obvious" areas of interest.

That probably takes serious horsepower, likely not something you can easily do in a Flash applet. I suspect that rendering a frame with a specific focus depth is probably a lot like ray-tracing, meaning hours of computer time per rendering.

As a plus, this might finally be something we can use the upcoming eight- and sixteen-core computers for. :)
posted by Malor at 10:58 AM on June 22, 2011 [1 favorite]


Should make taking pictures of bands in low light a lot easier.
posted by zzazazz at 11:09 AM on June 22, 2011


I wouldn't buy rev 1 of a consumer implementation of this, but this is ultimately a game-changer and in some sense a world-changer. Digital photography changed the means but not the nature of photography. This changes the latter.

A fantastic and genuinely valuable application of Moore's Law, where all that power allows us to do something fundamentally new and useful, as opposed to simply enabling the latest bloated version of your favorite OS to boot in under a week, which is where most of the gains seem to go.
posted by George_Spiggott at 11:12 AM on June 22, 2011 [1 favorite]


Not very sharp, and the depth-of-field control appears not to be all that great. Yeah, you can put anything you want in focus, but not so much with making something 2 inches behind that out of focus.
posted by wierdo at 11:18 AM on June 22, 2011


I'm nine kinds of excited, but this:
Once images are captured, they can be posted to Facebook and shared via any modern Web browser, including mobile devices such as the iPhone.
made me weep.
"Yes, you will be able to make pictures with this new camera."
posted by hat_eater at 11:20 AM on June 22, 2011 [3 favorites]


Wierdo, I assume you're talking about the Flash applet. As several people have pointed out above, that's rigged -- the Flash applet doesn't really contain the light field information or the real postprocessor; it simply switches between two prepared postprocessed images.
posted by George_Spiggott at 11:23 AM on June 22, 2011


I'm nine kinds of excited about this, too (sans the reservation about Facebook; frankly, I don't care what the kids do with their pep rally snapshots and their schoolbus Polaroids of Debbie's knockers...).

I've been really enjoying the process of learning Apple Aperture in tandem with a Micro Four Thirds camera after fifteen years away from photography - it's all the fun and creative flexibility of a darkroom... without the toxic chemical mess. There are plug-ins that sorta-kinda-almost-not-really do this kind of thing, but the concept of being able to shift focus like this is, to me, really ground-breaking. It's the polar opposite of the entire photographic process I was taught as an undergraduate - instead of having to previsualize, and meter, and shoot, and develop, and touch up, and PRAY... I use the camera to gather the information from the scene, and I construct the image I want to show using the software.

I say, price it less than $500, and I'll buy the v1 package right now, without even seeing the device itself.
posted by OneMonkeysUncle at 11:33 AM on June 22, 2011


Ken Rockwell adds his two cents.
posted by tapesonthefloor at 11:36 AM on June 22, 2011


George_Spiggott wrote: Wierdo, I assume you're talking about the Flash applet. As several people have pointed out above, that's rigged -- the Flash applet doesn't really contain the light field information or the real postprocessor; it simply switches between two prepared postprocessed images.

And if you look at their prepared images, you'll note the depth of field is pretty wide. Perhaps the technology can do better, but perhaps it can't. The evidence I've seen so far suggests it can't.
posted by wierdo at 11:38 AM on June 22, 2011


For the technically inclined, here's the thesis. Page 39 details how they built the thing, which is really interesting. They used a lot of off-the-shelf tech and basically taped it all together. With GPU programming and the parallel nature of image processing, building 3D scenes in near real time seems within reach.

Thank you for posting this.
posted by The Power Nap at 11:47 AM on June 22, 2011


Not very sharp, and the depth-of-field control appears not to be all that great.

The intro video at the "Lightfield Cameras" link appears to demonstrate a lightfield picture viewer that allows the user to select at least two depth-of-field options: shallow and infinite. I'd be sort of surprised if there weren't further adjustments between those extremes.
posted by Western Infidels at 11:56 AM on June 22, 2011


this is awesome. thanks for the link. anyone have any idea if this technology provides an option to "put every single thing in the whole frame in focus"?
posted by rude.boy at 12:14 PM on June 22, 2011


Which by implication deals with the depth of field issue nicely.
posted by George_Spiggott at 12:36 PM on June 22, 2011


Don't you mean: combine this with HDR and 3d cameras and we'll never need to leave the house.

Well, let's not get TOO carried away - someone is going to have to physically bring the camera to all those abandoned amusement parks. Oh wait...
posted by FatherDagon at 12:49 PM on June 22, 2011


> after a few years, we might finally be free of seeing this in television and movies.

Ha ha ha ha!

How long have cars been around? How often do cars screech to a halt in movies? How often in Real Life?

Riiiiiight.
posted by mmrtnt at 12:59 PM on June 22, 2011


tapesonthefloor,

Where are Ken Rockwell's comments? Thanks.
posted by lukemeister at 1:06 PM on June 22, 2011


Ken Rockwell on Lytro.
posted by artof.mulata at 1:20 PM on June 22, 2011


I "got it" when I finally watched the video (second link), when he zooms in on the raw image data. Color me very impressed. I think this is one of those things that's remarkably simple in principle (like "insect vision" as anigbrowl mentions) but is only just now becoming practical on a small/immediate/consumer scale due to advances in computational power.
posted by treepour at 1:31 PM on June 22, 2011


"In the same way that HDR allows you to capture whole range of light in an image, plenoptic captures the whole focal range. Pretty cool stuff. When they get this combined with an HDR sensor, we'll have cameras that you only need point in the right direction and click. All other choices about lighting and focus can be made later."

Imagine how cool it would be to use this technology on the next Mars probe! You could focus on each individual rock, one after another. Or biologists could use it in camera traps, meteorologists could study cloud formations... So many possibilities.
posted by Kevin Street at 2:20 PM on June 22, 2011


I've been wondering why we haven't been seeing more stuff like this. IMO cameras have been stuck in "let's make it like a film camera" mode for way too long, especially DSLRs. The Micro Four Thirds cameras are a step in the right direction: just a sensor and a shutter, without the bulky mirror system of an SLR. You're also seeing advancement in low-light performance.

Also, I don't think you can really say there is such a thing as an "HDR" sensor, just a normal sensor that happens to have a good dynamic range. The "HDR effect" pictures you see (most of which look awful, and many of which aren't even HDR to begin with) look that way because they compress the dynamic range into what fits on the monitor (see the sketch at the end of this comment).

Someone did a comparison between "video game" HDR and photo HDR, and they're basically opposites: in games people were deliberately blowing out the highlights to make scenes seem more realistic, rather than less, while photo "HDR" does the reverse. Really they ought to call it "squished dynamic range."

As a plus, this might finally be something we can use the upcoming eight- and sixteen-core computers for. :)

More likely our GPUs. Right now the two quad-core Xeon chips in my desktop can put out about 64 Gflops (PDF). My GPU can do 683 Gflops double precision, or 2.7 teraflops single precision.

Not very sharp, and the depth-of-field control appears not to be all that great. Yeah, you can put anything you want in focus, but not so much with making something 2 inches behind that out of focus.

You should be able to adjust the DOF in real time.
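And here's roughly what that "squishing" looks like, a global Reinhard-style curve in numpy. My own toy version, so treat the details as assumptions; real tone mappers use log-average luminance and local operators:

    import numpy as np

    def tonemap(hdr, key=0.18):
        # Naive global tone map: squeeze unbounded scene radiance
        # into [0, 1) for display. 'key' sets the overall brightness.
        lum = hdr.mean(axis=2)                    # rough per-pixel luminance
        scaled = key * hdr / (lum.mean() + 1e-8)  # normalize average exposure
        return scaled / (1.0 + scaled)            # roll off the highlights

Highlights get compressed instead of clipped, which is exactly the "squeeze it until it fits on the monitor" move.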
posted by delmoi at 2:37 PM on June 22, 2011


Who the heck is Ken Rockwell? He doesn't seem to have a clue about how the technology actually works, despite the fact that that info is widely available. Lytro's slightly misleading PR aside, there's nothing in the CCD that magically senses the direction of incoming light. It's just a bunch of microlenses that redirect light coming from different directions to different parts of the CCD, coupled with some clever algorithms for shuffling, blending and filtering the resulting pixels to recreate the illusion of different planes of focus. Rockwell seems to have jumped to the false conclusion that there's something fundamentally different about the CCD. As far as I can tell, there isn't.

Lightfields as a theoretical entity have been around for a couple of decades now, but capturing them from the real world has always been just out of reach. I, for one, am super excited that this tech is being made available to ordinary people. The artistic possibilities go well beyond this "shoot first, focus later" trick. In the near future, when there are as many lightfield-based image files online as there are JPEGs on Flickr, we are going to be able to do some seriously amazing things with them. Imagine a project like Photosynth or Photofly, only with lightfields instead of flat images... sorry, I'm drooling a little.
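To make the "shuffling, blending and filtering" concrete, here's a toy shift-and-add refocus in numpy. Uncalibrated guesswork on my part: it assumes each microlens covers a tidy U x V block of sensor pixels, and it ignores vignetting, sub-pixel interpolation, and the wrap-around that np.roll does at the edges. But it's the basic idea:

    import numpy as np

    def refocus(raw, U, V, alpha):
        # raw: 2-D plenoptic sensor image, with each microlens covering
        # a U x V block of pixels. alpha picks the synthetic focal plane
        # (0 = the plane the main lens was actually focused on).
        H, W = raw.shape[0] // U, raw.shape[1] // V
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                # one sub-aperture view: the (u, v) pixel under every microlens
                view = raw[u::U, v::V].astype(float)
                # shift the view in proportion to its position in the aperture
                du = int(round(alpha * (u - U / 2)))
                dv = int(round(alpha * (v - V / 2)))
                out += np.roll(view, (du, dv), axis=(0, 1))
        return out / (U * V)

Sweep alpha and the focal plane sweeps through the scene; the "two or three planes" in the Flash demo are presumably just a few values of alpha baked out in advance. Note that the output is H x W, one pixel per microlens, which is exactly the resolution trade-off people are grumbling about.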
posted by otherthings_ at 2:45 PM on June 22, 2011


Is it ok that I am a bit disappointed that this isn't like somehow taking photonic waveforms and interpolating them into some hyperdimensional image?

That's what I thought when I read "light field" and I AM DISAPPOINT!

(not that this isn't cool. Who'd've thought that sports-flick technology would one day let us have 3D portable gaming systems and software-based zooming cameras?)
posted by symbioid at 3:15 PM on June 22, 2011


Damn it, products like this are going to make it impossible for me to make claims about "how totally great that picture would have been had it been in focus, but it wasn't so I deleted it" and my actual mediocrity as a photographer will be revealed to all!

Screw you, technology!
posted by quin at 3:22 PM on June 22, 2011


Ken Rockwell is a photographer who does a ton of camera and lens reviews.

As a result he's mainly interested in getting high-resolution, sharp images. And I think he's right to question whether this camera will deliver on that. As you say, they don't really have any sensor breakthrough - they're not going to get more information per square inch. They're trading spatial resolution for focal information.

For Ken Rockwell, and most of his audience ("serious photographers" generally), it doesn't really matter what capabilities this device has if it doesn't deliver on image quality, which to a large extent means sharpness and high resolution. I very strongly suspect that in this category it will fall short of its competition. But there's a lot of room between "razor sharp 13x19 inch prints" and "worthless," and it seems likely it will land somewhere in the middle. The question is "where?" As he points out, the only reliable way to answer that is to wait until the camera exists and see how it actually performs.

His comparison to the Foveon X3 doesn't really seem apt to me, though. The Foveon promised to improve one aspect of image quality at the expense of others, which meant it wound up being one step forward, one step back. I'm sure that Lytro will have to make some image-quality compromises too, but those are being traded for a step in a whole different direction. So even if they only squeeze 3 megapixels of effective resolution out of this, there could still be a market for it. Though it wouldn't be the niche Ken Rockwell lives in; it may well be successful but still totally irrelevant to him.
posted by aubilenon at 3:53 PM on June 22, 2011 [2 favorites]


> So even if they only squeeze 3 megapixels of effective resolution out of this, there could still be a market for it.

It's like the new Polaroid.
posted by StarmanDXE at 3:58 PM on June 22, 2011


I suspect that rendering a frame with a specific focus depth is probably a lot like ray-tracing, meaning hours of computer time per rendering.

I haven't had time to read very much of the thesis in the FPP (I'm at work), but it describes optimizations over ray-tracing, which is indeed the general principle behind the refocusing algorithm.
There are more efficient ways to estimate the integral for the relatively simple case of refocusing, and optimizations are presented in the next chapter. However, the ray-tracing concept is very general and subsumes any camera configuration. For example, it handles the case of the generalized light field camera model where the microlenses defocus, as discussed in Section 3.5 and Chapter 5, as well as the case where the main lens exhibits significant optical aberrations, as discussed in Chapter 7.
The thesis also describes the limitations of this camera in detail. The two main ones are the resolution of the final image being essentially one pixel per microlens, and diffraction limiting the minimum size of the microlenses. A plenoptic camera needs a sensor quite a bit larger than a conventional camera's for any given resolution.
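Skimming the numbers, if I've got them right: the prototype puts a 296 x 296 microlens array in front of a 16-megapixel sensor, so each microlens covers roughly a 14 x 14 block of pixels and every refocusable output image is 296 x 296, i.e. under 0.1 megapixels. You pay for refocusability with about two orders of magnitude of spatial resolution.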

This technology is cool and all, but for sheer resolution it sure isn't going to beat my lens scavenged from a 1960s Xerox machine jammed in a shoebox.
posted by [expletive deleted] at 4:02 PM on June 22, 2011


I'm a photographer and I have to say that all of this sounds like Science Fiction/Cold Fusion.
posted by spock at 5:45 PM on June 22, 2011


It would be interesting to hook up those images where you click to refocus to some kind of eye-tracking software, so that the image automatically refocuses depending on what you are looking at.

One problem with 3D movies is that the audience has no control over what part of the scene is in focus, so you can't shift your gaze from objects in the foreground to the background and have it come into focus naturally, which sends conflicting signals to your brain. This might offer a partial solution. The ultimate would be a projector or monitor that actually recreates the light-field completely, so that you can look into the scene in depth and focus your eyes on different objects yourself.
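The glue for that is almost trivial if you have otherthings_'s kind of pipeline. A hand-wavy sketch, reusing the toy refocus() from upthread and assuming you have a per-pixel depth estimate (which a light field gives you nearly for free from the parallax between sub-aperture views) plus some hypothetical alpha_for_depth calibration:

    def refocus_at_gaze(raw, U, V, depth_map, gaze_rc, alpha_for_depth):
        # depth_map: H x W per-pixel depth estimate for the scene
        # gaze_rc:   (row, col) tuple reported by the eye tracker
        # alpha_for_depth: hypothetical calibration mapping a depth
        #                  value to the refocus parameter alpha
        d = depth_map[gaze_rc]            # depth under the gaze point
        return refocus(raw, U, V, alpha_for_depth(d))

Run that once per frame and the picture racks focus to wherever you're looking.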
posted by L.P. Hatecraft at 9:40 PM on June 22, 2011


L.P. Hatecraft, Canon used to sell some cameras that used eye tracking in the viewfinder to choose where to autofocus. Some of the Elan line of film SLRs had this. I assume it was annoying or didn't work well or something, which is why it never made the jump to digital.

The only difference is that it's doing it before you take the picture instead of after. :D
posted by aubilenon at 11:52 PM on June 22, 2011 [1 favorite]


I'm not a camera buff, aubilenon, so I don't remember the model, but a friend had a Canon digital camera that did gaze-tracking autofocus a number of years ago.

Back to this camera, though: if you're interested in this you may also be interested in computational imaging, compressive sensing, etc. Lots of research in the past decade or so (mainly either military or medical). This lab has an interesting assortment of projects though.
posted by hattifattener at 10:42 PM on June 23, 2011


Florian Cramer gave a very interesting presentation at the Video Vortex #6 conference about "Bokeh Porn" and the thriving world of DSLR enthusiasts, explaining how their Holy Grail seemed to be the shallowest possible depth of field. A multitude of Vimeo videos emanated from this craze, with virtually no content or storyline other than their focus effects.

And somewhat related: 3D pavement art.
posted by Ideefixe at 4:20 PM on June 24, 2011


You know, if I were a focus-puller, I would be thinking about brushing up on some other camera skills, for the day this makes it into the next generation of digital cinema cameras and focus becomes yet another thing that gets "fixed in post".
posted by fings at 2:06 PM on June 28, 2011




This thread has been archived and is closed to new comments