Capturing a scene at a trillion frames per second
August 18, 2012 4:58 PM

 
I think the page you linked to is incorrect. It seems that their technique is based on samples from repeated occurrences, and could not capture one-time physical events such as a bullet going through an apple. However, it's a pretty cool technique for visualizing the way light interacts with objects on very short timescales.
posted by demiurge at 5:11 PM on August 18, 2012 [1 favorite]


It's like seeing the "expanding sphere of light as a shell" as Iain M. Banks describes in some of his SF stories, only at very small scale instead of over light-years.
posted by localroger at 5:29 PM on August 18, 2012


The bullet-time potential is astounding. I can't wait for the new Matrix prequel, Matrix: A Sunbeam Moves One Quarter of a Millimeter.
posted by Uppity Pigeon #2 at 5:30 PM on August 18, 2012 [1 favorite]


Matrix: A Sunbeam Moves One Quarter of a Millimeter

Each scene filmed in only 1080 takes
posted by localroger at 5:32 PM on August 18, 2012


2000 takes, actually. The sunbeam was coked out of its mind and kept fucking up.
posted by Uppity Pigeon #2 at 5:33 PM on August 18, 2012 [4 favorites]


In the YouTube clip, they state that a video of a bullet going through an apple at those timescales would take a year to screen, so I'd imagine they could probably still get a few exposures of that one-time physical event within the resolution of the camera.
posted by knoxg at 5:34 PM on August 18, 2012


Kind of a double.
posted by howfar at 5:55 PM on August 18, 2012 [1 favorite]


It's difficult to imagine how this could be possible if it were recording a single instance, which, as demiurge notes, it's not.

I hope they have a lot of hard drive space.
posted by JHarris at 6:01 PM on August 18, 2012


Sheeit, don't give those Inception assholes any ideas.
posted by jimmythefish at 6:18 PM on August 18, 2012


First I was like mind=blown, then I realized what was going on and I was like mind=!blown. But it is still cool. It is pretty much sonar with light, which is still cool as hell. Plus I like that guy's outfit; I wonder if I could rock one of those.
posted by Ad hominem at 6:27 PM on August 18, 2012


It is pretty much sonar with light

You mean, like "vision"?
posted by ShutterBun at 6:48 PM on August 18, 2012 [4 favorites]


This has been linked a couple of times. They do compose things from lots of individual instances, but that doesn't really mean they aren't doing what they claim to do, or that it isn't what you'd think of as capturing video at a trillion FPS.

Obviously I don't know exactly how their setup works, but from what I remember they're capturing one scanline at a time. But they really are taking an 'image' of light as it exists for a very short timespan. It's just that they're taking lots and lots of scanlines of lots and lots of bursts of light.

I assume that each scanline in each frame is probably from a different actual light blast. But I'm not sure.

But anyway, what you are seeing is what you would actually see if you did have a camera that could record a single instance, and it is made from individual exposures that short.

The other problem with recording a single instance is that you just wouldn't have many photons to collect. And of course, even if your sensor worked, you'd need to dump the data extremely quickly. If one frame were 1 megapixel raw, you'd need to record data at a rate of about an exabyte (8 exabits) per second for a black and white, 8-bit image.
posted by delmoi at 6:51 PM on August 18, 2012
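
A quick sanity check on that data rate, as a back-of-the-envelope Python sketch (the 1-megapixel, 8-bit, trillion-frames-per-second figures are delmoi's hypothetical, not the actual camera):

```python
# Hypothetical single-shot sensor: 1 megapixel, 8 bits (1 byte) per pixel,
# read out at one trillion frames per second.
pixels_per_frame = 1_000_000
bytes_per_pixel = 1                      # 8-bit black-and-white
frames_per_second = 10**12

bytes_per_second = pixels_per_frame * bytes_per_pixel * frames_per_second
print(bytes_per_second / 10**18, "exabytes per second")     # 1.0
print(bytes_per_second * 8 / 10**18, "exabits per second")  # 8.0
```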


You mean, like "vision"?

Well yes. I suppose sonar is pretty much hearing though.
posted by Ad hominem at 7:01 PM on August 18, 2012


So...how do we train ourselves to see the world in more FPS? Because that's the key to immortality, as far as I can tell...
posted by 3FLryan at 7:10 PM on August 18, 2012


I think it is possible to do "slow motion" as we understand it.

Have a trillion CCDs in sequence.
posted by Ad hominem at 7:12 PM on August 18, 2012


You know you are a science nerd if you audibly gasped when you saw that first pulse of light move down the bottle.

Fuck TED naysayers, that was awesome.
posted by JimmyJames at 7:22 PM on August 18, 2012 [2 favorites]


First I was like mind=blown, then I realized what was going on and I was like mind=!blown

You mean you had the normal TED response?
posted by Chekhovian at 7:42 PM on August 18, 2012 [2 favorites]


I assume that each scanline in each frame is probably from a different actual light blast. But I'm not sure.

I don't think so. They are just capturing the very small number of photons they can from the same pulse over and over and then adding them all up to make one image. It's essentially the same as just capturing an image over a given longer period of time t (the photons are essentially added up as they strike the sensor), but in this case they capture n separate exposures of length t/n each and sum them.
posted by ssg at 8:36 PM on August 18, 2012
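
A toy illustration of ssg's summing argument, with made-up photon rates and simple Poisson counting (this is just the statistics, not the actual apparatus):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up mean photon counts per pixel for one short exposure of a
# perfectly repeatable scene (a tiny 4-pixel "image").
mean_photons = np.array([0.02, 0.5, 3.0, 0.1])

n = 10_000  # number of repeated pulses / short exposures

# Summing n noisy short exposures of the same repeatable event...
short_exposures = rng.poisson(mean_photons, size=(n, mean_photons.size))
summed = short_exposures.sum(axis=0)

# ...is statistically equivalent to one exposure that is n times longer.
one_long_exposure = rng.poisson(mean_photons * n)

print(summed)             # both come out near n * mean_photons
print(one_long_exposure)
```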


This is actually pretty cool. When I was a little kid, I used to sit in the room and stare at the ceiling fan, trying to see the individual fan blades whir by, and for some reason this would make me think of what it would be like if you could see the wavefront of light moving right after you turned the light on.

So now you can (sort of, in specifically designed scenarios). As long as the situation is repeatable, this technique should work.

The physics of light are actually very complicated and interesting, and I like the idea of being able to actually see it happen. Just because many TED talks are bullshit glazed in fine chocolate for the .01% doesn't mean they all are.
posted by !Jim at 9:17 PM on August 18, 2012 [1 favorite]


I'm probably misunderstanding the argument, but I don't see how this is anything other than advertised. He/they created a "light bullet" (something like a laser blast in the Star Wars universe) with a length of 1 lightpicosecond (assuming that's a word.)

So we have a beam of light about 1mm long shooting through a Coke bottle, and we're watching it at 1/trillionth speed, to see what it does.

There are no "scan lines", that is just a blast of light with a "thickness" of 1mm (due to the light being turned on and off in 1 billionth of a second)

But since the image is so dark, they're just multiplying the results of several bursts in order to make it visible to us, right?
posted by ShutterBun at 9:18 PM on August 18, 2012


Their sensor only captures one scan line, and they need to repeat the experiment over and over again for each scanline, summing up enough data to overcome noise. The coke bottle image took hours to create. Their technique only works with a totally static scene. At Siggraph they talked about using it to see people trapped in a building and it was pretty silly.
posted by scose at 10:27 PM on August 18, 2012


This is a trillion-frames-per-second camera in the same sense that an auto mechanic's strobe offers the viewer thousands-of-frames-per-second vision. Useful, but needlessly exaggerated.
posted by 0rison at 10:46 PM on August 18, 2012 [1 favorite]


There are no "scan lines", that is just a blast of light with a "thickness" of 1mm (due to the light being turned on and off in 1 billionth of a second)
The scan line is in the camera. The only capture one 'line' of the image at once, as opposed to capturing an entire frame in one shot. So in other words, they capture different parts of the image/video at different times and compose them together.

The same thing happens in each shot, but what's captured by the camera is different for each one.

So, you would only need 1000 sensors to create a 1000x1000 image. Since the sensors they use for this are undoubtedly expensive, that probably saves money.
This is a trillion-frames-per-second camera in the same sense that an auto mechanic's strobe offers the viewer thousands-of-frames-per-second vision.
That is not correct. This camera actually allows you to see the photons from the strobe moving through space. If you just look at something illuminated with a strobe, you can't tell 'when' the photons hit, just that they did.
posted by delmoi at 11:26 PM on August 18, 2012
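
For what it's worth, the bookkeeping delmoi describes would look roughly like this sketch (the dimensions and the compose helper are hypothetical, just to show how per-scanline recordings could be stacked into full frames):

```python
import numpy as np

# Hypothetical dimensions, purely for illustration.
n_scanlines = 600    # vertical positions, one captured per camera orientation
n_frames = 480       # time bins recorded for each scanline
line_width = 672     # horizontal pixels in a single scan line

def compose(per_scanline_movies):
    """Stack per-scanline '1D movies' into one 2D movie.

    per_scanline_movies[y] has shape (n_frames, line_width): the recording
    made with the camera aimed at scan line y. Because every repetition of
    the pulse is timed identically, frame t of every scanline corresponds
    to the same instant after the pulse fires.
    """
    return np.stack(per_scanline_movies, axis=1)  # (n_frames, n_scanlines, line_width)

# Stand-in data in place of real recordings:
fake_recordings = [np.zeros((n_frames, line_width)) for _ in range(n_scanlines)]
video = compose(fake_recordings)
print(video.shape)  # (480, 600, 672)
```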


That is not correct. This camera actually allows you to see the photons from the strobe moving through space. If you just look at something illuminated with a strobe, you can't tell 'when' the photons hit, just that they did.

I think 0rison means that the auto mechanic’s strobe similarly uses samples over a long period of time to compose the appearance of slow-motion from several separate motions. My understanding of this camera is that instead of showing a single light burst in positions t0, t1, t2, etc., they record several successive bursts over time and stitch them back together into an animation. In other words, we’re watching a coke bottle bombarded with thousands of pulses, made to look like a single slow pulse. The frequency of the pulses and the recording device are slightly out of phase, like an old movie of a wagon wheel.
posted by migurski at 11:44 PM on August 18, 2012 [1 favorite]


The scan line is in the camera. They only capture one 'line' of the image at once, as opposed to capturing an entire frame in one shot. So in other words, they capture different parts of the image/video at different times and compose them together.

Is it specifically described that way somewhere? I got the exact opposite impression from the original video. It would take 1,000 passes to create a single frame, somehow spaced at the correct timing so as to be reassembled into a coherent image. So that means about 1,000,000 passes to create a 30-second sequence. And really, why would a single line of pixels be possible, while a matrix of 1,000 lines by 1,000 (captured simultaneously) would be impractical?

There is a section of the video where he specifically shows a bunch of full frames and explains that they use an additive process to amplify the light. I didn't see anything about "we scan one line at a time, then piece it all back together to create an entire image."
posted by ShutterBun at 12:40 AM on August 19, 2012 [1 favorite]


I'm unable to watch the videos atm, but here's the description of the Femto Photography process, from the MIT Media Lab site linked off the original post:
The new technique, which we call Femto Photography, consists of femtosecond laser illumination, picosecond-accurate detectors and mathematical reconstruction techniques. Our light source is a Titanium Sapphire laser that emits pulses at regular intervals every ~13 nanoseconds. These pulses illuminate the scene, and also trigger our picosecond accurate streak tube which captures the light returned from the scene. The streak camera has a reasonable field of view in horizontal direction but very narrow (roughly equivalent to one scan line) in vertical dimension. At every recording, we can only record a '1D movie' of this narrow field of view. In the movie, we record roughly 480 frames and each frame has a roughly 1.71 picosecond exposure time. Through a system of mirrors, we orient the view of the camera towards different parts of the object and capture a movie for each view. We maintain a fixed delay between the laser pulse and our movie start time. Finally, our algorithm uses this captured data to compose a single 2D movie of roughly 480 frames each with an effective exposure time of 1.71 picoseconds.
posted by zhwj at 4:31 AM on August 19, 2012
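
Plugging the figures from that description into a quick calculation (the 1.71 ps, ~13 ns, and 480-frame numbers come from the quote above; the speed of light is the only thing added):

```python
c = 299_792_458         # speed of light, m/s
exposure = 1.71e-12     # effective exposure per frame, seconds
pulse_interval = 13e-9  # ~13 nanoseconds between laser pulses
n_frames = 480          # frames in each recorded '1D movie'

# Distance light travels during one effective exposure:
print(round(c * exposure * 1000, 2), "mm per frame")        # ~0.51 mm

# Length of one recorded movie versus the wait for the next pulse:
print(round(n_frames * exposure * 1e9, 2), "ns recorded")   # ~0.82 ns
print(round(pulse_interval * 1e9, 1), "ns between pulses")  # 13.0 ns
```

So each frame freezes light over about half a millimeter of travel, and the whole sub-nanosecond event is long over before the next pulse arrives.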


When he's talking about the seeing-around-corners demo, I thought it was really interesting how the whole thing seemed to be set up like a sales pitch to the military -- a running human around a corner, graphics all done in green on black -- such that the whole time I was thinking "man, this is cool but it's obviously going to be used to make us better at killing people, isn't it?". And then when the guy talks about applications he's all talking about colonoscopies and firefighting and doesn't mention military applications whatsoever.

I wonder if this genuinely is not meant to be a primarily military technology (though the applications are so obvious that it seems natural for the DoD to pick it up after it gets a little bit more mature) or if this guy just doesn't really like talking about that because he just wants to play with his cameras and try to make the world better and doesn't want to think about possible negative impacts, or maybe he just knows his audience and knows that people don't really like to hear about how new technologies are often first used in war. Just a thought I had.
posted by Scientist at 1:13 PM on August 19, 2012


Scientist, it's probably presented as a military technology because one of the prime sources of funding for Media Lab professors (and any applied physics / computer science professors) is DoD grants. My guess would be that they created a demo for the DoD and incorporated pieces of that demo into the presentation you just saw, along with pieces of their demos for laypeople and other funding sources.

-Also a scientist
posted by kellybird at 11:34 PM on August 19, 2012


I thought it was really interesting how the whole thing seemed to be set up like a sales pitch to the military

Not to be a "TED naysayer" god forbid, but after a couple weeks I unsubbed the TED talks from my iPod because of that ego-driven entrepreneurial sales-pitch mode heard in most of the talks (and the breathless gasp-and-cheer-on-a-hairtrigger worshipful audiences). It was like listening to televangelism for venture capitalism.
posted by aught at 8:09 AM on August 20, 2012



