"Enhance 15 to 23."
January 6, 2015 8:34 AM

 
Wow. Add this to the ability to capture sound from filming a potato chip bag, and I guess all hail our future Panopticon Overlords... Why does it seem like the coolest things about living in the future are simultaneously the scariest?
posted by Mchelly at 8:41 AM on January 6, 2015 [13 favorites]


Note to self: invent contact lenses with painted-on pictures of middle fingers.
posted by Etrigan at 8:43 AM on January 6, 2015 [23 favorites]


Wow. An entire year has passed since publication (December 2013) and I'd never heard of this before.
posted by RedOrGreen at 8:46 AM on January 6, 2015


I remember watching that scene with my step-dad, and his annoyance was that the image enhancer understood "Wait a minute, go right, stop." as "do this immediately, without waiting a minute."
posted by Molesome at 8:46 AM on January 6, 2015 [9 favorites]


Does anyone remember a sketch where this played out ad absurdum? Like, reflecting back and forth across the street, around corners, and so on?
posted by stinkfoot at 8:48 AM on January 6, 2015


It took a 39-megapixel camera 2 feet from the guy's face to be able to do this, so it's not quite CSI "zoom in and enhance" levels of magic.

Yet.
posted by Pogo_Fuzzybutt at 8:48 AM on January 6, 2015 [7 favorites]


They used a $30,000 medium-format camera with a $5,000 lens for this shoot. Picture a normal DSLR, then double the sensor size, then double the sensor size again, with a lens to match.

I would have liked them to simulate the physical limit of image resolution for various lens/sensor pairs and present the equivalent "best-possible" images. It's entirely possible that the diffraction limit of a cell phone camera (or even a consumer-grade camera) would rule out capturing images like this. I suspect it's simply not possible given that their setup is probably already very close to the absolute maximum possible resolution for that format.
posted by 0xFCAF at 8:52 AM on January 6, 2015 [7 favorites]
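
A back-of-envelope check of that suspicion, using the Rayleigh criterion. This is a sketch with assumed numbers: 550 nm green light, a subject 2 m away, and guessed aperture diameters (~4 mm for a phone lens, ~50 mm for the kind of medium-format lens used in the study).

```python
wavelength = 550e-9   # green light (m)
distance = 2.0        # camera-to-subject distance (m)

for name, aperture_m in [("cellphone", 0.004), ("medium format", 0.050)]:
    theta = 1.22 * wavelength / aperture_m   # Rayleigh limit: angular resolution (rad)
    feature = theta * distance               # smallest resolvable detail at 2 m
    print(f"{name:>13}: ~{feature * 1e6:.0f} um per resolvable detail")
```

That works out to roughly 340 um per resolvable detail for the phone and 27 um for the big lens. A face reflected in a centimeter-wide cornea is only a millimeter or two across, so the phone gets a handful of resolution elements on it while the medium-format lens gets dozens, which is the sense in which this setup is already near the physical limit.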


Keep in mind that this was an in-lab finding, using 39 megapixel cameras and perfect lighting. It will probably be most useful in spycraft rather than police investigations after the fact.
posted by pwnguin at 8:52 AM on January 6, 2015 [1 favorite]


I was taught this trick as a way to learn studio lighting: find a shot you like, then check the eye reflections to see where the lights are. Works pretty well for that level of detail.

You can try looking for reflected faces in your own photos, but for people at more ordinary distances from each other, under less carefully controlled lighting conditions, with less impressive camera sensors, the results are going to be disappointing.
posted by echo target at 8:54 AM on January 6, 2015 [2 favorites]


"Pfft. Amateurs."
~The NSA
posted by Thorzdad at 8:55 AM on January 6, 2015 [6 favorites]


Does anyone remember a sketch where this played out ad absurdum? Like, reflecting back and forth across the street, around corners, and so on?

You're probably thinking of this from Red Dwarf.
posted by Thing at 9:03 AM on January 6, 2015 [10 favorites]


Well the bad news is that 2015 looks nothing like Back to the Future 2. The good news is that 2019 will likely resemble the grim, dystopian future predicted by Blade Runner.
posted by Nevin at 9:04 AM on January 6, 2015 [9 favorites]


I think stinkfoot's thinking of this:

https://www.youtube.com/watch?v=gF_qQYrCcns
posted by I-baLL at 9:09 AM on January 6, 2015 [5 favorites]


Presently you can buy a full-frame mirrorless digital camera for about $1500 (no lens). It may not give results as crisp as this study's, but it can probably get results.
posted by stbalbach at 9:09 AM on January 6, 2015



"Pfft. Amateurs."
~The NSA
posted by Thorzdad at 8:55 AM on January 6
Knowing this, it should come as no surprise that we have learned from the Snowden leaks that the National Security Agency (NSA) stores pictures at a massive scale and tries to find faces inside of them. Their ‘Wellspring’ program checks emails and other pieces of communication and shows them when it thinks there is a passport photo inside of them. One of the technologies the NSA uses for this feat is made by Pittsburgh Pattern Recognition (‘PittPatt’), now owned by Google. We underestimate how much a company like Google is already part of the military industrial complex.
Ai WeiWei is Living in Our Future
posted by infini at 9:11 AM on January 6, 2015 [9 favorites]


Thing, I-baLL - perfect! I was actually thinking of both of those!
posted by stinkfoot at 9:24 AM on January 6, 2015


We underestimate how much a company like Google is already part of the military industrial complex.

Ugh yeah it's so easy to forget that the tech industry is just another arm of the military-industrial complex. Hell, Apple manufactures missile guidance chips (via the PA Semi acquisition, if memory serves).
posted by frijole at 9:29 AM on January 6, 2015 [2 favorites]


The grim dystopian world not envisioned by Blade Runner:

Deckard: "Give me a hard copy right there."

Computer: "Calling Domino's Pizza…"
posted by CosmicRayCharles at 9:33 AM on January 6, 2015 [10 favorites]


Apropos of nothing, it's high time to dispatch our attack ships to Orion.
posted by Iridic at 9:39 AM on January 6, 2015 [4 favorites]


CosmicRayCharles: "The grim dystopian world not envisioned by Blade Runner:

Deckard: "Give me a hard copy right there."
"

Computer: "Looking for hard papis near you."
posted by boo_radley at 9:39 AM on January 6, 2015 [17 favorites]


I'm pretty sure that reflected image is Wil Wheaton.

He's everywhere!
posted by blurker at 9:45 AM on January 6, 2015


0xFCAF: I suspect it's simply not possible given that their setup is probably already very close to the absolute maximum possible resolution for that format.
Correct. (optical engineer here) It will never be possible to retrieve face-in-eyeball images from a cellphone-sized lens (barring someone changing the Planck constant, of course).
posted by IAmBroom at 9:53 AM on January 6, 2015 [5 favorites]


How close are we to the theoretical limit of camera lens/sensor technology in those small devices?
posted by tonycpsu at 9:56 AM on January 6, 2015


Not only is it a super-hi-res camera, the lighting is also kind of nuts. They had extra strobes pointing at their bystanders.


A few years ago at Burning Man I took a series of photos of art structures reflected in my campmates' eyes. But I was up way close with a macro lens.

One thing I realized is that the location of the virtual images reflected in the eyes tends to be about half an inch into the person's head. So unless you're getting real close and doing it macro like I was, if the person's in focus, pretty much everything reflected in their eye will be in focus.
posted by aubilenon at 9:59 AM on January 6, 2015
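
A rough depth-of-field check of that last point, with assumed numbers (an 85 mm portrait lens at f/4 focused on a face 2 m away, and the standard 0.03 mm full-frame circle of confusion):

```python
f, N, s, c = 0.085, 4.0, 2.0, 0.00003   # focal length, f-number, focus distance, CoC (m)
H = f * f / (N * c) + f                 # hyperfocal distance, ~60 m
near = H * s / (H + s)                  # ~1.94 m
far = H * s / (H - s)                   # ~2.07 m
print(f"in focus from {near:.2f} m to {far:.2f} m")
```

That ~13 cm of depth of field easily covers a virtual image sitting half an inch behind the cornea, so the reflection stays sharp whenever the eye itself does.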


My interpretation of the Blade Runner scene was always that the machine was doing crazy 3-D holographic stuff, not just zooming into a flat image. It looks around obstacles, etc. The assumption being that photos in the future are somehow different to our photos.
posted by memebake at 9:59 AM on January 6, 2015 [3 favorites]


IAmBroom: "Correct. (optical engineer here) It will never be possible to retrieve face-in-eyeball images from a cellphone-sized lens (barring someone changing the Planck constant, of course)."

Is there any chance of stuff like virtual aperture cameras, taking several pictures over time, etc., getting around that to some extent? (Note, I don't know the theory well enough, but I understand that such techniques can in some cases circumvent limitations of optical systems).
posted by Joakim Ziegler at 10:02 AM on January 6, 2015


It will never be possible to retrieve face-in-eyeball images from a cellphone-sized lens

Might it be possible if you had multiple images to work with? I seem to recall some nifty algorithm they could use to correlate multiple lo-res images so as to yield surprisingly hi-res results. Difficult, of course, with faces because they wouldn't normally stay still while multiple images were taken.
posted by yoink at 10:03 AM on January 6, 2015
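
The trick yoink is describing is multi-frame super-resolution. Here's a minimal numpy sketch of the shift-and-add flavor, cheating by assuming the inter-frame shifts are already known (real algorithms estimate them by correlating the frames, as yoink says):

```python
import numpy as np

rng = np.random.default_rng(0)
F = 4                              # output grid is F times finer per axis
H = W = 64                         # low-res frame size
hi = np.zeros((H * F, W * F))      # synthetic high-res ground truth
hi[100:140, 100:104] = 1.0         # a thin bar, smeared away at low resolution

frames, shifts = [], []
for _ in range(16):
    dy, dx = rng.integers(0, F, size=2)                 # sub-pixel camera jitter
    shifted = np.roll(np.roll(hi, dy, axis=0), dx, axis=1)
    lo = shifted.reshape(H, F, W, F).mean(axis=(1, 3))  # average-downsample
    frames.append(lo)
    shifts.append((dy, dx))

# Shift-and-add: paste each low-res frame onto the fine grid at its known
# offset and average. The jitter spreads the samples over different
# sub-pixel phases, so the stack holds detail no single frame has.
acc = np.zeros_like(hi)
for lo, (dy, dx) in zip(frames, shifts):
    up = np.repeat(np.repeat(lo, F, axis=0), F, axis=1)
    acc += np.roll(np.roll(up, -dy, axis=0), -dx, axis=1)
recon = acc / len(frames)
```

Note this buys back resolution lost to coarse pixel sampling, not resolution lost in the optics; IAmBroom's reply below explains why the diffraction limit stays put.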


If your subjects held still and you were using a decent optical system and the lighting conditions were right and the eyes were in focus to begin with, then you might be able to use those techniques to improve the resolution enough.

At the point that everyone is cooperating that much, it would probably be easier to ask them to simply turn the camera around.
posted by 0xFCAF at 10:08 AM on January 6, 2015 [1 favorite]


Ah, but can the algorithm remove the beard and Groucho glasses to see who the real terrorist sophomore is in the photo?
posted by sammyo at 10:11 AM on January 6, 2015 [1 favorite]


The grim dystopian world not envisioned by Blade Runner:

Deckard: "Give me a hard copy right there."
Computer: "Calling Domino's Pizza…"


Metalbeard: "Be ye disabling of yon shield..."
posted by Foosnark at 10:34 AM on January 6, 2015 [2 favorites]


Lemme see for fun... Assume a subject 2m (6ft) away from the camera, a 1cm viewable eyeball size (the visible portion), a 2.5cm radius of curvature for the eyeball's reflecting surface (0.025m; we'll need this to approximate the fisheye effect of the reflection on the eyeball surface), and the face we're looking for is also 2m away (say, the photographer).

The image of the photographer on the eye surface appears at:
1/f = 1/d1 + 1/d2
f = -r/2 = -0.025/2 = -0.0125m for a convex mirror
d2 = 1/(1/f - 1/d1) = 1/(-80 - 0.5) = -0.012m (behind the mirror surface, inside the eye) - but this is the distance to the teeny, tiny guy in the eyeball reflection, not to the full-size person 2m away.
Magnification = |d2|/d1 = 0.012/2 = 0.006 = 1/167
The "eyeball face" (the photographer's reflected face) appears to be 167 times smaller than the "portrait face" (the face in the normal picture we all see). (Someone check me; it's bloody easy to get confused here.)

A quick test with my own photo suggests a face image should be about 12x16 pixels to be somewhat recognizable. (Humans are really, really good at facial recognition.)

That means the "portrait face" needs to be 167*(12x16) = 2000x2700 pixels. A 5-Mpxl camera shooting a tight facial portrait of a subject might have a marginally recognizable image of the photographer in the eyeball.

Today's 20 Mpxl cameras are good enough for bust-sized portraits (doubling each dimension) to be analyzed.

So, it's hardly something that a store camera or a casual group photograph is going to reveal, but it seems like it would be feasible (you know: unlikely but possible sometimes) to retrieve faces from some closeup portraits' eyeballs.

[Update: found a source on the web that states "a face should be 17 pixels wide for recognition or 40 pixels for identification", where "recognition" means "Yep, that's definitely a human face, not a hammer", and "identification" means "That's Bob's face". I used a width of 12, about 3.3x too low, so the pixel count is 11x too low: 6600x8900 - a 55 Mpxl camera. Still, ballpark feasible for modern cameras - just unlikely to work, as I noted.]
posted by IAmBroom at 10:44 AM on January 6, 2015 [4 favorites]
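
The same estimate as a short script (hedged the same way, and treating the 2.5 cm figure as the mirror's radius of curvature, which is what the f = -r/2 step above effectively does):

```python
d1 = 2.0                          # eye-to-photographer distance (m)
f = -0.025 / 2                    # convex-mirror focal length, f = -r/2 (m)
d2 = 1.0 / (1.0 / f - 1.0 / d1)   # mirror equation: 1/f = 1/d1 + 1/d2
mag = abs(d2) / d1                # ~0.006: the reflection is ~1/160th life size
print(f"magnification ~ 1/{1 / mag:.0f}")

# Pixels needed across the normal "portrait face" for the eyeball face to
# reach a given width, and the tight-portrait megapixels that implies
# (assuming the same ~3:4 face aspect ratio as the 12x16 estimate above):
for eye_px in (12, 40):           # "somewhat recognizable" vs "identification"
    portrait_px = eye_px / mag
    mpx = portrait_px * (portrait_px * 4 / 3) / 1e6
    print(f"{eye_px:2d} px eyeball face -> {portrait_px:.0f} px portrait face"
          f" -> ~{mpx:.0f} Mpx camera")
```

It lands on the same ~5 Mpxl and ~55 Mpxl figures, give or take the rounding (1/161 here vs 1/167 above).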


...barring someone changing the Planck constant.

Special Circumstances paging IAmBroom to the white courtesy phone.
posted by digitalprimate at 10:47 AM on January 6, 2015


One of the proposed uses for this process, by the way, is recognizing perps (or locations) in CP cases, where you would, presumably, often have high-resolution cameras and well-lit rooms. The fact that it wouldn't work with shaky hand-held cellphone shots doesn't eliminate the possibility of it being useful.
posted by yoink at 10:57 AM on January 6, 2015 [1 favorite]


digitalprimate: ...barring someone changing the Planck constant.

Special Circumstances paging IAmBroom to the white courtesy phone.
There's no contradiction. The words you grabbed referred specifically to "a cellphone-sized lens". You're never going to get 55Mpxl of information from a cellphone-sized lens.
posted by IAmBroom at 11:28 AM on January 6, 2015 [1 favorite]


Keep in mind that this was an in-lab finding, using 39 megapixel cameras and perfect lighting.

At the speed with which prosumer cameras are improving, I wouldn't rule out anything just yet. I haven't looked at the specs on any cameras released in the last year or so, but I know that the last generation had ISO ranges that could capture images in near-total darkness. As this technology improves, and maybe adds in some features from thermal and FLIR imaging, both of which have dropped in price significantly, I would think that something like this could happen a lot sooner than we think.

I have a digital camera from 12 years ago and another from 2; the improvements in those ten years are like going from a hang-glider to an F-16. I can't imagine what it'll look like ten years from now.

Lenses will be the failure point before anything else, but even those have seen some interesting advancements.
posted by quin at 11:31 AM on January 6, 2015


Joakim Ziegler: Is there any chance of stuff like virtual aperture cameras, taking several pictures over time, etc., getting arond that to some extent? (Note, I don't know the theory well enough, but I understand that such techniques can in some cases circumvent limitations of optical systems).
Great question! Yes... but the pictures would have to retain phase information. There are some special cameras that do this (the ones that can adjust the focus after you take the photo; basically these are taking holographic pictures that are reconstructed in 2D). But in general, more pictures won't help.
yoink:
Might it be possible if you had multiple images to work with? I seem to recall some nifty algorithm they could use to correlate multiple lo-res images so as to yield surprisingly hi-res results. Difficult, of course, with faces because they wouldn't normally stay still while multiple images were taken.
That can overcome problems like camera jitter, moving obstructions (atmospheric effects over long ranges), and other situational problems, but the resolution limit is a hard constraint of the Universe. Without phase information, the resolution limit stays. (To be clear: this isn't a black-and-white resolution limit like your camera's pixel count imposes. It's more like the "resolution" of your eyes - a person with 20/100 vision might make out a letter or two on the 20/50 line, but the 20/20 line is hopelessly blurry. To make a face recognizable, dozens of those easy-to-get "letters" have to occur in a row - by analogy - and that isn't going to happen.)
posted by IAmBroom at 11:37 AM on January 6, 2015


It will never be possible to retrieve face-in-eyeball images from a cellphone-sized lens (barring someone changing the Planck constant, of course).
...
There's no contradiction. The words you grabbed referred specifically to "a cellphone-sized lens". You're never going to get 55Mpxl of information from a cellphone-sized lens.


Not that it's your job to teach me this, but now I'm curious — how does this follow from the Planck constant?
posted by nebulawindphone at 11:38 AM on January 6, 2015


IAmBroom: You're never going to get 55Mpxl of information from a cellphone-sized lens.
... which doesn't mean they won't try to sell you one someday... but the reviews will be quick and decisive.
posted by IAmBroom at 11:39 AM on January 6, 2015 [1 favorite]


nebulawindphone: Not that it's your job to teach me this, but now I'm curious — how does this follow from the Planck constant?
The resolution limit is a duality pair implied by the Heisenberg Uncertainty Principle, which states that DeltaE * DeltaT = h, where h is the Planck constant. (I might be missing a pi/2 factor here.) It's more physics than I can write off the top of my head, but eventually you can prove that the resolution limit (DeltaAngle) is related to f/# (F-number, effectively in the range of 1.2-50 for normally available lenses*) and Diameter of the lens divided by photon wavelength (which is 3 to 8 microns).

* I worked on the refinement of one sub-1 F/# lens, 0.95 IIRC. It had a depth-of-field of a few microns, a field-of-view of a few millimeters, operated in the extreme UV/near X-ray spectrum, and... was expensive.
odinsdream: I assume what they're getting at is that the size of the pixel detector on a sensor is limited by the Planck length.
Nah, the Planck length is way tinier than atoms. Interesting guess, though.
posted by IAmBroom at 11:51 AM on January 6, 2015 [1 favorite]
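
One concrete consequence of that f/# relation, with assumed round numbers (550 nm light, ~1 um pixels at f/2 for a phone-ish sensor vs ~5 um pixels at f/8 for a DSLR-ish one):

```python
wavelength = 550e-9                # green light (m)
for fnum, pitch in [(2.0, 1.0e-6), (8.0, 5.0e-6)]:   # (f-number, pixel pitch)
    airy = 2.44 * wavelength * fnum    # Airy disk diameter on the sensor (m)
    print(f"f/{fnum:.0f}: Airy disk ~{airy * 1e6:.1f} um vs {pitch * 1e6:.0f} um pixels")
```

At f/2 the ~2.7 um diffraction blur already spans a few phone pixels, which is why piling more megapixels behind a tiny lens stops helping: the optics, not the sensor, is the ceiling.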


quin: Keep in mind that this was an in-lab finding, using 39 megapixel cameras and perfect lighting.
Hehe... I got so caught up in my little numbers game, I didn't even realize I basically described their setup. 39Mpxl is approximately 55Mpxl - given that the definition of "recognizable" is somewhat fluid. Are you trying to pick the right guy out of a lineup of 5, 50, or 50000? And they only tested with 3 judges, for that matter. Small sampling = large possible variance.

Anyway, I'm happy my math came close.
quin: I have a digital camera from 12 years ago, and another from 2, the improvements in those ten years are like going from a hang-glider to an F-16. I can't imagine what it'll look like ten years from now.
The difference between the Canon 5D and the Canon 5D Mark II is twofold:
1. They added video (which is pretty much a firmware addon, NBD), and
2. They improved the high-ISO (night time) performance.

They achieved 2 by cheating: they just added noise suppression to remove single-pixel blips, which are often shot noise. That shot noise is caused by photons radiated from the detector (because it's not at absolute zero) and then absorbed by another part of the detector. It also happens in all the electronics, with photon energy levels dislodging electrons, creating noise. It doesn't happen much (or else we'd see things glow in the dark - that is, if our eyeballs were cold enough that they didn't blind us with their own glow), but it matters in systems with a lot of amplification - like low-light photography.

Anyway, my point is: you can fairly easily apply the same filtering they do in post-processing, making the Mark-I and Mark-II identical in still-camera performance. So, I saved a few thou by buying the ancient Mark-I after the shiny Mark-II hit the market.

I guess the point is: we're fairly close to thermally-limited performance, too. To beat this, we either have to chill our detectors (which is what heat-seeking missiles and hi-tech night-vision goggles do), or use bigger lenses (more photons means a single bad one is more clearly noise). The noise only goes down as the square root of detector size along any axis, though - to double sensitivity you need to quadruple sensor size, and that leads to quadruple lens diameters...
posted by IAmBroom at 12:04 PM on January 6, 2015 [2 favorites]
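
A sketch of that kind of single-pixel blip suppression: a generic median-residual filter. (Canon's actual in-camera pipeline isn't public, so this is illustrative only.)

```python
import numpy as np
from scipy.ndimage import median_filter

def suppress_blips(img, k=5.0):
    """Replace pixels that tower over their 3x3 neighborhood median."""
    img = np.asarray(img, dtype=float)
    med = median_filter(img, size=3)
    resid = img - med
    sigma = 1.4826 * np.median(np.abs(resid))   # robust noise level (MAD)
    out = img.copy()
    hot = np.abs(resid) > k * sigma
    out[hot] = med[hot]                         # clamp isolated blips only
    return out
```

Applied in post, this is the "same filtering" move: isolated hot pixels get replaced, while real multi-pixel detail, which the median mostly preserves, passes through.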


eventually you can prove that the resolution limit (DeltaAngle) is related to f/# (F-number, effectively in the range of 1.2-50 for normally available lenses*) and Diameter of the lens divided by photon wavelength

Ah. I was thinking about how what you were saying earlier jibes with pinholes, and the answer I guess is that they have ridiculous F-numbers, assuming you have your sensor a reasonable distance away (which you can't do in a cellphone).

photon wavelength (which is 3 to 8 microns)

That's a typo or unit conversion error or something. Visible light is 300nm - 800nm, which is 0.3 - 0.8 microns.
posted by aubilenon at 12:06 PM on January 6, 2015


Yup, math error: your wavelengths are right. Duh, or else cells (1-10 micron-ish in size) wouldn't be visible in white light microscopes.
posted by IAmBroom at 12:14 PM on January 6, 2015


"I worked on the refinement of one sub-1 F/# lens, 0.95 IIRC. It had a depth-of-field of a few microns, a field-of-view of a few millimeters, operated in the extreme UV/near X-ray spectrum, and... was expensive."

How would you even manufacture something like that?
posted by klangklangston at 6:11 PM on January 6, 2015 [1 favorite]


aubilenon: "lighting is also kind of nuts."

It's not all that crazy, just reproducibly flat. You could get the same effect on an overcast day.
posted by Mitheral at 7:08 PM on January 6, 2015


When they went broad with this, the researchers were disturbed to find that in every case the face reflected in the subject's eyes was the same. The same gaunt, hungry-looking face.
posted by um at 8:34 PM on January 6, 2015 [6 favorites]


klangklangston:
How would you even manufacture something like that?
Heh... 18 optical components, including an asphere, and controlling Zernike errors (shape errors) out to Z37.

I didn't even know anyone cared about Zernikes past about half that high. That's in the "mathematically possible" range of formulas, not the "a few microns lopsided" sort. But there it was in the data: Z37 errors had been driving lens failures... I was stunned.

It wasn't a secret application, BTW. It was for IC photolithography - trying to keep up with Moore's Law.
posted by IAmBroom at 2:43 PM on January 8, 2015
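
For the curious, a minimal sketch of what one of those terms is. (This uses the OSA/ANSI single-index convention for j; the "Z37" above is presumably a fringe or Noll index, but the shape of the math is the same.)

```python
import math
import numpy as np

def osa_to_nm(j):
    """OSA/ANSI single index -> radial order n, azimuthal frequency m."""
    n = int(math.ceil((-3 + math.sqrt(9 + 8 * j)) / 2))
    m = 2 * j - n * (n + 2)
    return n, m

def zernike(j, rho, theta):
    """Evaluate Zernike term Z_j on the unit pupil (rho <= 1)."""
    n, m = osa_to_nm(j)
    a = abs(m)
    R = np.zeros_like(rho, dtype=float)
    for k in range((n - a) // 2 + 1):           # standard radial polynomial
        R += ((-1) ** k * math.factorial(n - k)
              / (math.factorial(k)
                 * math.factorial((n + a) // 2 - k)
                 * math.factorial((n - a) // 2 - k))) * rho ** (n - 2 * k)
    return R * (np.cos(a * theta) if m >= 0 else np.sin(a * theta))

# Sample term 37 over a pupil grid to see how finely it wiggles:
y, x = np.mgrid[-1:1:256j, -1:1:256j]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
surface_error = np.where(rho <= 1, zernike(37, rho, theta), 0.0)
```

A term that high-order oscillates many times across the pupil, which is why it's startling that errors at that order, fractions of a wavelength deep, could drive lens failures.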


IAmBroom: The noise only goes down as square-root of detector size along any axis, though - to double sensitivity you need to quadruple sensor size, and that leads to quadruple lens diameters...
Oops - quadruple sensor size leads to double lens diameters. As I said, easy to make mistakes when you're not super careful with the math.
posted by IAmBroom at 2:44 PM on January 8, 2015


"Heh... 18 optical components, including an asphere, and controlling Zernike errors (shape errors) out to Z37. "

Is the basic lens just borosilicate? And at that size, are you still able to cut them or do you have to mold them?
posted by klangklangston at 3:56 PM on January 8, 2015


I really don't recall; it was 2007. But they were cut. When I found out they were controlling Z37, I asked, "But how?"

The answer was, "Oh, the guys who run the planetary lap have ways of adjusting it."

My assumption of what that meant was: "Let's put it on the lap for another 30 minutes and then remeasure it. Who knows? We might get lucky."
posted by IAmBroom at 8:08 PM on January 8, 2015




This thread has been archived and is closed to new comments