Sea-thru Brings Clarity to Underwater Photos
November 13, 2019 4:44 PM

Photographing coral reefs is difficult because even shallow water selectively absorbs and scatters light at different wavelengths, washing out colors. This makes it hard to use computer vision and machine-learning algorithms to identify, count and classify species in underwater images. But a new algorithm called Sea-thru, developed by engineer and oceanographer Derya Akkaynak, removes from an image the visual distortion caused by water. The effects could be far-reaching for biologists who need to see true colors underneath the surface.
posted by Lexica (11 comments total) 26 users marked this as a favorite
 
My brain translated the title to "Sea-thru Brings Clarity to Underwear Photos." Reader, I was disappointed.
posted by Abehammerb Lincoln at 4:53 PM on November 13, 2019 [6 favorites]


Neat!
posted by clawsoon at 4:59 PM on November 13, 2019


I found this really confusing when I read about it on petapixel.

It appears to be an automated white balance tool, but every underwater photog shoots in raw and tweaks WB in post; you have to. Many people dive with a white card or piece of plastic specifically to assist with this, so I find their claims grandiose, to say the least.

Unless the algorithm is applying different corrections to different parts of the frame (and I'm pretty confident it's not, as all the samples I've seen show uniform balance), it seems like it's automating a process that currently takes less than five seconds in any one of dozens of photo editors.
posted by smoke at 5:19 PM on November 13, 2019 [5 favorites]


My thoughts are exactly as smoke put it. This is already a THING, so why is Sea-thru remarkable in any way?
posted by blaneyphoto at 5:36 PM on November 13, 2019


Smoke: that’s what their paper is about: the distortion and backscatter vary with range and albedo, so physically correct results can’t be produced by simply applying a whole-scene filter.

Within the linked article, what should’ve tipped you off that there was more than a Photoshop filter involved is the aside that depth information is required for this to work. Given that the method appears to involve swimming towards the scene, I’m betting they’re taking multiple samples to produce BRDFs (or something similar in implementation and intent).
posted by Ryvar at 5:41 PM on November 13, 2019 [12 favorites]


I read that depth information is required; however, light drop-off follows a well-known scale and, again, is something easily compensated for manually. And their samples have only corrected drop-off at one distance (presumably whatever was set) rather than multiple.

However, reading the paper, this is all about automation and essentially offering up the tool to help with automated scanning of large image sets, which makes much more sense. They even say:
Since Sea-thru is the first algorithm to use the revised underwater image formation model and has the advantage of having a range map, we do not test its performance against single image color reconstruction methods that also try to estimate the range/transmission.
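For anyone wondering what that "revised model" actually is: roughly, each pixel is the true color attenuated by a wavelength- and range-dependent factor, plus range-dependent backscatter, which is why a single whole-frame correction can't be physically right. A crude sketch with made-up coefficients (the actual paper estimates these from the image itself, and the attenuation coefficient additionally varies with range):

```python
import numpy as np

def restore_color(img, z, beta_D, beta_B, B_inf):
    """Crude sketch of the revised underwater image formation model:
        I_c = J_c * exp(-beta_D_c * z) + B_inf_c * (1 - exp(-beta_B_c * z))
    img:  HxWx3 linear RGB image in [0, 1]
    z:    HxW range map in meters (camera-to-scene distance)
    beta_D, beta_B, B_inf: per-channel coefficients (placeholders here,
        not how Sea-thru actually obtains them).
    """
    z = z[..., np.newaxis]                             # broadcast over channels
    backscatter = B_inf * (1.0 - np.exp(-beta_B * z))  # veiling light
    J = (img - backscatter) * np.exp(beta_D * z)       # undo attenuation
    return np.clip(J, 0.0, 1.0)

# Illustrative call with invented numbers, not values from the paper:
# restored = restore_color(img, range_map,
#                          beta_D=np.array([0.40, 0.12, 0.07]),
#                          beta_B=np.array([0.35, 0.15, 0.10]),
#                          B_inf=np.array([0.05, 0.20, 0.30]))
```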
So essentially they've created a tool that can do away with manual white balance and is totally automated.

I'm still not really sold on the benefits, as really they've just substituted manual WB with manual depth data. Typically, to "automate" a process like this, since most sets are taken at one depth and at similar distances, you would WB-correct the first photo and then bulk-apply the changes to the rest of your photo set, which will take you 90% of the way there.
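In spirit that workflow is just one set of per-channel gains reused across the whole set; real editors do it on the raw data with proper color management, but as a rough sketch (card location and frame list are made up):

```python
import numpy as np

def gains_from_card(ref_img, card_box):
    """Per-channel gains that make a white/grey card neutral.
    ref_img: HxWx3 linear RGB in [0, 1]; card_box: (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = card_box
    card = ref_img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    return card.mean() / card

def bulk_white_balance(frames, gains):
    """Apply the same gains to every frame in the set."""
    return [np.clip(f * gains, 0.0, 1.0) for f in frames]

# gains = gains_from_card(frames[0], card_box=(100, 160, 200, 260))
# corrected = bulk_white_balance(frames, gains)
```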

I still think the benefits are being overstated.
posted by smoke at 5:58 PM on November 13, 2019 [2 favorites]


There's a lot more than just fixing white balance to get good underwater photos; ultimately you'd need to combine actual depth in the water with subject distance in the image to know how much color absorption went on, in order to correct the color properly for every pixel. They mention Dark Channel Prior, the estimator that Lightroom's Dehaze function uses to estimate depth and know how much fog to remove, but I haven't read all of the paper to know what they actually do.
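For anyone who hasn't run into it: the dark channel prior is the observation that in a haze-free patch at least one color channel is nearly black, so wherever the local minimum across channels is bright you're probably looking through haze or backscatter. A toy version, nothing like Lightroom's actual implementation:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB, then a local minimum filter.
    A brighter dark channel roughly means more haze/backscatter."""
    h, w, _ = img.shape
    min_rgb = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    out = np.empty_like(min_rgb)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def rough_transmission(img, omega=0.95, airlight=1.0):
    """Crude dehazing-style transmission estimate: t = 1 - omega * dark / A."""
    return 1.0 - omega * dark_channel(img) / airlight
```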

This probably works better at shallow depth where all the red hasn't been absorbed.

Personally, a bit of dehaze + white balance adjustment can do some magic for my pictures, but some algorithm attuned to the specifics of water as a medium can surely do better. Get me a backscatter cleaner while you're at it; sometimes water is murky and there's just no way to light the scene without getting some.
posted by WaterAndPixels at 6:28 PM on November 13, 2019


I think the writeup is at fault because it’s advertising this as a solution to a problem that was already being handled adequately by professional photographers with Photoshop, and possibly handled *better* because they have license to bias the results toward the more aesthetically pleasing.

But this isn’t about photography, not really; it’s about producing a physically correct “clean” source for the scene, partly (according to the authors) for training up deep-learning algorithms that do image analysis on underwater images.

Beyond that, for those of us who work with physically-based realtime rendering it’s important to have physically-correct clean base textures for the various properties used in your lighting model. For instance, when Epic was doing photogrammetric asset gathering for the Kite Demo, photogrammetry + physically-based rendering + dynamic global illumination meant they needed a process to generate source textures for the assets *before* real-world lighting got involved (@14:55 if your browser doesn’t support timestamp bookmarking). They had to remove shadows from the source textures because the self-shadowing ought to be generated by the lighting model.

In the future, for similar methods applied to underwater scenes you’d want to use something like Sea-thru to generate clean diffuse/normal/reflectivity/metallicity maps. I don’t know if the authors considered that purpose, but if you’re a tech artist or graphics programmer making a game or VR experience that primarily takes place underwater about 10~15 years from now, you’ll probably be referencing this paper when establishing your asset pipeline during pre-production.
posted by Ryvar at 6:32 PM on November 13, 2019 [3 favorites]


This is neat, but what happens when you go deeper? As you descend, light gets filtered out starting from red until there's nothing left but blue around 100' or so. Professional photographers get around this by bringing their own light. Without it, that chip chart will just look like a bunch of light blue squares. There can't possibly be enough bit depth on the camera to separate that cleanly with her algorithm.
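Back-of-the-envelope, with ballpark clear-ocean attenuation coefficients rather than measured values, the falloff looks something like this:

```python
import numpy as np

# Beer-Lambert falloff: fraction of surface light left after a given path
# length, per channel. Coefficients are illustrative ballpark figures for
# clear ocean water, not measurements.
beta = {'red': 0.35, 'green': 0.07, 'blue': 0.03}   # roughly, in 1/m

for path_m in (3, 10, 30):
    remaining = {c: np.exp(-b * path_m) for c, b in beta.items()}
    print(path_m, 'm:', {c: f'{v:.2%}' for c, v in remaining.items()})

# At ~30 m (about 100') the red channel is down to a few thousandths of a
# percent of its surface intensity, which is why the chart reads as blue
# squares without strobes.
```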

It looks super cool on shallow stuff, though!
posted by higginba at 9:16 PM on November 13, 2019


In the thread about this on Reddit, the inventor of the algorithm clarifies that the color chart is not necessary for the program to work, and links to her published paper for anyone who may want a little more background.
posted by DSime at 7:08 AM on November 14, 2019 [3 favorites]


As a layperson, the question I have for those who know is: when you are done with the white balancing or whatever this thing does, do the resulting pictures look like what you see when you are diving?
posted by Pembquist at 3:29 PM on November 14, 2019




This thread has been archived and is closed to new comments