Robert Falcon Scott was a British explorer who dreamed of being the first person to reach the South Pole. In 1912, he reached the Pole only to learn that his Norwegian rival, Roald Amundsen, had beaten him to it. Caught by freakish weather and a string of bad luck, his entire party died on the return journey.
When diving you’ll notice that the colors start acting up: everything looks more washed out (and more green/blue) from the moment you look down into the water, and everything gets darker as you descend. Red goes first: at a depth of 10 m it already looks brown. To correct this, underwater photographers like myself use red filters or fix it in post-production. The cleverly named “Sea-thru” is an algorithm that can do this automatically:
Sea-thru’s image analysis factors in the physics of light absorption and scattering in the atmosphere, compared with that in the ocean, where the particles that light interacts with are much larger. Then the program effectively reverses image distortion from water pixel by pixel, restoring lost colors.
Here’s a comparison gif:
It comes with one big caveat though:
One caveat is that the process requires distance information to work.
I’m pretty sure you’ll only need to do this once per type of water. With my own GoPro it usually takes me a dive or two to get my settings right, and then I can keep using the same settings (at the same location and under the same conditions).
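To get a feel for why the distance information matters, here’s a toy sketch of depth-dependent color correction using a simple Beer–Lambert attenuation model. This is my own illustration, not Sea-thru’s actual algorithm (the paper also models wavelength-dependent backscatter and spatially varying coefficients), and the attenuation coefficients below are made-up values:

```python
import numpy as np

# Assumed toy model: observed = true * exp(-beta_c * distance) per channel c.
# Hypothetical per-channel attenuation coefficients (1/m); red falls off fastest,
# which is why red is the first color you lose as you descend.
BETA = np.array([0.40, 0.12, 0.08])  # R, G, B

def correct(observed, distance):
    """Invert the attenuation given a per-pixel distance map.

    observed: HxWx3 float image in [0, 1]; distance: HxW map in meters.
    """
    transmission = np.exp(-BETA[None, None, :] * distance[..., None])
    restored = observed / np.clip(transmission, 1e-6, 1.0)
    return np.clip(restored, 0.0, 1.0)

# Example: a mid-gray scene seen through 10 m of water loses most of its red...
scene = np.full((2, 2, 3), 0.5)
underwater = scene * np.exp(-BETA * 10.0)  # simulate the water's attenuation
# ...and knowing the distance lets us undo that loss exactly (in this toy model).
recovered = correct(underwater, np.full((2, 2), 10.0))
```

Without the distance map there’s no way to know how much of each channel was absorbed, which is exactly the caveat above.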
Nice research by Simon Niklaus, Long Mai, Jimei Yang and Feng Liu:
In this paper, we introduce a framework that synthesizes the 3D Ken Burns effect from a single image, supporting both a fully automatic mode and an interactive mode with the user controlling the camera. Our framework first leverages a depth prediction pipeline, which estimates scene depth that is suitable for view synthesis tasks.
You can see the result in action starting at 0:30.
“Raising the Flag on Iwo Jima” and “Raising a Flag over the Reichstag” are similarly iconic photos from World War II. They’re both beloved images of victory, and both were taken after the fighting in significant battles had ended. But the Russian one is different: parts of it were altered.
What happens when we tap into the power of artificial intelligence and deep learning to transform bad portrait shots into good ones – all on a smartphone? By combining perspective effect editing, automatic, software-only photo masking, and photo style transfer technology, we’re able to transform a typical selfie into a flattering portrait with a pleasing depth-of-field effect that can also replicate the style of another portrait photo.
Sidenote: transferring the style of one photo onto another was recently described, and demo’d, in a paper entitled “Deep Photo Style Transfer”. It has some stunning results:
On the left you see the source photo, in the middle the style to apply, and on the right the result.
Being a diver myself (certified PADI Master Scuba Diver), this looks awesome:
The latest TomTom Bandit camera update now automatically provides colour correction while filming in the water up to a depth of 15 m without the need for any additional accessories. View the difference!
By the looks of it, it’s mainly the white balance that gets adjusted.
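For reference, the simplest automatic white-balance trick is gray-world scaling: assume the average color of the scene should be neutral gray and scale each channel accordingly. This is just my own sketch of that classic heuristic; I have no idea what the camera actually does internally:

```python
import numpy as np

def gray_world(image):
    """Gray-world white balance.

    Scale each channel so its mean matches the overall mean, pushing the
    average color of the image toward neutral gray.
    image: HxWx3 float array in [0, 1].
    """
    channel_means = image.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / np.clip(channel_means, 1e-6, None)
    return np.clip(image * gain, 0.0, 1.0)

# A flat blue-green cast (typical underwater) gets pulled back toward gray.
cast = np.full((2, 2, 3), [0.2, 0.5, 0.6])
balanced = gray_world(cast)
```

It works well when the scene really does average out to gray; underwater footage, with its strong uniform cast, is a reasonable fit for exactly that assumption.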