Nice research by Simon Niklaus, Long Mai, Jimei Yang and Feng Liu:
> In this paper, we introduce a framework that synthesizes the 3D Ken Burns effect from a single image, supporting both a fully automatic mode and an interactive mode with the user controlling the camera. Our framework first leverages a depth prediction pipeline, which estimates scene depth that is suitable for view synthesis tasks.
You can see the effect in action in the accompanying video, starting at 0:30.
3D Ken Burns Effect from a Single Image →
🤔 I’m wondering how this will stack up against Apple’s “Portrait Mode” photos, as those photos already contain depth information.
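To make the paper’s “depth prediction pipeline” step concrete: a regular photo carries no depth channel, so the first stage is monocular depth estimation, i.e. predicting a depth map from a single RGB image. Here’s a minimal sketch of that stage using the off-the-shelf MiDaS model from torch.hub; this is a stand-in for illustration, not the authors’ own network, and `photo.jpg` is a placeholder path.

```python
import cv2
import torch

# Load an off-the-shelf monocular depth model from torch.hub.
# "MiDaS_small" is the lightweight variant; "DPT_Large" is more accurate.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
midas.to(device).eval()

# Matching preprocessing (resize + normalization) published alongside the model.
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform  # use .dpt_transform for DPT models

# "photo.jpg" is a placeholder; OpenCV loads BGR, the model expects RGB.
img = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
batch = transform(img).to(device)

with torch.no_grad():
    prediction = midas(batch)  # relative inverse depth, shape (1, H', W')
    # Resize the prediction back to the original image resolution.
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()

depth = prediction.cpu().numpy()  # per-pixel relative depth map
```

Note that models like this predict *relative* depth up to an unknown scale, which is still enough to drive the parallax of a virtual camera move, whereas Portrait Mode photos embed a measured depth map directly.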