The folks over at Formidable have been experimenting with Houdini and WebGL/Three.js to create futuristic UIs
Futuristic sci-fi UIs in movies often support a story where humans, computers, and interfaces are far more advanced than today, often mixed with things like superpowers, warp drives, and holograms. What is it about these UIs that makes them feel so futuristic and appealing? Can we build some of these with the web technologies we have today?
The GitHub homepage features a very nice rotating 3D globe, augmented with realtime data shooting around. Here’s how they built it:
At the most fundamental level, the globe runs in a WebGL context powered by three.js. We feed it data of recent pull requests that have been created and merged around the world through a JSON file. The scene is made up of five layers: a halo, a globe, the Earth’s regions, blue spikes for open pull requests, and pink arcs for merged pull requests. We don’t use any textures: we point four lights at a sphere, use about 12,000 five-sided circles to render the Earth’s regions, and draw a halo with a simple custom shader on the backside of a sphere.
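As a rough illustration of that setup, here is a minimal Three.js sketch of an untextured sphere lit by four lights, the core of the look described above. It is only an approximation for the sake of the example, not GitHub's actual code; the colors, light positions, and intensities are made up.

```js
// Minimal sketch of an untextured, light-driven globe in Three.js.
// An approximation of the setup described above, not GitHub's actual code.
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.set(0, 0, 4);

const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// The globe itself: a plain sphere, no textures — its look comes entirely from lighting.
const globe = new THREE.Mesh(
  new THREE.SphereGeometry(1, 64, 64),
  new THREE.MeshStandardMaterial({ color: 0x1a2b4c, roughness: 0.8 })
);
scene.add(globe);

// Four lights pointed at the sphere, as the GitHub post describes (positions are made up).
const lightPositions = [
  [-2, 2, 2],
  [2, 2, 2],
  [-2, -2, 2],
  [2, -2, 2],
];
for (const [x, y, z] of lightPositions) {
  const light = new THREE.DirectionalLight(0x88aaff, 0.8);
  light.position.set(x, y, z);
  scene.add(light);
}

// Slowly spin the globe.
function animate() {
  requestAnimationFrame(animate);
  globe.rotation.y += 0.002;
  renderer.render(scene, camera);
}
animate();
```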
About a year ago, Facebook announced a feature named “3D Photos”, a way to interactively show photos taken with Apple’s “Portrait Mode” (or with any other device that does the same):
Whether it’s a shot of your pet, your friends, or a beautiful spot from your latest vacation, you just take a photo in Portrait mode using your compatible dual-lens smartphone, then share as a 3D photo on Facebook where you can scroll, pan and tilt to see the photo in realistic 3D—like you’re looking through a window.
As unearthed by this research, Facebook builds a 3D model out of the image + depth data, and then renders the generated .glb file on screen using Three.js.
For example, here’s the wireframe of the kangaroo pictured at the top of this post:
3D wireframe of the kangaroo (Yuri akella Artiukh)
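Once you have such a .glb, displaying it with Three.js boils down to loading it with GLTFLoader and adding it to a scene. Below is a minimal sketch of that idea; the file name photo.glb and the mouse-driven tilt are placeholders for illustration, not how Facebook actually wires it up.

```js
// Minimal sketch: loading and displaying a generated .glb with Three.js.
// The file name "photo.glb" is a placeholder, not Facebook's actual asset.
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 2;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

scene.add(new THREE.AmbientLight(0xffffff, 1));

new GLTFLoader().load('photo.glb', (gltf) => {
  scene.add(gltf.scene);
});

// Tilt the scene slightly based on the mouse position, mimicking the pan/tilt effect.
let targetY = 0;
window.addEventListener('mousemove', (e) => {
  targetY = (e.clientX / window.innerWidth - 0.5) * 0.5;
});

function animate() {
  requestAnimationFrame(animate);
  scene.rotation.y += (targetY - scene.rotation.y) * 0.05;
  renderer.render(scene, camera);
}
animate();
```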
A photo taken in Apple’s Portrait Mode is in essence no more than the flat photo combined with a depth map. A depth map is a grayscale image where white marks the points that are close by and pure black marks the points that are farthest away. Using the depth map, you can then blur the content that is farthest away.
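To make that concrete, here is a small, self-contained sketch of such a depth-based blur on a canvas: it blends a sharp and a blurred copy of the photo per pixel, weighted by the depth map. The file names and the blur radius are placeholders, and same-origin images are assumed.

```js
// Minimal sketch of depth-based blur on a <canvas>, assuming two same-sized,
// same-origin images: "photo.jpg" and its grayscale depth map "photo_depth.jpg"
// (white = near, black = far). File names and blur radius are placeholders.
async function loadImage(src) {
  const img = new Image();
  img.src = src;
  await img.decode();
  return img;
}

function drawToImageData(img, filter = 'none') {
  const canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height;
  const ctx = canvas.getContext('2d');
  ctx.filter = filter;
  ctx.drawImage(img, 0, 0);
  return ctx.getImageData(0, 0, canvas.width, canvas.height);
}

async function depthBlur() {
  const photo = await loadImage('photo.jpg');
  const depth = await loadImage('photo_depth.jpg');

  const sharp = drawToImageData(photo);                // untouched pixels
  const blurred = drawToImageData(photo, 'blur(8px)'); // fully blurred copy
  const depthData = drawToImageData(depth);            // grayscale depth values

  // Per pixel: the darker (= farther away) the depth map, the more of the blurred copy we use.
  const out = new ImageData(sharp.width, sharp.height);
  for (let i = 0; i < out.data.length; i += 4) {
    const far = 1 - depthData.data[i] / 255; // 0 = near (white), 1 = far (black)
    for (let c = 0; c < 3; c++) {
      out.data[i + c] = sharp.data[i + c] * (1 - far) + blurred.data[i + c] * far;
    }
    out.data[i + 3] = 255;
  }

  const result = document.createElement('canvas');
  result.width = out.width;
  result.height = out.height;
  result.getContext('2d').putImageData(out, 0, 0);
  document.body.appendChild(result);
}

depthBlur();
```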
Coming back to Facebook: if you upload a file named photo.jpg along with a file named photo_depth.jpg, Facebook will treat the latter as the depth map for photo.jpg, and create a post with a 3D photo from them.
Uploading a photo and its depth map to become one 3D photo
If you don’t have a depth map for a photo, you can always create one manually using Photoshop or any other image editing tool.
Certain advertisers have already used this technique a few times, as illustrated on Omnivirt:
Tools like the online 3D Photo Creator have a depth prediction algorithm built in. The result is most likely not as good as your own DIY depth map, yet it gives you a head start.
🤖 Psst, as a bonus you can check the console in said tool to see the link to the resulting .glb file float by 😉
To go the other way around – from 3D photo to photo and depth map – you can use a tool such as the Facebook 3D Photo Depth Analyzer to extract both the photo and the depth map from a 3D photo post.
Just enter the Facebook post ID and hit analyze 🙂
Another approach to show a 3D photo is to use WebGL. With this technique you don’t need to generate a .glb, but can directly use a photo and its accompanying depth map:
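A minimal sketch of that idea in Three.js: take a densely subdivided plane, texture it with the photo, and use the depth map as a displacement map so nearby pixels get pushed towards the camera. The file names, plane dimensions, and displacement scale below are placeholder values, not taken from any particular demo.

```js
// Minimal sketch of the photo + depth map approach in Three.js: a subdivided plane
// textured with the photo, with the depth map used as a displacement map.
// File names, plane size, and displacement scale are placeholders.
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 10);
camera.position.z = 2;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Flat ambient light so the photo keeps its original colors.
scene.add(new THREE.AmbientLight(0xffffff, 1));

const loader = new THREE.TextureLoader();
const plane = new THREE.Mesh(
  // Lots of segments so the displacement has vertices to move.
  new THREE.PlaneGeometry(1.6, 1, 256, 256),
  new THREE.MeshStandardMaterial({
    map: loader.load('photo.jpg'),
    displacementMap: loader.load('photo_depth.jpg'), // white = near, pushed towards the camera
    displacementScale: 0.2,
  })
);
scene.add(plane);

// Tilt the plane with the mouse to reveal the parallax.
window.addEventListener('mousemove', (e) => {
  plane.rotation.y = (e.clientX / window.innerWidth - 0.5) * 0.4;
  plane.rotation.x = (e.clientY / window.innerHeight - 0.5) * 0.4;
});

function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
}
animate();
```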
William Candillon recreated some parts of Instagram’s editing process (namely the filter, rotate, and crop steps) in React Native.
The filters are implemented with GL-React:
Gl-react offers a really efficient paradigm for integrating React and OpenGL together. The library enables you to easily build scenes using React composability. For instance, consider an image with two filters: Brannan and Valencia. It can be expressed as below. The GLImage implementation is based on gl-react-image.
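The snippet below is a simplified sketch of that composition idea with gl-react. It uses two stand-in shaders (a sepia tint and a saturation tweak) instead of the actual Brannan/Valencia implementations and the GLImage component from the original post; the sizes and uniform values are made up.

```jsx
// Simplified sketch of filter composition with gl-react.
// The shaders are stand-ins, not the real Brannan/Valencia implementations.
import React from "react";
import { Shaders, Node, GLSL } from "gl-react";
import { Surface } from "gl-react-dom"; // use gl-react-native / gl-react-expo in React Native

const shaders = Shaders.create({
  // Placeholder "filter" 1: a warm sepia tint.
  sepia: {
    frag: GLSL`
      precision highp float;
      varying vec2 uv;
      uniform sampler2D t;
      void main() {
        vec4 c = texture2D(t, uv);
        float l = dot(c.rgb, vec3(0.299, 0.587, 0.114));
        gl_FragColor = vec4(mix(c.rgb, vec3(l * 1.2, l, l * 0.8), 0.5), c.a);
      }`,
  },
  // Placeholder "filter" 2: a saturation tweak.
  saturate: {
    frag: GLSL`
      precision highp float;
      varying vec2 uv;
      uniform sampler2D t;
      uniform float amount;
      void main() {
        vec4 c = texture2D(t, uv);
        float l = dot(c.rgb, vec3(0.299, 0.587, 0.114));
        gl_FragColor = vec4(mix(vec3(l), c.rgb, amount), c.a);
      }`,
  },
});

// Filters compose by plugging one Node into the sampler uniform of the next:
// the sepia Node renders first, and its output feeds the saturate Node's `t` texture.
const Filtered = ({ source }) => (
  <Surface width={300} height={300}>
    <Node
      shader={shaders.saturate}
      uniforms={{
        amount: 1.4,
        t: <Node shader={shaders.sepia} uniforms={{ t: source }} />,
      }}
    />
  </Surface>
);

// `source` can be any texture-like value gl-react accepts (e.g. an image URL or another Node).
export default () => <Filtered source="photo.jpg" />;
```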