Mars 2020 Entry Descent Landing

Relive the whole Mars 2020 Entry Descent Landing in your browser, in 3D, powered by Three.js/WebGL 🤯

Here are a few pointers:

  • Use the ⏪ ⏩ at the bottom to speed up / slow down the animation
  • Scroll over the pane on the left to jump between phases of the landing
  • You can click and drag to look around; zooming is also possible

Mars 2020 Entry Descent Landing →

GitHub Skyline — Your GitHub Story in 3D

Nice in-browser 3D render of your GitHub history. You can download the result as an .stl file to run it through your 3D printer.

Here’s my 2020 timeline for example:

I take pride in the fact that my Saturdays (front row) and Sundays (back row) remain as good as empty, and that there’s an occasional gap in between the blocks where I took some time off.

GitHub Skyline →

Building Future UIs

The folks over at Formidable have been experimenting with Houdini and WebGL/Three.js to create futuristic UIs:

Futuristic sci-fi UIs in movies often support a story where humans, computers, and interfaces are far more advanced than today, often mixed with things like super powers, warp drives, and holograms. What is it about these UIs that feel so futuristic and appealing? Can we build some of these with the web technologies we have today?

Building Future UIs →

How we built the GitHub globe

The GitHub homepage features a very nice rotating 3D globe, augmented with realtime data shooting around. Here’s how they built it:

At the most fundamental level, the globe runs in a WebGL context powered by three.js. We feed it data of recent pull requests that have been created and merged around the world through a JSON file. The scene is made up of five layers: a halo, a globe, the Earth’s regions, blue spikes for open pull requests, and pink arcs for merged pull requests. We don’t use any textures: we point four lights at a sphere, use about 12,000 five-sided circles to render the Earth’s regions, and draw a halo with a simple custom shader on the backside of a sphere.
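Not their actual code, but a minimal three.js sketch of those last two ingredients (an untextured sphere lit by four lights, plus a halo shader on the back side of a slightly larger sphere) could look something like this; the colors, light positions, and glow falloff are made up:

import * as THREE from "three";

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, innerWidth / innerHeight, 0.1, 100);
camera.position.z = 4;

const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// "We point four lights at a sphere": the positions here are arbitrary.
for (const [x, y, z] of [[2, 1, 3], [-2, 1, 3], [0, -2, 2], [0, 2, -3]]) {
  const light = new THREE.DirectionalLight(0xffffff, 0.6);
  light.position.set(x, y, z);
  scene.add(light);
}

// The globe itself: an untextured, lit sphere.
const globe = new THREE.Mesh(
  new THREE.SphereGeometry(1, 64, 64),
  new THREE.MeshStandardMaterial({ color: 0x1c2a4d })
);
scene.add(globe);

// The halo: a simple custom shader on the back side of a slightly
// larger sphere, so only a rim of glow shows around the silhouette.
const halo = new THREE.Mesh(
  new THREE.SphereGeometry(1.2, 64, 64),
  new THREE.ShaderMaterial({
    side: THREE.BackSide,
    transparent: true,
    vertexShader: `
      varying vec3 vNormal;
      void main() {
        vNormal = normalize(normalMatrix * normal);
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
      }`,
    fragmentShader: `
      varying vec3 vNormal;
      void main() {
        float rim = max(0.0, 0.7 - dot(vNormal, vec3(0.0, 0.0, -1.0)));
        gl_FragColor = vec4(0.3, 0.6, 1.0, 1.0) * pow(rim, 3.0);
      }`,
  })
);
scene.add(halo);

renderer.setAnimationLoop(() => {
  globe.rotation.y += 0.002;
  renderer.render(scene, camera);
});

Drawing the halo on the back faces keeps the glow tucked behind the globe, with no post-processing pass needed.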

How we built the GitHub globe →

How Facebook 3D Photos work, and how to create one yourself

💡 Sparked by the 3D Ken Burns Effect from a Single Image post, I was reminded of a few other 3D photo things …

About a year ago, Facebook announced a feature named “3D Photos”: a way to show photos taken in Apple’s “Portrait Mode” (or with any other device that does the same) interactively:

Whether it’s a shot of your pet, your friends, or a beautiful spot from your latest vacation, you just take a photo in Portrait mode using your compatible dual-lens smartphone, then share as a 3D photo on Facebook where you can scroll, pan and tilt to see the photo in realistic 3D—like you’re looking through a window.

As unearthed by this research, Facebook builds a 3D model out of the image + depth data, and then renders the generated .glb file on screen using Three.js.

For example, here’s the wireframe of the kangaroo pictured at the top of this post:


3D wireframe of the kangaroo (Yuri akella Artiukh)
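Rendering such a generated .glb in the browser takes little more than three.js’ GLTFLoader. A minimal sketch, assuming an existing scene (the filename is a placeholder for whatever Facebook generates):

import * as THREE from "three";
import { GLTFLoader } from "three/addons/loaders/GLTFLoader.js";

const scene = new THREE.Scene();

// "photo.glb" stands in for the generated file.
new GLTFLoader().load("photo.glb", (gltf) => {
  scene.add(gltf.scene);
});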

~

A photo taken in Apple’s Portrait Mode is in essence no more than the flat photo combined with a depth map. A depth map is a grayscale photo where white defines points close by and pure black defines points farthest away. Using the depth map, you can then blur the content that is furthest away.


Photo + Depth Map = Portrait Mode (Marc Keegan)
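To make that concrete, here’s a quick, hypothetical sketch that reads the per-pixel depth from such a depth map and maps it to a blur radius; the filename and the maximum blur are made up:

// Load the depth map into a canvas so we can read its pixels.
// (The image must be same-origin for getImageData to work.)
const depthImg = new Image();
depthImg.src = "photo_depth.jpg";
depthImg.onload = () => {
  const canvas = document.createElement("canvas");
  canvas.width = depthImg.width;
  canvas.height = depthImg.height;
  const ctx = canvas.getContext("2d");
  ctx.drawImage(depthImg, 0, 0);
  const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);

  // The map is grayscale, so reading the red channel suffices.
  const depthAt = (x, y) => data[(y * canvas.width + x) * 4] / 255; // 1 = close, 0 = far

  // Farther away = blurrier; 8px is an arbitrary maximum.
  const blurAt = (x, y) => (1 - depthAt(x, y)) * 8;

  console.log(blurAt(0, 0)); // blur radius for the top-left pixel
};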

~

Winging back to Facebook: if you upload a file named photo.jpg along with a file named photo_depth.jpg, Facebook will treat the latter as the depth map for photo.jpg, and create a post with a 3D photo from them.


Uploading a photo and its depth map to become one 3D photo

~

If you don’t have a depth map of a photo, you can always create one yourself manually using Photoshop or any other image editing tool.

Certain advertisers have used this technique a few times by now, as illustrated on Omnivirt:

Tools like the online 3D Photo Creator have a depth prediction algorithm built in. The result is most likely not as good as your own DIY depth map, yet it gives you a head start.

🤖 Psst, as a bonus you can check the console to see the link to the resulting .glb float by in said tool 😉

~

To go the other way around – from 3D photo to photo and depth map – you can use a tool such as the Facebook 3D Photo Depth Analyzer to extract both the photo and the depth map from a 3D photo post.

Just enter the Facebook post ID and hit analyze 🙂

~

Another approach to show a 3D photo is to use WebGL. With this technique you don’t need to generate a .glb, but can directly use a photo and its accompanying depth map:

(Forked from this instructional video by k3dev)
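The core trick, shown here as a minimal sketch rather than k3dev’s exact code, is a fragment shader that offsets the photo’s texture lookup by the sampled depth, driven by the mouse; the filenames and the parallax strength are assumptions:

import * as THREE from "three";

const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);

const loader = new THREE.TextureLoader();
const uniforms = {
  photo: { value: loader.load("photo.jpg") },
  depth: { value: loader.load("photo_depth.jpg") },
  mouse: { value: new THREE.Vector2(0, 0) },
};

// A full-screen quad; the vertex shader ignores the camera on purpose.
const quad = new THREE.Mesh(
  new THREE.PlaneGeometry(2, 2),
  new THREE.ShaderMaterial({
    uniforms,
    vertexShader: `
      varying vec2 vUv;
      void main() {
        vUv = uv;
        gl_Position = vec4(position, 1.0);
      }`,
    fragmentShader: `
      uniform sampler2D photo;
      uniform sampler2D depth;
      uniform vec2 mouse;
      varying vec2 vUv;
      void main() {
        float d = texture2D(depth, vUv).r;  // 1.0 = close, 0.0 = far
        vec2 offset = mouse * d * 0.03;     // 0.03 = arbitrary parallax strength
        gl_FragColor = texture2D(photo, vUv + offset);
      }`,
  })
);
quad.frustumCulled = false;
scene.add(quad);

// Nearby pixels (high depth values) shift more than far-away ones,
// which is what sells the 3D effect.
addEventListener("mousemove", (e) => {
  uniforms.mouse.value.set(
    e.clientX / innerWidth - 0.5,
    0.5 - e.clientY / innerHeight
  );
});

renderer.setAnimationLoop(() => renderer.render(scene, camera));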


Implementing Instagram Filters and Image Editing in React Native

William Candillon recreated some parts of Instagram’s editing process (namely the filter, rotate, and crop steps) in React Native.

The filters are implemented with GL-React:

Gl-react offers a really efficient paradigm for integrating React and OpenGL together. The library enables you to easily build scenes using React composability. For instance, consider an image with two filters: Brannan and Valencia. It can be expressed as below. The GLImage implementation is based on gl-react-image.

import * as React from "react";
import {Surface} from "gl-react-expo";

import {GLImage, Brannan, Valencia} from "./gl-filters";

export type FilterName = "brannan" | "valencia";
type FilterProps = {
    uri: string,
    aspectRatio: number,
    name: FilterName,
    onDraw?: () => mixed
};

export default class Filter extends React.PureComponent<FilterProps> {

    render(): React.Node {
        const {uri, aspectRatio, name, onDraw} = this.props;
        const source = { uri };
        return (
            <Surface style={{ width: 100, height: 100 }}>
                {/* Each filter only applies when its `on` prop is true */}
                <Brannan on={name === "brannan"}>
                    <Valencia on={name === "valencia"}>
                        <GLImage {...{source, aspectRatio, onDraw}} />
                    </Valencia>
                </Brannan>
            </Surface>
        );
    }
}

A more condensed write-up is also available: Implementing Instagram filters and picture editing with React Native →

AR.js: Efficient Augmented Reality for the Web

AR.js is Efficient Augmented Reality for the Web using ARToolKit. It is fast! It runs efficiently even on phones: 60 fps on my 2-year-old phone! It is a pure JavaScript solution, fully open source. It works on any phone with WebGL and WebRTC.

AR.js: Efficient Augmented Reality for the Web →

Uber Visualization Nights: deck.gl

During the first “Uber Visualization Night” on June 20, 2017, Nicolas Belmonte from Uber’s Data Visualization team gave a nice introduction to the aforementioned deck.gl: