Townscaper – Town Generator Game

This summer Oskar Stålberg released his game Townscaper (see video above).

Build quaint island towns with curvy streets, small hamlets, soaring cathedrals, canal networks, or sky cities on stilts. Build the town of your dreams, block by block.

No goal. No real gameplay. Just plenty of building and plenty of beauty. That’s it.

Yesterday he surprised us all by releasing a web version.

Looking at it via the DevTools, I see that it’s powered by WebGL and WebAssembly.

Townscaper on Steam →
Townscaper on the web →


Checking Oskar’s blog, you can see that he’s been into city builders for quite some time. Take this demo from 6 years ago:

How To create the Stripe Website Gradient Effect

Kevin Hufnagl reverse engineered Stripe’s Website Gradient Effect. Here’s what he found on the JavaScript side of things:

Essentially they are using a minimalistic implementation of WebGL which they called minigl and a Gradient Class which is used to store all of our animation properties and control the animation.

See the Pen Stripe Website Gradient Animation by Kevin Hufnagl (@kevinhufnagl) on CodePen.

Update 2021.10.17: The original Pen above seems to have been removed, so here’s a restored version for you to check out:

See the Pen MiniGL Stripe Gradient by Bramus (@bramus) on CodePen.

Update 2021.10.25: On Whatamesh you can easily generate your own gradients. Exporting the code is also possible. It uses the same code as the pen in the update above. The only difference is that it loads the Gradient code using ES Modules.

See the Pen Whatamesh — Stripe WebGL Gradient by Bramus (@bramus) on CodePen.

☝️ There’s no need to download your own gradient.js, as you can serve it directly from the GitHub Gist via GitHack.

How To create the Stripe Website Gradient Effect →

Google Maps JavaScript API WebGL

I’m a huge mapping nerd, so I’m a sucker for the Maps JavaScript API WebGL beta which was announced at Google I/O back in May:

Introduced are three new features:

  • WebGL Overlay View lets you add custom 2D and 3D graphics and animated content to your maps.
  • Tilt and heading can now be adjusted programmatically, and by using mouse and keyboard gestures.
  • map.moveCamera() lets you simultaneously change multiple camera properties.
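As a sketch of that last point: since `map.moveCamera()` accepts a full camera state (center, zoom, tilt, heading) at once, a fly-to animation boils down to interpolating between two such states on every frame. The helper below is hypothetical (it is not part of the Maps API); in a real page you would feed each interpolated state to `map.moveCamera()` from a `requestAnimationFrame` loop.

```javascript
// Hypothetical helper: linearly interpolate between two camera states.
// In a real page you'd call map.moveCamera(lerpCamera(from, to, t))
// on every requestAnimationFrame tick, with t going from 0 to 1.
function lerpCamera(from, to, t) {
  const lerp = (a, b) => a + (b - a) * t;
  return {
    center: {
      lat: lerp(from.center.lat, to.center.lat),
      lng: lerp(from.center.lng, to.center.lng),
    },
    zoom: lerp(from.zoom, to.zoom),
    tilt: lerp(from.tilt, to.tilt),
    heading: lerp(from.heading, to.heading),
  };
}

const from = { center: { lat: 51.05, lng: 3.72 }, zoom: 12, tilt: 0, heading: 0 };
const to   = { center: { lat: 51.05, lng: 3.72 }, zoom: 18, tilt: 65, heading: 90 };

console.log(lerpCamera(from, to, 0.5));
// halfway: zoom 15, tilt 32.5, heading 45
```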

To give you an idea of what you can do with this, the WebGL-powered Maps demo website has some nice examples, such as an airplane taking off, a car following directions on a map, a building being highlighted, etc.

There’s also a codelab you can follow to learn the details.

WebGL-powered Maps →
Maps JavaScript API » WebGL Features » Overview →
Codelab: Build 3D map experiences with WebGL Overlay View →

Mars 2020 Entry Descent Landing

Relive the whole Mars 2020 Entry Descent Landing in your browser, in 3D, powered by Three.js/WebGL 🤯

Here are a few pointers to use:

  • Use the ⏪ ⏩ at the bottom to speed up / slow down the animation
  • Scroll over the pane on the left to jump between phases of the landing
  • You can click and drag around as you see fit; zooming is also possible

Mars 2020 Entry Descent Landing →

GitHub Skyline — Your GitHub Story in 3D

Nice in-browser 3D render of your GitHub history. You can download the result as an .stl file to run it through your 3D printer.

Here’s my 2020 timeline for example:

I take pride in the fact that my Saturdays (front row) and Sundays (back row) remain as good as empty, and that there’s an occasional gap in between the blocks where I took some time off.

GitHub Skyline →

Building Future UIs

The folks over at Formidable have been experimenting with Houdini and WebGL/Three.js to create futuristic UIs.

Futuristic sci-fi UIs in movies often support a story where humans, computers, and interfaces are far more advanced than today, often mixed with things like super powers, warp drives, and holograms. What is it about these UIs that feel so futuristic and appealing? Can we build some of these with the web technologies we have today?

Building Future UIs →

How we built the GitHub globe

The GitHub homepage features a very nice rotating 3D globe, augmented with realtime data shooting around. Here’s how they built it:

At the most fundamental level, the globe runs in a WebGL context powered by three.js. We feed it data of recent pull requests that have been created and merged around the world through a JSON file. The scene is made up of five layers: a halo, a globe, the Earth’s regions, blue spikes for open pull requests, and pink arcs for merged pull requests. We don’t use any textures: we point four lights at a sphere, use about 12,000 five-sided circles to render the Earth’s regions, and draw a halo with a simple custom shader on the backside of a sphere.
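All of those spikes and arcs need a position on the sphere. The snippet below is not GitHub’s actual code, but it shows the standard latitude/longitude-to-3D conversion commonly used to place markers on a three.js globe like this one:

```javascript
// Not GitHub's actual code — the common math for positioning markers
// (like the pull-request spikes) on a sphere from latitude/longitude.
// Angles are in degrees; the sphere radius defaults to 1.
function latLngToVector3(lat, lng, radius = 1) {
  const phi = (90 - lat) * (Math.PI / 180);    // polar angle, measured from the north pole
  const theta = (lng + 180) * (Math.PI / 180); // azimuthal angle
  return {
    x: -radius * Math.sin(phi) * Math.cos(theta),
    y: radius * Math.cos(phi),
    z: radius * Math.sin(phi) * Math.sin(theta),
  };
}

// The north pole ends up at the top of the sphere:
console.log(latLngToVector3(90, 0)); // ≈ { x: 0, y: 1, z: 0 }
```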

How we built the GitHub globe →

How Facebook 3D Photos work, and how to create one yourself

💡 Sparked by the 3D Ken Burns Effect from a Single Image post, I was reminded of a few other 3D photo things …

About a year ago, Facebook announced a feature named “3D Photos”, a way to show photos taken with Apple’s “Portrait Mode” (or any other device that does the same) interactively:

Whether it’s a shot of your pet, your friends, or a beautiful spot from your latest vacation, you just take a photo in Portrait mode using your compatible dual-lens smartphone, then share as a 3D photo on Facebook where you can scroll, pan and tilt to see the photo in realistic 3D—like you’re looking through a window.

As unearthed by this research, Facebook builds a 3D model out of the image + depth data, and then renders the generated .glb file on screen using Three.js.

For example, here’s the wireframe of the kangaroo pictured at the top of this post:

3D wireframe of the kangaroo (Yuri akella Artiukh)


A photo taken in Apple’s Portrait Mode is in essence no more than a flat photo combined with a depth map. A depth map is a grayscale photo where white marks points that are close by and pure black marks points that are farthest away. Using the depth map, you can then blur the content that is furthest away.

Photo + Depth Map = Portrait Mode (Marc Keegan)
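To make that blur idea concrete, here is a hypothetical sketch of how a depth value could drive a blur radius, with white (nearby) pixels staying sharp and black (far) pixels getting the strongest blur:

```javascript
// Hypothetical sketch: derive a per-pixel blur radius from a grayscale
// depth map, where 255 (white) means nearby and 0 (black) means farthest
// away. Far pixels get the strongest blur, mimicking Portrait Mode.
function blurRadiusForDepth(depth, maxRadius = 8) {
  const farness = 1 - depth / 255; // 0 for nearby pixels, 1 for far ones
  return farness * maxRadius;
}

console.log(blurRadiusForDepth(255)); // 0 — nearby, stays sharp
console.log(blurRadiusForDepth(0));   // 8 — farthest, blurred the most
```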


Swinging back to Facebook: if you upload a file named photo.jpg along with a file named photo_depth.jpg, Facebook will treat the latter as the depth map for photo.jpg, and create a post with a 3D photo from them.

Uploading a photo and its depth map to become one 3D photo
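Based on that naming convention, deriving the depth map’s filename from the photo’s is straightforward. Note that `depthMapFilename` below is a hypothetical helper for illustration, not something Facebook exposes:

```javascript
// Hypothetical helper illustrating Facebook's naming convention:
// photo.jpg pairs with photo_depth.jpg. Inserts "_depth" right
// before the file extension.
function depthMapFilename(photoFilename) {
  return photoFilename.replace(/(\.[^.]+)$/, '_depth$1');
}

console.log(depthMapFilename('photo.jpg')); // "photo_depth.jpg"
```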


If you don’t have a depth map of a photo, you can always create one yourself manually using Photoshop or any other image editing tool.

Several advertisers have already used this technique, as illustrated on Omnivirt:

Tools like the online 3D Photo Creator have a depth prediction algorithm built in. The result is most likely not as good as your own DIY depth map, yet it gives you a head start.

🤖 Psst: as a bonus, you can check the console in said tool to see the link to the resulting .glb float by 😉


To go the other way around – from 3d photo to photo and depth map – you can use a tool such as the Facebook 3D Photo Depth Analyzer to extract both the photo and the depth map from a 3D photo post.

Just enter the Facebook post ID and hit analyze 🙂


Another approach to show a 3D photo is to use WebGL. With this technique you don’t need to generate a .glb, but can directly use a photo and its accompanying depth map:

(Forked from this instructional video by k3dev)
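The core of that WebGL approach is a parallax trick: each pixel’s texture lookup gets offset in proportion to its depth and the mouse position. In a real implementation this runs per fragment in a GLSL shader; the sketch below (with hypothetical names) shows the same math in plain JavaScript:

```javascript
// A common trick behind WebGL "fake 3D" photos (not necessarily the exact
// shader from the video): shift each pixel's texture lookup by an offset
// proportional to its depth and the mouse position.
// depth is 0..1 (0 = far, 1 = near); mouse coords are -1..1.
function parallaxUV(u, v, depth, mouseX, mouseY, strength = 0.05) {
  return {
    u: u + depth * mouseX * strength,
    v: v + depth * mouseY * strength,
  };
}

// A near pixel (depth 1) shifts the most as the mouse moves:
console.log(parallaxUV(0.5, 0.5, 1, 1, 0)); // ≈ { u: 0.55, v: 0.5 }
// A far pixel (depth 0) doesn't move at all:
console.log(parallaxUV(0.5, 0.5, 0, 1, 0)); // { u: 0.5, v: 0.5 }
```

Because far pixels move less than near ones as the mouse drags, the flat photo appears to have real depth.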
