Easily include Blurhash placeholders in your React projects with react-blurhash

react-blurhash allows you to easily integrate BlurHash placeholder images in your React projects:

Blurhash component is the recommended way to render blurhashes in your React projects. It uses BlurhashCanvas and a wrapping div to scale the decoded image to your desired size. You may control the quality of the decoded image with resolutionX and resolutionY props.

Installation via npm

npm install --save blurhash react-blurhash

Example Usage:

import './App.css';
import { Blurhash } from "react-blurhash";


function App() {
  return (
    <div className="App">
      <Blurhash
        hash="eCF6B#-:0JInxr?@s;nmIoWUIko1%NocRk.8xbIUaxR*^+s;RiWAWU"
        width={600}
        height={400}
      />

      <img
        src="https://example.org/original.jpg"
        alt="Description of the original image"
        width={600}
        height={400}
      />
    </div>
  );
}

export default App;
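
Should the default resolution not give you a placeholder that’s detailed enough, you can bump the resolutionX and resolutionY props mentioned above. A quick sketch (the values here are arbitrary; higher means more detail but a slower decode):

<Blurhash
  hash="eCF6B#-:0JInxr?@s;nmIoWUIko1%NocRk.8xbIUaxR*^+s;RiWAWU"
  width={600}
  height={400}
  resolutionX={64}
  resolutionY={64}
/>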

react-blurhash (GitHub) →

Alt vs Figcaption

This message by Elaina Natario, writing over at Thoughtbot, cannot be repeated enough:

While both the alt attribute and the figcaption element provide a way to describe images, the way we write for them is different. alt descriptions should be functional; figcaption descriptions should be editorial or illustrative.

Examples of both functional and editorial descriptions in the full post!
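
To make the distinction concrete, a minimal example of my own (not taken from the post) could look like this, with a functional alt and an editorial figcaption:

<figure>
  <img
    src="/images/brooklyn-bridge.jpg"
    alt="Pedestrians crossing the Brooklyn Bridge at sunset"
  />
  <figcaption>
    The bridge’s promenade has carried foot traffic since it opened in 1883.
  </figcaption>
</figure>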

Alt vs Figcaption →

Monochrome Image Dithering Explained

Surma digging into the oldskool dithering technique:

I always loved the visual aesthetic of dithering but never knew how it’s done. So I did some research. This article may contain traces of nostalgia and none of Lena.

Turns out there’s quite a lot to it 😅

Ditherpunk — The article I wish I had about monochrome image dithering →
Ditherpunk Demo Page →

Compress and Convert AVIF/WebP/PNG/etc images on the CLI with squoosh-cli

To compress and compare images with different codecs right in your browser, there’s squoosh.app.

Announced at the still-ongoing Chrome Dev Summit 2020 is Squoosh v2, with support for new codecs (AVIF!), an updated design, and the release of a CLI version!

Squoosh CLI is an experimental way to run all the codecs you know from the Squoosh web app on your command line using WebAssembly. The Squoosh CLI uses a worker pool to parallelize processing images. This way you can apply the same codec to many images at once.

Squoosh CLI is currently not the fastest image compression tool in town and doesn’t aim to be. It is, however, fast enough to compress many images sufficiently quick at once.

Run it using npx, or install it globally:

npx @squoosh/cli <options...>
npm i -g @squoosh/cli
squoosh-cli <options...>
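
For instance, compressing a whole folder of PNGs to WebP in one go could look something like this (flag names as documented in the CLI’s README at the time of writing, so double-check against the current docs):

# Let Squoosh pick the WebP settings automatically and write the results to ./compressed
npx @squoosh/cli --webp auto -d compressed *.png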

Announcing Squoosh v2 →
Squoosh CLI (Repo) →

BlurHash — Low-res Blurred Placeholder Images Represented as Text

If you’re dealing with images, it’s quite common to show a small placeholder while the image is loading. You could go with grey placeholders, but a low-res blurred version of the original is preferred. That way you can, in the example use case of a website, use the blur-up technique once the image is loaded. BlurHash is something that can help you with exactly that:

In short, BlurHash takes an image, and gives you a short string (only 20-30 characters!) that represents the placeholder for this image. The string is short enough that it comfortably fits into whatever data format you use. For instance, it can easily be added as a field in a JSON object.

An example of a BlurHash would be LEHV6nWB2yk8pyo0adR*.7kCMdnj

Implementations that can encode and decode exist for TypeScript, PHP, Python, Swift, Kotlin, etc.

To use BlurHashes in the context of a web browser without needing to rely on JavaScript on the client side, I’d use this with a Cloud Function (or the like) that converts the encoded version to the actual image. Your markup could then look something like this:

<span style="display: inline-block; background: transparent url('https://blurhash-decoder.function.cloud/?blurhash=LEHV6nWB2yk8pyo0adR%2A.7kCMdnj') 0 0 / 100% 100%;">
	<img src="https://example.com/assets/original.jpg" width="538" height="346" alt="" />
</span>

To cut down on the number of network requests, you could of course pre-decode those BlurHashes on the server and inject the background images as Data URIs from within your template engine.
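
As a rough sketch of that server-side approach, here’s how a small helper (the function name is mine) could decode a BlurHash into a tiny PNG Data URI, using the official blurhash npm package together with sharp:

const { decode } = require('blurhash');
const sharp = require('sharp');

// Decode a BlurHash string into a small PNG Data URI (32×32 by default),
// ready to be injected as a CSS background-image by your template engine.
async function blurhashToDataURI(hash, width = 32, height = 32) {
  const pixels = decode(hash, width, height); // Uint8ClampedArray of RGBA values

  const png = await sharp(Buffer.from(pixels), {
    raw: { width, height, channels: 4 },
  })
    .png()
    .toBuffer();

  return `data:image/png;base64,${png.toString('base64')}`;
}

blurhashToDataURI('LEHV6nWB2yk8pyo0adR*.7kCMdnj').then(console.log);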

BlurHash →

How to embed AV1 Image File Format (AVIF) images

New in Chromium 85 is support for the AV1 Image File Format (AVIF), which is pretty impressive:

AVIF offers significant file size reduction for images compared with JPEG or WebP; ~50% savings compared to JPEG, and ~20% savings compared to WebP.

🦊 Using Firefox and can’t wait to use AVIF images? Set the image.avif.enabled flag to true to enable experimental support for it.

Time to tweak the modern way of embedding images a bit, and add AVIF in there:

<picture>
  <source srcset="/images/cereal-box.avif" type="image/avif" />
  <source srcset="/images/cereal-box.webp" type="image/webp" />
  <img src="/images/cereal-box.jpg" alt="Description of Photo" />
</picture>

The browser will load the first source it can interpret, eventually falling back to the JPG if none are supported.

☝️ Now that Safari is about to support WebP in version 14, the image/jp2 source that was in the original snippet has also been dropped.

How to Use AVIF: The New Next-Gen Image Compression Format →

UPDATE 2020.09.08: Jake Archibald just released an extensive post on AVIF packed with examples and comparisons, worth checking out.

Native Image Lazy-Loading: loading-attribute-eagle-polyfill

Today, Rick Viscomi noted that some sites have set eagle – instead of eager – as the value for native image lazy-loading.

While this is most likely a classic case of #damnyouautocorrect (instead of jokingly being a LOTR/Scrubs reference), that didn’t keep Jay Phelps from creating loading-attribute-eagle-polyfill to cater for those small mishaps:

A polyfill for <img loading="eagle" />. Displays an American Eagle as the placeholder of the image while your real images are still loading.

LOL 😁 — I love the internet.

Here’s a code example on how to use it, if you were ever to use it in the first place:

<head>
  <script src="https://unpkg.com/loading-attribute-eagle-polyfill/loading-attribute-eagle-polyfill.js"></script>
</head>
<body>
  <!-- Here's an example URL that artificially delays the src so you can see the proud Eagle -->
  <img
    loading="eagle"
    src="https://deelay.me/2000/https://img.webmd.com/dtmcms/live/webmd/consumer_assets/site_images/article_thumbnails/other/cat_relaxing_on_patio_other/1800x1200_cat_relaxing_on_patio_other.jpg"
    width="300"
    height="200"
  />
</body>

loading-attribute-eagle-polyfill →

ℹ️ Remember Native Image Lazy Loading being way too eager? Chrome recently updated the thresholds and is backporting the changes to Chrome version 79.

Cleverly Cropping Images on Twitter using AI

To crop uploaded images, Twitter doesn’t simply cut them off starting from the center. After first relying on face detection, they switched to AI as far back as 2018 to cleverly crop uploaded images.

Previously, we used face detection to focus the view on the most prominent face we could find. While this is not an unreasonable heuristic, the approach has obvious limitations since not all images contain faces.

A better way to crop is to focus on “salient” image regions. Academics have studied and measured saliency by using eye trackers, which record the pixels people fixated with their eyes. In general, people tend to pay more attention to faces, text, animals, but also other objects and regions of high contrast. This data can be used to train neural networks and other algorithms to predict what people might want to look at. The basic idea is to use these predictions to center a crop around the most interesting region.
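
As a back-of-the-napkin illustration of that last idea (this is not Twitter’s actual implementation), centering a fixed-size crop on the highest-scoring point of a predicted saliency map could look roughly like this:

// saliency: 2D array of predicted scores, one entry per pixel of the original image.
// Returns the top-left corner of a cropWidth × cropHeight window centered on the
// most salient pixel, clamped so the crop stays inside the image bounds.
function cropAroundSaliency(saliency, imgWidth, imgHeight, cropWidth, cropHeight) {
  let best = { x: 0, y: 0, score: -Infinity };
  for (let y = 0; y < imgHeight; y++) {
    for (let x = 0; x < imgWidth; x++) {
      if (saliency[y][x] > best.score) best = { x, y, score: saliency[y][x] };
    }
  }
  const clamp = (value, min, max) => Math.min(Math.max(value, min), max);
  return {
    left: Math.round(clamp(best.x - cropWidth / 2, 0, imgWidth - cropWidth)),
    top: Math.round(clamp(best.y - cropHeight / 2, 0, imgHeight - cropHeight)),
  };
}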

💡 Note that depending on how many images you upload, Twitter will use a different aspect ratio.

What I find weird is that this clever cropping only works on their website, not in embeds or other clients. Take this tweet for example, embedded below:

When viewed on the Twitter website, it does use the clever cropping.

Now, it wouldn’t surprise me if Twitter hides this extra information from 3rd-party clients, given that they basically imposed a no-fly zone on those back in the day.

Speedy Neural Networks for Smart Auto-Cropping of Images →

Embracing Modern Image Formats

Josh W. Comeau, on embracing modern image formats to ship fewer bytes to browsers. As not all browsers understand all image formats (Apple/Safari, for example, doesn’t support .webp, an image format developed by Google), he resorts to the picture element with various sources set.

<picture>
  <source srcset="/images/cereal-box.webp" type="image/webp" />
  <source srcset="/images/cereal-box.jp2" type="image/jp2" />
  <img src="/images/cereal-box.jxr" type="image/vnd.ms-photo" />
</picture>

On his own blog, he chose to use only .webp (for Chrome & Firefox) with a fallback to a .jpg (for Safari, IE, and other browsers that don’t speak WebP):

<picture>
  <source srcset="/images/cereal-box.webp" />
  <img src="/images/cereal-box.jpg" />
</picture>

Using a (premium) service like imgix, you can easily automate this through its fm parameter. Alternatively, you could roll your own image transform service on AWS/GCP/Azure.
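
For instance (with a made-up imgix source domain), the fm parameter lets each source request the same master image in a different format:

<picture>
  <source srcset="https://your-source.imgix.net/cereal-box.jpg?fm=webp" type="image/webp" />
  <source srcset="https://your-source.imgix.net/cereal-box.jpg?fm=jp2" type="image/jp2" />
  <img src="https://your-source.imgix.net/cereal-box.jpg" alt="A cereal box" />
</picture>

If you’d rather not maintain the sources by hand, imgix can also negotiate the format for you through its auto=format parameter.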

Embracing modern image formats →

Using AWS’ “Serverless Image Handler” to roll your own Image Transform Service

Amazon AWS has offered a Serverless Image Handler for a while that allows you to spin up an AWS Lambda function to create your own private little image transform service that is inexpensive, fast, and is fronted by the CloudFront content delivery network (CDN).

Whenever an image is uploaded to the bucket, a Lambda function processes it and creates all other required versions.

Under the hood it uses the sharp image library, so you could always use their code to make it run on other cloud providers 😉
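
If you were to go that route, the sharp part is pleasantly small. A minimal sketch (function name and parameters are mine) that turns an uploaded image buffer into a resized WebP rendition:

const sharp = require('sharp');

// Create one resized WebP rendition of an uploaded image buffer.
// In the AWS setup this runs inside the Lambda function; the same code
// works unchanged on any other provider that can execute Node.js.
async function createRendition(inputBuffer, width) {
  return sharp(inputBuffer)
    .resize({ width, withoutEnlargement: true })
    .webp({ quality: 80 })
    .toBuffer();
}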

serverless-image-handler Source Code (GitHub) →
Setting Up Your Own Image Transform Service →