Semantically Identify a Heading Subtitle with ARIA role="doc-subtitle"

Over at CSS-Tricks, Chris takes a look at how to mark up a “Double Heading”, a common pattern where a big heading is preceded or followed by a smaller one.

After going over a few options, the answer comes from a tweet by Steve Faulkner: use role="doc-subtitle" for the secondary heading. As per spec:

An explanatory or alternate title for the work, or a section or component within it.

<header>
   <h1>Chapter 2 The Battle</h1>
   <p role="doc-subtitle">Once more unto the breach</p>
</header>

Oh, that’s nice! Support in JAWS/NVDA seems OK, so it’s perfectly fine to use.

HTML for Subheadings and Headings →

Realtime Face and Hand Tracking in the browser with TensorFlow.js

The MediaPipe and TensorFlow.js teams have released facemesh and handpose:

The facemesh package infers approximate 3D facial surface geometry from an image or video stream, requiring only a single camera input without the need for a depth sensor. This geometry locates features such as the eyes, nose, and lips within the face, including details such as lip contours and the facial silhouette.

The handpose package detects hands in an input image or video stream, and returns twenty-one 3-dimensional landmarks locating features within each hand. Such landmarks include the locations of each finger joint and the palm.

Once you have one of the packages installed, it’s really easy to use. Here’s an example using facemesh:

import * as facemesh from '@tensorflow-models/facemesh';

// Load the MediaPipe facemesh model assets.
const model = await facemesh.load();
 
// Pass in a video stream to the model to obtain 
// an array of detected faces from the MediaPipe graph.
const video = document.querySelector("video");
const faces = await model.estimateFaces(video);
 
// Each face object contains a `scaledMesh` property,
// which is an array of 468 landmarks.
faces.forEach(face => console.log(face.scaledMesh));

The output will be a prediction object:

{
    faceInViewConfidence: 1,
    boundingBox: {
        topLeft: [232.28, 145.26], // [x, y]
        bottomRight: [449.75, 308.36],
    },
    mesh: [
        [92.07, 119.49, -17.54], // [x, y, z]
        [91.97, 102.52, -30.54],
        ...
    ],
    scaledMesh: [
        [322.32, 297.58, -17.54],
        [322.18, 263.95, -30.54]
    ],
    annotations: {
        silhouette: [
            [326.19, 124.72, -3.82],
            [351.06, 126.30, -3.00],
            ...
        ],
        ...
    }
}
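
handpose works much the same way. Here’s the equivalent snippet, a minimal sketch along the lines of the package’s README:

import * as handpose from '@tensorflow-models/handpose';

// Load the MediaPipe handpose model assets.
const model = await handpose.load();

// Pass in a video stream to the model to obtain
// an array of detected hands.
const video = document.querySelector("video");
const hands = await model.estimateHands(video);

// Each hand object contains a `landmarks` property,
// which is an array of 21 [x, y, z] keypoints.
hands.forEach(hand => console.log(hand.landmarks));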

Both packages run entirely within the browser so data never leaves the user’s device.

Be sure to check the demos as they’re quite nice. I did notice that the handpose demo only shows one hand, even though the library can detect more than one.

Face and hand tracking in the browser with MediaPipe and TensorFlow.js →
facemesh Demo →
handpose Demo →

Craft.js – A React framework for building drag-n-drop page editors.

Page editors are a great way to provide an excellent user experience. However, to build one is often a pretty dreadful task.

Craft.js solves this problem by modularising the building blocks of a page editor. It provides a drag-n-drop system and handles the way user components should be rendered, updated and moved – among other things. With this, you’ll be able to focus on building the page editor according to your own specifications and needs.
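
To give you an idea, here’s a rough sketch of what a minimal Craft.js setup looks like, going by its docs (treat the exact component and prop names as assumptions):

import React from 'react';
import { Editor, Frame, Element } from '@craftjs/core';

// A user component that can be dragged around and edited.
const Hero = () => <h2>Build your page</h2>;

// The Editor manages the state; the Frame holds the editable page itself.
const App = () => (
  <Editor resolver={{ Hero }}>
    <Frame>
      <Element is="div" canvas>
        <Hero />
      </Element>
    </Frame>
  </Editor>
);

export default App;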

Craft.js →

💡 If you’re looking for an editor that’s more focused on content (instead of the entire page layout), check out editor.js

Performance Budgets for those who don’t know where to start

Harry Roberts on how to set a Performance Budget if you really don’t have a clue where to start:

Time and again I hear clients discussing their performance budgets in terms of goals: “We’re aiming toward a budget of 250KB uncompressed JavaScript; we hope to be interactive in 2.75s”. While it’s absolutely vital that these goals exist and are actively worked toward, this is not part of your budgeting. Your budgeting is actually far, far simpler:

Our budget for [metric] is never-worse-than-it-is-right-now.

Harry suggests measuring in periods of two weeks (or whatever the length of your sprints is, I guess) and always comparing against the previous value. If performance is equal or better: great, you’ve got your next maximum to compare against next time. If performance is worse: you’ve got work (or some serious explaining) to do.
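
The logic boils down to something like this little sketch (my code, not Harry’s, with made-up numbers):

// Our budget for [metric] is never-worse-than-it-is-right-now.
function checkBudget(previousBest, current) {
  if (current <= previousBest) {
    // Equal or better: this becomes the new maximum to beat next sprint.
    return { ok: true, budget: current };
  }
  // Worse: work (or some serious explaining) to do.
  return { ok: false, budget: previousBest };
}

// E.g. uncompressed JavaScript shipped, in kilobytes (hypothetical numbers):
checkBudget(250, 243); // { ok: true, budget: 243 }
checkBudget(243, 267); // { ok: false, budget: 243 }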

By constantly revisiting and redefining budgets in two-weekly snapshots, we’re able to make slow, steady, and incremental improvements.

Performance Budgets, Pragmatically →

Making a Better Custom Select Element

24ways – the advent calendar for web geeks – is back! The first post is “Making a Better Custom Select Element”, in which Julie Grundy tries to create an accessible custom select element:

Sometimes, I can’t recommend the select input. We want a way for someone to choose an item from a list of options, but it’s more complicated than just that. We want autocomplete options. We want to put images in there, not just text. The optgroup element is ugly, hard to style, and not announced by screen readers. The focus styles are low contrast. I had high hopes for the datalist element, but although it works well with screen readers, it’s no good for people with low vision who zoom or use high contrast themes.

Making a Better Custom Select Element →

JSONbox – Free HTTP-based JSON Storage

jsonbox.io lets you store, read & modify JSON data over HTTP APIs for free. Copy the URL below and start sending HTTP requests to play around with your data store.

Oh, this will come in handy for workshops and quick proofs of concept:

curl -X POST 'https://jsonbox.io/demobox_6d9e326c183fde7b' \
    -H 'content-type: application/json' \
    -d '{"name": "Jon Snow", "age": 25}'

// {"_id":"5d776a25fd6d3d6cb1d45c51","name":"Jon Snow","age":25,"_createdOn":"2019-09-10T09:17:25.607Z"}

Don’t know about the retention policy though 😉

jsonbox.io →

Visually Search using your Phone’s Camera with The Web Perception Toolkit

The Web Perception Toolkit is an open-source library that provides the tools for you to add visual search to your website. The toolkit works by taking a stream from the device camera, and passing it through a set of detectors. Any markers or targets that are identified by the detectors are mapped to structured data on your site, and the user is provided with customizable UI that offers them extended information.

This mapping is defined using Structured Data (JSON-LD). Here’s a barcode for example:

[
  {
    "@context": "https://schema.googleapis.com/",
    "@type": "ARArtifact",
    "arTarget": {
      "@type": "Barcode",
      "text": "012345678912"
    },
    "arContent": {
      "@type": "WebPage",
      "url": "http://localhost:8080/demo/artifact-map/products/product1.html",
      "name": "Product 1",
      "description": "This is a product with a barcode",
      "image": "http://localhost:8080/demo/artifact-map/products/product1.png"
    }
  }
]

When the user then scans an object with that barcode (defined in arTarget), the page described in arContent is shown on screen.

Besides barcodes, other supported detectors include QR codes, geolocation, and 2D images. ML image classification is not supported yet, but planned.

The Web Perception Toolkit: Getting Started →
Visual searching with the Web Perception Toolkit →
Web Perception Toolkit Repo (GitHub) →

Implementing Dark Mode on adactio.com

Jeremy recently implemented “Dark Mode” on his site. Thanks to CSS Custom Properties the implementation is pretty straightforward (also see my writeup here).

But of course, Jeremy added some extra details that make the difference:

  1. In Dark Mode, images are toned down to make them blend in better, as detailed by Melanie Richards:

    @media (prefers-color-scheme: dark) {
      img {
        filter: brightness(.8) contrast(1.2);
      }
    }
  2. Using the picture element, with media queries on the source elements it contains, you can ship a dark version of an image (such as a map) to the visitor:

    <picture>
      <source media="(prefers-color-scheme: dark)" srcset="https://api.mapbox.com/styles/v1/mapbox/dark-v10/static...">
      <img src="https://api.mapbox.com/styles/v1/mapbox/outdoors-v10/static..." alt="map">
    </picture>
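
Not covered in Jeremy’s post, but handy to know: if you need to react to the same preference from JavaScript, say to swap an embedded map style at runtime, matchMedia exposes the identical media feature:

// Watch the user's colour scheme preference from JavaScript.
const darkScheme = window.matchMedia('(prefers-color-scheme: dark)');

const applyScheme = (isDark) => {
  // Hypothetical hook: re-render theme-dependent bits (maps, charts, …)
  document.documentElement.classList.toggle('dark', isDark);
};

applyScheme(darkScheme.matches);
darkScheme.addEventListener('change', (e) => applyScheme(e.matches));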

Implementing Dark Mode on adactio.com →

Simple Scroll Snapping Carousel (Flexbox Layout / Grid Layout)

Here are two small scroll-snapping carousels that I made. In the top one the items are laid out using CSS Flexbox, whereas the second one uses CSS Grid.

The code also works fine with arbitrarily sized .scroll-items elements; they don’t need to have the same width.
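
For reference, the core of the flexbox variant boils down to a few lines of CSS. A minimal sketch: the .scroll-items class name follows the post, the container class and exact values are my assumptions:

.scroll-container {
  display: flex;
  overflow-x: auto;
  scroll-snap-type: x mandatory;
}

.scroll-items {
  flex: 0 0 auto; /* items keep their own (arbitrary) width */
  scroll-snap-align: center;
}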

ℹ️ Want to know more about scroll snapping? Check out Well-Controlled Scrolling with CSS Scroll Snap by Google, and Practical CSS Scroll Snapping on CSS-Tricks.

😅 Great to see that CSS Scroll Snapping moved away from the snapList, as first proposed in 2013.
