Over at CSS-Tricks, Chris takes a look at how to mark up a “Double Heading”, a common pattern where you have a big heading with a little one preceding/succeeding it (as pictured above).
The MediaPipe and Tensorflow.js teams have released facemesh and handpose:
The facemesh package infers approximate 3D facial surface geometry from an image or video stream, requiring only a single camera input without the need for a depth sensor. This geometry locates features such as the eyes, nose, and lips within the face, including details such as lip contours and the facial silhouette.
The handpose package detects hands in an input image or video stream, and returns twenty-one 3-dimensional landmarks locating features within each hand. Such landmarks include the locations of each finger joint and the palm.
Once you have one of the packages installed, it’s really easy to use. Here’s an example using facemesh:
import * as facemesh from '@tensorflow-models/facemesh';
// Load the MediaPipe facemesh model assets.
const model = await facemesh.load();
// Pass in a video stream to the model to obtain
// an array of detected faces from the MediaPipe graph.
const video = document.querySelector("video");
const faces = await model.estimateFaces(video);
// Each face object contains a `scaledMesh` property,
// which is an array of 468 landmarks.
faces.forEach(face => console.log(face.scaledMesh));
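The handpose package follows the same pattern. Here's a similar sketch, based on the load()/estimateHands() calls from the package's documentation (this snippet isn't from the announcement itself):
import * as handpose from '@tensorflow-models/handpose';
// Load the MediaPipe handpose model assets.
const model = await handpose.load();
// Pass in a video stream to the model to obtain
// an array of detected hands.
const video = document.querySelector("video");
const hands = await model.estimateHands(video);
// Each hand object contains a `landmarks` property,
// which is an array of 21 3D keypoints.
hands.forEach(hand => console.log(hand.landmarks));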
Both packages run entirely within the browser so data never leaves the user’s device.
Be sure to check the demos as they’re quite nice. I did notice that the handpose demo only shows one hand, even though the library can detect more than one.
Page editors are a great way to provide an excellent user experience. However, building one is often a pretty dreadful task.
Craft.js solves this problem by modularising the building blocks of a page editor. It provides a drag-n-drop system and handles the way user components should be rendered, updated and moved – among other things. With this, you’ll be able to focus on building the page editor according to your own specifications and needs.
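To make that a bit more concrete, here's a minimal sketch of wiring up a draggable user component, using the Editor, Frame, Element, and useNode building blocks from @craftjs/core (the Text component is just an illustrative example, not something the library ships):
import React from "react";
import { Editor, Frame, Element, useNode } from "@craftjs/core";
// A "user component": the connectors from useNode make it draggable and selectable.
const Text = ({ text }) => {
  const { connectors: { connect, drag } } = useNode();
  return <p ref={ref => connect(drag(ref))}>{text}</p>;
};
const App = () => (
  <Editor resolver={{ Text }}>
    {/* The Frame holds the editable node tree */}
    <Frame>
      <Element is="div" canvas>
        <Text text="Drag me around" />
      </Element>
    </Frame>
  </Editor>
);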
Harry Roberts on how to set a Performance Budget if you really don’t have a clue where to start:
Time and again I hear clients discussing their performance budgets in terms of goals: “We’re aiming toward a budget of 250KB uncompressed JavaScript; we hope to be interactive in 2.75s”. While it’s absolutely vital that these goals exist and are actively worked toward, this is not part of your budgeting. Your budgeting is actually far, far simpler:
Our budget for [metric] is never-worse-than-it-is-right-now.
Harry suggests measuring in periods of two weeks (or whatever the length of your sprints is, I guess) and always comparing against the previous value. If performance is equal or better: great, you’ve got your new maximum to compare against next time. If performance is worse: you’ve got work (or some serious explaining) to do.
By constantly revisiting and redefining budgets in two-weekly snapshots, we’re able to make slow, steady, and incremental improvements.
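Put in code, the rule could look something like this (purely illustrative; the metric names and numbers are made up, not from Harry’s post):
// The budget is whatever we measured last time.
const previousBudget = { jsKilobytesUncompressed: 448, timeToInteractive: 3.2 };
// The numbers we just measured this sprint.
const thisSprint = { jsKilobytesUncompressed: 441, timeToInteractive: 3.4 };
for (const [metric, budget] of Object.entries(previousBudget)) {
  const current = thisSprint[metric];
  if (current > budget) {
    console.warn(`${metric} regressed: ${current} (budget was ${budget})`);
  } else {
    // Equal or better: this becomes the new maximum to compare against next time.
    previousBudget[metric] = current;
  }
}
console.log("Budgets for next sprint:", previousBudget);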
24ways – the advent calendar for web geeks – is back! The first post is “Making a Better Custom Select Element”, in which Julie Grundy tries to create an accessible Custom Select Element:
Sometimes, I can’t recommend the select input. We want a way for someone to choose an item from a list of options, but it’s more complicated than just that. We want autocomplete options. We want to put images in there, not just text. The optgroup element is ugly, hard to style, and not announced by screen readers. The focus styles are low contrast. I had high hopes for the datalist element, but although it works well with screen readers, it’s no good for people with low vision who zoom or use high contrast themes.
jsonbox.io lets you store, read & modify JSON data over HTTP APIs for free. Copy the URL below and start sending HTTP requests to play around with your data store.
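For example, creating and reading records boils down to plain fetch calls, based on the HTTP endpoints described in its README (the box ID below is a made-up placeholder; use the one jsonbox.io generates for you):
// Made-up box ID – replace it with your own.
const BOX_URL = "https://jsonbox.io/box_0123456789abcdef0123";
// Create a record by POSTing JSON to your box.
const created = await fetch(BOX_URL, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "Jon Snow", house: "Stark" })
}).then(res => res.json());
console.log(created);
// Read everything in the box back with a plain GET.
const records = await fetch(BOX_URL).then(res => res.json());
console.log(records);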
Oh, this will come in handy for workshops and quick Proofs of Concept:
The Web Perception Toolkit is an open-source library that provides the tools for you to add visual search to your website. The toolkit works by taking a stream from the device camera, and passing it through a set of detectors. Any markers or targets that are identified by the detectors are mapped to structured data on your site, and the user is provided with customizable UI that offers them extended information.
This mapping is defined using Structured Data (JSON-LD). Here’s an example for a barcode:
[
  {
    "@context": "https://schema.googleapis.com/",
    "@type": "ARArtifact",
    "arTarget": {
      "@type": "Barcode",
      "text": "012345678912"
    },
    "arContent": {
      "@type": "WebPage",
      "url": "http://localhost:8080/demo/artifact-map/products/product1.html",
      "name": "Product 1",
      "description": "This is a product with a barcode",
      "image": "http://localhost:8080/demo/artifact-map/products/product1.png"
    }
  }
]
When the user then scans an object with that barcode (as defined in arTarget), the page defined in arContent will be shown on screen.
Besides barcodes, other supported detectors include QR codes, geolocation, and 2D images. ML image classification is not supported yet, but is planned.
Jeremy recently implemented “Dark Mode” on his site. Thanks to CSS Custom Properties the implementation is pretty straightforward (also see my writeup here).
But of course, Jeremy added some extra details that make the difference:
In Dark Mode, images are toned down to make them blend in better, as detailed by Melanie Richards.
Here are two small scroll-snapping carousels that I made. In the top one the items are laid out using CSS Flexbox, whereas the second one uses CSS Grid.
The code also works fine with arbitrarily sized .scroll-items elements; they don’t need to have the same width.