Debugging Intelligent Tracking Prevention in Safari

Simo Ahava:

The purpose of ITP is to prevent tracking tools’ access to data stored and processed in the browser. This involves things like blocking all third-party cookies and restricting the lifetime of first-party cookies.

In this article, I want to show you how to use the ITP Debug Mode. It’s a console logger that outputs all the actions taken by Intelligent Tracking Prevention on the user’s device or browser.

You can enable the logger using the “Enable Intelligent Tracking Prevention Debug Mode” option in the Debug Menu.
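
As an example of the kind of restriction you'll see logged: since ITP 2.1, Safari caps the lifetime of cookies written from JavaScript via document.cookie to 7 days, no matter which expiry you ask for. A minimal sketch (the visitorId cookie name is made up):

// Ask for a cookie that lives for a full year …
document.cookie = `visitorId=abc123; max-age=${60 * 60 * 24 * 365}; path=/`;

// … but Safari with ITP caps cookies written via document.cookie
// to a 7-day lifetime, so this one expires after a week at most.
// With Debug Mode enabled, ITP's classifications and actions
// show up in the console log as they happen.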

ITP Debug Mode in Safari →

Why The Web Is Such A Mess

Video by Tom Scott on third-party cookies, tracking, and whatnot.

Pose Animator – Animate SVG Illustrations using your Camera

This is crazy:

Pose Animator takes a 2D vector illustration and animates its containing curves in real-time based on the recognition result from PoseNet and FaceMesh. It borrows the idea of skeleton-based animation from computer graphics and applies it to vector characters.

Built on top of PoseNet to track the body’s pose, and the aforementioned FaceMesh to track the face.
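
To get a feel for the PoseNet side, here's a minimal sketch of estimating a single pose from a webcam <video> element, using the @tensorflow-models/posenet package:

import * as posenet from '@tensorflow-models/posenet';

// Load the PoseNet model.
const net = await posenet.load();

// Estimate a single pose from a playing <video> element.
const video = document.querySelector("video");
const pose = await net.estimateSinglePose(video, { flipHorizontal: false });

// Each keypoint is a named body part with a position and a
// confidence score — the joints a skeleton-based rig can drive.
pose.keypoints.forEach(({ part, position, score }) => {
  console.log(part, position, score);
});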

Pose Animator →
Pose Animator Demos →

Realtime Face and Hand Tracking in the browser with TensorFlow

The MediaPipe and TensorFlow.js teams have released facemesh and handpose:

The facemesh package infers approximate 3D facial surface geometry from an image or video stream, requiring only a single camera input without the need for a depth sensor. This geometry locates features such as the eyes, nose, and lips within the face, including details such as lip contours and the facial silhouette.

The handpose package detects hands in an input image or video stream, and returns twenty-one 3-dimensional landmarks locating features within each hand. Such landmarks include the locations of each finger joint and the palm.

Once you have one of the packages installed, it’s really easy to use. Here’s an example using facemesh:

import * as facemesh from '@tensorflow-models/facemesh';

// Load the MediaPipe facemesh model assets.
const model = await facemesh.load();
 
// Pass in a video stream to the model to obtain 
// an array of detected faces from the MediaPipe graph.
const video = document.querySelector("video");
const faces = await model.estimateFaces(video);
 
// Each face object contains a `scaledMesh` property,
// which is an array of 468 landmarks.
faces.forEach(face => console.log(face.scaledMesh));

The output will be a prediction object:

{
    faceInViewConfidence: 1,
    boundingBox: {
        topLeft: [232.28, 145.26], // [x, y]
        bottomRight: [449.75, 308.36],
    },
    mesh: [
        [92.07, 119.49, -17.54], // [x, y, z]
        [91.97, 102.52, -30.54],
        ...
    ],
    scaledMesh: [
        [322.32, 297.58, -17.54],
        [322.18, 263.95, -30.54]
    ],
    annotations: {
        silhouette: [
            [326.19, 124.72, -3.82],
            [351.06, 126.30, -3.00],
            ...
        ],
        ...
    }
}
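
The handpose package works the same way. A minimal sketch, analogous to the facemesh example above:

import * as handpose from '@tensorflow-models/handpose';

// Load the MediaPipe handpose model assets.
const model = await handpose.load();

// Pass in a video stream to obtain an array of detected hands.
const video = document.querySelector("video");
const hands = await model.estimateHands(video);

// Each hand object contains a `landmarks` property:
// an array of 21 [x, y, z] landmarks (finger joints and palm).
hands.forEach(hand => console.log(hand.landmarks));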

Both packages run entirely within the browser so data never leaves the user’s device.

Be sure to check the demos as they’re quite nice. I did notice that the handpose demo only shows one hand, even though the library can detect more than one.

Face and hand tracking in the browser with MediaPipe and TensorFlow.js →
facemesh Demo →
handpose Demo →

How ads follow you around the internet

A video version of How tracking pixels work, by Vox:

In this video, we explain how cookies work and what you should know about how they’re being used. And we get a little help from the man who invented them.

Spot-on “Finding Dory” analogy. One place where they go off track a bit is in somewhat implying that all data (such as shopping cart contents) is stored in the cookies themselves. That’s not the case: the cart contents are – or at least should be – stored on the server. Only the cart’s ID (or your visitor/session ID) is stored in the cookie; the server-side code then uses that ID to fetch your cart contents.
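
To make that concrete, here's a hypothetical server-side sketch (Express-style, all names made up) of what that lookup could look like:

import express from 'express';
import cookieParser from 'cookie-parser';

const app = express();
app.use(cookieParser());

// Stand-in for a real database of carts, keyed by ID.
const cartStore = new Map();

app.get('/cart', (req, res) => {
  // The cookie only carries an identifier, e.g. cartId=abc123 …
  const cartId = req.cookies.cartId;
  // … and the actual contents are looked up server-side.
  const cart = cartStore.get(cartId);
  res.json(cart ?? { items: [] });
});

app.listen(3000);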

Also: I would have loved to see a few examples mentioning actual domain names, to make it all a bit clearer.

How tracking pixels work

Julia Evans:

I spent some time talking to a reporter yesterday about how advertisers track people on the internet. We had a really fun time looking at Firefox’s developer tools together (I’m not an internet privacy expert, but I do know how to use the network tab in developer tools!) and I learned a few things about how tracking pixels actually work in practice!
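
The mechanism itself is tiny: a tracking pixel is just an image request whose URL carries data to the tracker, with the tracker's cookies riding along. A sketch (domain and parameters made up):

// The browser requests a 1×1 image from the tracker's domain;
// any cookies it has for that domain are sent along automatically,
// and the query string carries page-level data.
const pixel = new Image(1, 1);
pixel.src = 'https://tracker.example/pixel.gif' +
  '?page=' + encodeURIComponent(location.href) +
  '&ref=' + encodeURIComponent(document.referrer);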

How tracking pixels work →

💡 You might also want to read up on first- and third-party cookies

GravitySpace

GravitySpace is a new approach to tracking people and objects indoors. Unlike traditional solutions based on cameras, GravitySpace reconstructs scene data from a pressure-sensing floor. While the floor is limited to sensing objects in direct contact with the ground, GravitySpace reconstructs contents above the ground by first identifying objects based on their texture and then applying inverse kinematics.

GravitySpace: Tracking Users and Their Poses in a Smart Room Using a Pressure-Sensing Floor →

(via Teusje)