Pose Animator takes a 2D vector illustration and animates the curves it contains in real time based on the recognition results from PoseNet and FaceMesh. It borrows the idea of skeleton-based animation from computer graphics and applies it to vector characters.
The MediaPipe and TensorFlow.js teams have released facemesh and handpose:
The facemesh package infers approximate 3D facial surface geometry from an image or video stream, requiring only a single camera input without the need for a depth sensor. This geometry locates features such as the eyes, nose, and lips within the face, including details such as lip contours and the facial silhouette.
The handpose package detects hands in an input image or video stream, and returns twenty-one 3-dimensional landmarks locating features within each hand. Such landmarks include the locations of each finger joint and the palm.
Once you have one of the packages installed, it’s really easy to use. Here’s an example using facemesh:
import * as facemesh from '@tensorflow-models/facemesh';
// Load the MediaPipe facemesh model assets.
const model = await facemesh.load();
// Pass in a video stream to the model to obtain
// an array of detected faces from the MediaPipe graph.
const video = document.querySelector("video");
const faces = await model.estimateFaces(video);
// Each face object contains a `scaledMesh` property,
// which is an array of 468 landmarks.
faces.forEach(face => console.log(face.scaledMesh));
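The handpose package follows the same pattern: load the model, then call estimateHands(). A sketch along those lines, again assuming a video element on the page:
import * as handpose from '@tensorflow-models/handpose';
// Load the MediaPipe handpose model assets.
const model = await handpose.load();
// Pass in a video stream to the model to obtain
// an array of detected hands.
const video = document.querySelector("video");
const hands = await model.estimateHands(video);
// Each hand object contains a `landmarks` property,
// which is an array of 21 [x, y, z] keypoints.
hands.forEach(hand => console.log(hand.landmarks));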
Both packages run entirely within the browser, so data never leaves the user’s device.
Be sure to check the demos as they’re quite nice. I did notice that the handpose demo only shows one hand, even though the library can detect more than one.
Sparked by that tweet, João Miguel Cunha set out to play with it. The code itself is pretty simple: create an instance of FaceDetector and call .detect() on it. That promise eventually resolves with an array of detected faces.
// Grab an image element from the page to run detection on.
const image = document.querySelector("img");

const faceDetector = new FaceDetector();
faceDetector.detect(image)
  .then(faces => faces.forEach(face => console.log(face)))
  .catch(e => {
    console.error("Boo, Face Detection failed: " + e);
  });
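Note that FaceDetector is part of the experimental Shape Detection API and isn’t available in every browser, so a quick feature check is a sensible guard. A minimal sketch (the options shown are optional hints to the detector):
if ("FaceDetector" in window) {
  // fastMode and maxDetectedFaces are optional hints to the detector.
  const faceDetector = new FaceDetector({ fastMode: true, maxDetectedFaces: 5 });
  const image = document.querySelector("img");
  faceDetector.detect(image)
    .then(faces => console.log(`Detected ${faces.length} face(s)`))
    .catch(e => console.error("Face Detection failed: " + e));
} else {
  console.warn("FaceDetector is not supported in this browser.");
}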
Interesting project by Russian photographer Egor Tsvetkov, in which he took photos of random, anonymous people riding the subway and then ran them through a face recognition app named FindFace. The result: 70% of those photographed could be linked to one or more of their social network profiles, thus un-anonymizing them.
We present a method for the real-time transfer of facial expressions from an actor in a source video to an actor in a target video, thus enabling the ad-hoc control of the facial expressions of the target actor. The novelty of our approach lies in the transfer and photo-realistic re-rendering of facial deformations and detail into the target video in a way that the newly-synthesized expressions are virtually indistinguishable from a real video.
Wow. On the Internet, nobody knows you’re a dog … or someone else with your facial expressions transferred onto them.
headtrackr is a JavaScript library for real-time face tracking and head tracking. It tracks the position of a user’s head in relation to the computer screen, via a web camera and the WebRTC/getUserMedia standard.
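A rough usage sketch based on the library’s Tracker API (element ids here are arbitrary; headtrackr.js, a video element, and a hidden canvas are assumed to be present in the page):
// Assumes <video id="inputVideo"> and a hidden <canvas id="inputCanvas"> exist in the page.
const videoInput = document.getElementById("inputVideo");
const canvasInput = document.getElementById("inputCanvas");

// Start grabbing frames from the webcam and tracking the head.
const htracker = new headtrackr.Tracker();
htracker.init(videoInput, canvasInput);
htracker.start();

// Head position updates (relative to the screen) are dispatched on document.
document.addEventListener("headtrackingEvent", function (event) {
  console.log(event.x, event.y, event.z);
});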
Pinokio is an exploration into the expressive and behavioural potentials of robotic computing. Customized computer code and electronic circuit design imbues Lamp with the ability to be aware of its environment, especially people, and to express a dynamic range of behaviour. As it negotiates its world, we, the human audience, can see that Lamp shares many traits possessed by animals, generating a range of emotional sympathies. In the end we may ask: Is Pinokio only a lamp? – a useful machine? Perhaps we should put the book aside and meet a new friend.
Inspired by Pixar. Built using Processing, Arduino, and OpenCV.