Live Caption yourself using Azure Cognitive Services and Ably Realtime

Very cool project by Jo Franchetti in which she created a live captioning service.

She uses a webpage on her phone to capture her speech with getUserMedia(), and sends the audio to Azure Cognitive Services’ “Speech to Text” service to get the transcribed text back. That text is then relayed over Ably Realtime and eventually ends up on a wearable flexible LED display.
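
To get a feel for what the browser side of that pipeline involves, here’s a minimal sketch using the Azure Speech SDK for JavaScript (microsoft-cognitiveservices-speech-sdk) and the Ably JS client. The subscription key, region, and channel name are placeholders, and the code approximates the approach rather than reproducing Jo’s actual implementation:

```js
// Minimal sketch: recognise speech in the browser and publish captions over Ably.
// SPEECH_KEY, SPEECH_REGION, ABLY_KEY, and the "captions" channel name are placeholders.
import * as SpeechSDK from 'microsoft-cognitiveservices-speech-sdk';
import * as Ably from 'ably';

const ably = new Ably.Realtime('ABLY_KEY');
const channel = ably.channels.get('captions');

const speechConfig = SpeechSDK.SpeechConfig.fromSubscription('SPEECH_KEY', 'SPEECH_REGION');
speechConfig.speechRecognitionLanguage = 'en-GB';

// Uses getUserMedia() under the hood to grab the phone's microphone
const audioConfig = SpeechSDK.AudioConfig.fromDefaultMicrophoneInput();
const recognizer = new SpeechSDK.SpeechRecognizer(speechConfig, audioConfig);

// Interim results while speaking, so the captions feel "live"
recognizer.recognizing = (_sender, event) => {
  channel.publish('caption', { text: event.result.text, final: false });
};

// Final result once a phrase is complete
recognizer.recognized = (_sender, event) => {
  if (event.result.reason === SpeechSDK.ResultReason.RecognizedSpeech) {
    channel.publish('caption', { text: event.result.text, final: true });
  }
};

recognizer.startContinuousRecognitionAsync();
```

On the display side, another client would simply subscribe to the same channel and render incoming caption messages on the LED matrix.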

Making a wearable live caption display using Azure Cognitive Services and Ably Realtime →

💡 This looks like a perfect use case for the Web Speech API. Unfortunately, that API is Chrome-only right now …
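
For comparison, the equivalent recognition step with the Web Speech API fits in a few lines. Note that webkitSpeechRecognition is the prefixed constructor Chrome exposes today, and publishing the result to Ably would replace the console.log in this sketch:

```js
// Rough equivalent using the Web Speech API (Chrome-only at the time of writing).
const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();
recognition.continuous = true;      // keep listening across phrases
recognition.interimResults = true;  // emit partial results while speaking

recognition.onresult = (event) => {
  // Concatenate everything recognised so far into one caption string
  const text = Array.from(event.results)
    .map((result) => result[0].transcript)
    .join(' ');
  console.log(text); // publish to Ably here instead of logging
};

recognition.start();
```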
