There’s no doubt that voice assistants like Alexa and Google Assistant have had a huge impact on the way we search for recipes, control our appliances and add items to our grocery lists.
But what if that voice assistant had a contextual understanding of where you were looking while you cooked the evening meal?
That’s the big idea behind a new prototype from Synapse, a division of Cambridge Consultants. The Hobgoblin technology concept uses machine vision to determine where a person is looking when they issue a voice command, then applies that context to the cooking experience.
As Synapse describes it: “We have been exploring the use of computer-vision-based sensing as context, and for this cooktop demonstration we augmented the VUI with gaze tracking to create what feels like a magical interaction. The cooktop infers which burner is being addressed in a voice command by using its camera to track which burner you’re looking at. This way, when the system detects a person standing in front of it looking at a burner, commands can omit the burner designation (e.g. ‘turn that burner on’) or simply state a level like ‘medium high.’”
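Synapse hasn’t published Hobgoblin’s code, but the resolution logic it describes is easy to picture. Here’s a minimal Python sketch, assuming a vision model that emits a burner ID and a confidence score; the `GazeEstimate` type, `resolve_burner` function and threshold are all hypothetical names, not part of the actual system:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch only: Synapse has not published Hobgoblin's
# implementation, so every name below is an assumption.

@dataclass
class GazeEstimate:
    burner_id: Optional[str]  # burner the camera thinks the user is watching
    confidence: float         # 0.0-1.0 score from the vision model

BURNERS = {"front left", "front right", "back left", "back right"}

def resolve_burner(command: str, gaze: GazeEstimate,
                   min_confidence: float = 0.8) -> Optional[str]:
    """Decide which burner a voice command refers to.

    An explicit mention ("front left") always wins; otherwise fall back
    to the burner the user is looking at, if the gaze estimate is
    confident enough. Returning None lets the assistant ask a follow-up.
    """
    for name in BURNERS:
        if name in command.lower():
            return name  # the user named the burner outright
    if gaze.burner_id and gaze.confidence >= min_confidence:
        return gaze.burner_id  # "turn that burner on" resolved by gaze
    return None

# The user says "medium high" while looking at the front right burner.
gaze = GazeEstimate(burner_id="front right", confidence=0.93)
print(resolve_burner("medium high", gaze))  # -> front right
```

The interesting design question is the confidence threshold: below it, a well-behaved assistant should ask “which burner?” rather than guess and heat the wrong pan.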
In the concept video, a user is cooking and says, “Alexa, turn up the heat.” Using a camera built into the cooktop, Alexa can infer that the user is cooking because they are looking at the cooktop.
There’s no doubt that the ability to combine a contextual understanding of what is happening in a room with the commands given to digital assistants like Alexa could enable much more powerful “smart kitchen” scenarios. One could easily imagine layering in other signals to create even more contextually relevant experiences, such as facial recognition that applies a personalized knowledge base of a person’s cooking skills or favorite recipes.
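To make that idea concrete, here is an equally hypothetical sketch of how several context signals, say a recognized face and a gaze target, might be merged before a terse command is interpreted. None of these names or behaviors come from Synapse; this is speculation about the pattern, not their design:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class KitchenContext:
    user_id: Optional[str] = None       # from hypothetical facial recognition
    gazed_burner: Optional[str] = None  # from hypothetical gaze tracking
    favorite_recipes: List[str] = field(default_factory=list)

def interpret(command: str, ctx: KitchenContext) -> str:
    """Map a terse voice command plus ambient context onto an action."""
    command = command.lower()
    if "turn up the heat" in command and ctx.gazed_burner:
        return f"increase heat on the {ctx.gazed_burner} burner"
    if "my usual" in command and ctx.user_id and ctx.favorite_recipes:
        return f"start '{ctx.favorite_recipes[0]}' for {ctx.user_id}"
    return "ask the user to clarify"

ctx = KitchenContext(user_id="alice", gazed_burner="front right",
                     favorite_recipes=["weeknight stir-fry"])
print(interpret("Turn up the heat", ctx))
# -> increase heat on the front right burner
```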
You can see the Hobgoblin demo in action below: