In the past few years, voice assistants have been playing an increasing role in the meal journey. Take Google, for example; its voice assistant has developed a guided cooking platform, made integration partnerships with big appliance brands, and even added the ability to make eerily realistic-sounding restaurant reservations.
As a Senior Interaction Designer for Google, Geoff Barnes works to constantly improve the user experience with the Google Assistant. Next week he’ll take the stage at the Smart Kitchen Summit (SKS) to speak about how voice can play a helpful role in the connected kitchen. To warm up, we asked him about the uncanny valley, the future of voice technology, and the task he uses his Google Assistant for every single day.
The Q&A has been edited for clarity.
The Spoon: As voice recognition and AI become more and more realistic, how will you develop user experience (UX) to avoid the uncanny valley?
Geoff Barnes: The problem with the uncanny valley, as Masahiro Mori described it, lay in people’s discomfort with the artificial imitating the real and getting uncannily close. But he was dealing with physical robots, and the chasm between an obvious robot and a human being is both hugely multidimensional compared to a voice assistant, and broader than I think most people imagine when they conceptualize the uncanny valley in terms of the ubiquitous chart.
In voice user interface (VUI) — especially in the age of autotune and after decades of AI voices in entertainment and interactive voice response (IVR) in everyday life — I think we’ve both narrowed and made shallow the uncanny valley. The result is that Siri, Alexa, and the Google Assistant sit perched to its left. At this point in history, I think the salient question is less about how to avoid getting stuck in the uncanny valley and more about deciding on which side of it your product can better serve your users.
What are some unexpected things you think about when designing UX for Google Assistant?
What comes to mind first is that I do all sorts of things to keep from focusing on how a sample dialog looks in writing. When you’re designing for voice, what matters is how things sound spoken. In school, we all learn not to write the way we talk. Yet that’s exactly what we want of the Assistant: conversationality. We want to make computers talk like humans — not the other way around. So I read all dialog aloud. I’ll pull colleagues aside to do table-reads with me. We have text-to-speech (TTS) simulators that can play our text in the Assistant’s voice.
Another: I try to think about all the possible ways in which normal use might break the experience. For instance, Google Home has pretty great beam-forming [Ed note: beam-forming is technology which allows it to pick up your voice amid extraneous noise], but the mic still picks up cross-talk and ambient sounds. If something the user says is polluted by stray words, the Assistant needs to be able to help with conversation repair. The same goes for users responding in snippets, sentence fragments, and other partial structures where their intent might not be fully revealed, and thus might not be recognized. Part of human conversation is natural repair in situations like these, so we spend a lot of time designing to emulate that in the Google Assistant.
What role do you envision voice assistants playing in the homes — and kitchens — of the future?
Right now, a lot of what we see people using virtual assistants for is task completion, and I think that will continue. As virtual assistants evolve, though, we’ll see the complexity of tasks they can handle increase, and people will put them to more complicated and valuable use. Imagine virtual assistants that can interpret and address really complex requests, can carry on multiple conversational threads at once without losing track of contexts, and can work with you on a range of time-scales. Suddenly, you’re not talking about a fact-checking timer in the kitchen with you; you’re talking about a presence that can be a communications hub, keep you company while guiding you through complex recipes and routines, and do things like plan your date night — from booking dinner reservations, to hiring the babysitter, to ordering flowers.
What’s something you use your Google Assistant for every single day?
I like to drink good coffee in the morning, and for nearly 10 years, that meant I’d get up and make a Chemex. Friends thought I was a little crazy for not having a pre-set automatic coffee maker, but every automatic coffee maker I’d tried had two problems: First, it made coffee that smelled and tasted like burning plastic, and second, most burners are so hot that they scald the coffee within a few minutes of brewing. So you have to make sure you’re awake right when you told the coffee maker you would be, or else your plastic-flavored coffee will be even further burnt by the time you get to it.
Enslaving myself to a timer for a bad cup of coffee didn’t sound that great. With the advent of smart switches and Google Assistant integrations, though, I found a great solution: I got a Technivorm Moccamaster (which makes delicious coffee) and plugged it into a Wemo switch that I named “the coffee maker”. Now, every morning when I wake up — no matter whether I’m up early or I’ve overslept — I tell my Google Home, “Hey Google, start the coffee maker,” and in 5 minutes I have a perfect pot of coffee waiting for me two floors away. It’s a real first-world life-changer, but it’s a life-changer nonetheless.
—
Thanks, Geoff! If you want to see him speak more about Google’s work on connected kitchen platforms, snag your tickets to the Smart Kitchen Summit on October 8–9 in Seattle.