The Spoon

Daily news and analysis about the food tech revolution


Speak, Swipe, Smile: Virtual Assistants Quickly Moving Beyond Voice

by Chris Albrecht
January 19, 2018 (updated January 21, 2018) · Filed under: Interfaces, Smart Home

Amazon’s Alexa plays a central role in my family’s morning routine: she tells us the weather, plays the radio, and sets timers so we can catch the bus. Alexa is great for simple tasks, but she and other smart speakers stumble when it comes to more complicated requests. That’s simply a limitation of having only a voice-and-speaker interface.

But the way we interact with Alexa and Google Home is about to undergo a dramatic shift as those devices add screens and cameras. As they do, we’ll move from talking and listening to our virtual assistants to looking, touching, and smiling to get what we want.

Smart speakers like Amazon’s Echo and Google Home are quickly crossing over into the mainstream. Amazon touted that it sold “tens of millions” of Alexa devices this past holiday season, while rival Google said it sold “tens of millions” of its Home devices throughout last year. According to a recent study by NPR and Edison Research, roughly 39 million American adults (18 and older) own a smart speaker, and 65 percent of those surveyed “wouldn’t want to go back to life without their smart speaker.”

But in a room like the kitchen, voice assistance only gets you so far. Think about trying to put together a recipe by just listening. Sure, you can do it, but listening to a set of instructions is not the best way to make a meal. Enter the kitchen screen, which was a big trend at CES this year, with LG and Samsung building them into their fridges and GE unveiling a giant monitor meant to hang over your oven.

Screens will add a much-needed visual component to smart speakers, turning them into smart displays. Instead of just a voice walking you through a recipe, you can see accompanying photos and videos that demonstrate technique and show what the end product should look like. And since all of these screens are touch-enabled, they will make general tasks like swiping through music and news much easier and faster than saying “next” every time you want to skip ahead.

Both Amazon and Google already recognize this and are adding screens to their smart speakers. Amazon released its Echo Show last year and the smaller Echo Spot in time for the holidays. Instead of making its own device (for now, anyway), Google is building its Assistant into new smart displays from JBL, Lenovo, LG and Sony.

These smart displays will also come with a built-in camera for things like video calling, but eventually these cameras will do more. Touchscreens are good for displaying information, but the “touch” part gets harder in the kitchen when your hands are greasy or covered in cookie dough. The smart display’s camera could then double as a motion sensor: rather than touching the screen, you wave your hand to go back a page, swipe through a list of ingredients, or scrub a video to the exact part you want.
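To make the gesture idea concrete, here’s a minimal sketch in Python of how a display might turn tracked hand positions into navigation commands. Everything here is illustrative: the frame positions are simulated, and the threshold and action names are invented, since a real device would feed in coordinates from its computer-vision pipeline.

```python
# Hypothetical sketch: classify a horizontal hand swipe from a series of
# camera-frame hand positions, then dispatch it to a UI action.
# The threshold and handler names are invented for illustration.

SWIPE_THRESHOLD = 0.25  # fraction of frame width the hand must travel

def classify_swipe(x_positions):
    """Given normalized hand x-positions (0.0-1.0) across recent frames,
    return 'swipe_right', 'swipe_left', or None."""
    if len(x_positions) < 2:
        return None
    travel = x_positions[-1] - x_positions[0]
    if travel > SWIPE_THRESHOLD:
        return "swipe_right"
    if travel < -SWIPE_THRESHOLD:
        return "swipe_left"
    return None

# Map gestures to the recipe-display actions described above.
ACTIONS = {
    "swipe_left": lambda: print("Going back a page"),
    "swipe_right": lambda: print("Next ingredient"),
}

def handle_frames(x_positions):
    gesture = classify_swipe(x_positions)
    if gesture in ACTIONS:
        ACTIONS[gesture]()

# Simulated frames: the hand moves across the camera's field of view.
handle_frames([0.2, 0.35, 0.5, 0.7])   # -> Next ingredient
handle_frames([0.8, 0.6, 0.4, 0.3])    # -> Going back a page
```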

In addition to gesture control, the camera could also be used for facial recognition. As each member of the family looks into the smart display, a personalized view of news, messages, reminders and more could appear on the screen.
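As a rough illustration of that personalization step, here’s a hedged sketch assuming the camera pipeline hands the display a numeric face embedding, which gets matched against embeddings enrolled for each household member. The names, vectors, and threshold below are all made up for the example.

```python
import math

# Hypothetical enrolled household members: name -> face embedding.
# Real embeddings come from a face-recognition model; these are toy vectors.
ENROLLED = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

# Per-person content, as described above: news, messages, reminders.
FEEDS = {
    "alice": ["Bus leaves at 8:05", "2 new messages"],
    "bob":   ["Dentist at noon", "Weather: rain"],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(embedding, threshold=0.9):
    """Return the enrolled name whose embedding best matches, or None."""
    best_name, best_score = None, threshold
    for name, enrolled in ENROLLED.items():
        score = cosine_similarity(embedding, enrolled)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# A face embedding close to Alice's walks up to the display.
who = identify([0.88, 0.12, 0.31])
print(FEEDS.get(who, ["Hello! Who are you?"]))
```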

But these cameras won’t just look at our faces; they will also see the contents of our kitchens. LG and Samsung fridges already come with internal cameras that show what food you have and help you label it. Down the road, machine learning, RFID tags, and scent sensors could work in unison to automatically recognize and inventory the food in our fridges and pantries. These sensors would all tie into our virtual assistant of choice to let us know when we’re running low on items, order replacements, and make recipe recommendations.
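Here’s one way that inventory logic could be sketched, assuming each sensor reports a list of items it recognized on its latest scan. The item names, sensors, and baseline stock levels are invented for illustration.

```python
from collections import Counter

# Hypothetical inputs: items recognized by each sensor on the latest scan.
camera_detections = ["milk", "eggs", "eggs", "butter"]
rfid_reads = ["milk", "ketchup"]  # RFID-tagged packaged goods

# The household's usual stock levels; anything below this is "running low".
BASELINE = {"milk": 1, "eggs": 6, "butter": 1, "ketchup": 1}

def take_inventory(*sensor_readings):
    """Fuse item counts from multiple sensors, keeping the highest count
    per item (each sensor sees only a partial view of the kitchen)."""
    inventory = Counter()
    for readings in sensor_readings:
        for item, n in Counter(readings).items():
            inventory[item] = max(inventory[item], n)
    return inventory

def shopping_list(inventory):
    return [item for item, needed in BASELINE.items()
            if inventory.get(item, 0) < needed]

stock = take_inventory(camera_detections, rfid_reads)
print("Running low:", shopping_list(stock))  # -> ['eggs']
```

Taking the maximum count per item is just one simple fusion rule; a real system would also weigh each sensor’s confidence and handle items that only one sensor can detect.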

In this more extreme case, the interface becomes invisible: it simply predicts and presents the information and items we want, with no interaction required at all. And that will definitely be something to smile about.

