Video: Google Brain Director on Creating Robots for a Messy World (Like Kitchens!)

by Chris Albrecht
April 24, 2019
Filed under:
  • Articulate 2019
  • Behind the Bot
  • Robotics, AI & Data

Robots are really good at moving around. They can roam the aisles of grocery stores with ease or use an articulating arm to pour you a cup of coffee. But what robots aren’t good at is understanding the world around them.

“The human world is by definition messy and complicated,” Vincent Vanhoucke, Principal Scientist and Director of Perception and Robotics for the Google Brain Team, told the crowd during his presentation at our recent ArticulATE food robot conference in San Francisco.

Vanhoucke’s team is working on taking the things robots do well — moving around — and marrying that with advancements in computer vision and deep learning to make robots more useful in the messy and complicated real world. And it turns out that food in particular, with its different textures and properties, is quite messy and complicated.

The Google team does this by training robots to pick up objects of various sizes and shapes using deep learning. Through recognition, repetition and reinforcement, the robots develop their own strategies and behaviors for solving problems, such as the inability to pick up a particular object, without a human programming specific solutions.
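The talk doesn’t spell out the exact algorithm, but the recognition-repetition-reinforcement loop Vanhoucke describes maps onto a basic reinforcement-learning pattern. Here’s a deliberately toy sketch in Python; the GraspEnv environment, the discrete positions, and the tabular value estimates are all hypothetical stand-ins, since the real system trains deep networks on camera images from physical robot arms:

```python
import random

class GraspEnv:
    """Stand-in environment: an object appears at a random position."""
    def reset(self):
        self.object_pos = random.randint(0, 9)  # 10 discrete spots
        return self.object_pos                  # the "camera" observation

    def step(self, grasp_pos):
        # Reward 1.0 only when the grasp lands where the object is.
        return 1.0 if grasp_pos == self.object_pos else 0.0

env = GraspEnv()
# Tabular stand-in for a policy network: estimated success rate of
# grasping at each position, given each observation.
q = [[0.0] * 10 for _ in range(10)]
epsilon, lr = 0.2, 0.1  # exploration rate, learning rate

for episode in range(5000):
    obs = env.reset()
    if random.random() < epsilon:
        action = random.randint(0, 9)  # try something new (exploration)
    else:
        action = max(range(10), key=lambda a: q[obs][a])  # best guess so far
    reward = env.step(action)
    # Reinforce: nudge the estimate toward the observed outcome.
    q[obs][action] += lr * (reward - q[obs][action])
```

After enough repetitions, the highest-valued action at each observation converges on the object’s actual position: the grasping strategy emerges from trial and error rather than from a programmer’s hand-coded rule.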

The applications for this can be seen in something like a feeding robot for the disabled. Rather than a “dumb” arm that only scoops food from a predefined area in a bowl and lifts it to a predetermined height, a deep learning-enabled robot can identify different foods on a plate no matter where they are, pick them up, and lift them directly to a person’s mouth.
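As a toy illustration of that contrast, here is a minimal sketch; the class names, coordinates, and detections below are all invented for the example and don’t reflect any real feeding robot’s API:

```python
from dataclasses import dataclass

@dataclass
class FoodItem:
    name: str
    position: tuple  # (x, y) location on the plate, hypothetical units

def dumb_arm_feed():
    # Fixed motion: always scoop the same spot, lift to the same height.
    print("scoop at (0.10, 0.20); lift to height 0.40")

def smart_arm_feed(detected_items, mouth_position):
    # Deep-learning arm: go wherever the vision model found each item.
    for item in detected_items:
        print(f"pick {item.name} at {item.position}; move to {mouth_position}")

# Pretend a vision model returned these detections from a camera frame:
plate = [FoodItem("carrot", (0.12, 0.33)), FoodItem("pasta", (0.40, 0.18))]
smart_arm_feed(plate, mouth_position=(0.00, 0.55))
```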

It’s really fascinating and cutting-edge science, and you should definitely watch the video of his full presentation.

ArticulATE 2019: Using Robotics in Messy Environments (Like Kitchens!)

Be on the lookout for more videos from ArticulATE 2019, coming throughout the week.

