The Spoon

Daily news and analysis about the food tech revolution

ArticulATE Q&A: Google Brain’s Vanhoucke on Robots, AI and Programming vs. Learning

by Chris Albrecht
February 24, 2019 | Filed under:
  • Articulate 2019
  • Behind the Bot
  • Robotics, AI & Data

The hardware of a robot is only as good as the software brain that powers it. This is why we are so excited to have Vincent Vanhoucke, Principal Scientist with the Google Brain team, speak at our upcoming ArticulATE conference.

In addition to being the director of Google’s robotics research efforts, Vanhoucke has spent his career researching artificial intelligence and machine learning. Before we sit down with him at the show, we wanted to give you a little taste of what he’ll be talking about with a brief Q&A that we conducted over email.

If you want to see Vanhoucke in person and be a part of the discussion on the future of food robotics and automation, get your ticket to ArticulATE today!

What is Googley about robots and automation?

There is something new and exciting happening in the world of robotics: thanks to the advances in deep learning of the last few years, we now have vision systems that work amazingly well. It means that robots can see, understand, and interact with the complicated, often messy, and ever-changing human world. This opens up the possibility of a robot helper that understands its environment and physically assists you in your daily life. We are asking ourselves: what could happen if the devices you interact with every day could carry out physical tasks, moving and picking things up — if they could ask “How can I help you?” and do it directly?

What are some broad applications that AI and machine learning are really good at right now, and where is the biggest room for improvement?

Perception at large has made enormous progress: visual understanding, localization, sensing, speech and audio recognition. Much of the remaining challenge is to turn this progress into something actionable: connecting perception and action means understanding the impact of what a robot does on the world around it, and how that relates to what it sees. When a robot operates in a human environment, safety is paramount, but so is understanding social preferences, as well as people’s goals and desires. I believe that enabling robots to learn, as opposed to being programmed, is how we can establish that tight degree of human connection.

Do you need massive amounts of data for training AI, or do you just need the right data?

Improving data efficiency has been a major focus in recent years. In the early days of deep learning, we explored what was possible when lots of data was available. Today, we’re probing whether we can do the same thing with a lot less data. In most cases, the answer is yes. In robotics, we’ve been able to leverage lots of simulated data for instance. We’re also finding new ways to improve systems on the fly, as they operate, by leveraging data-efficient techniques such as self-supervision and meta-learning.

Computer vision + AI is shaping up to be a versatile and powerful combination (spotting diseases on crops, assessing food quality, automating store checkout). Where is this technology headed and what are some untapped uses for it?

If you look at the perception systems that have evolved in animals and humans, they’re largely driven by function: for instance, our own visual system is very good at sensing motion, and at focusing its attention on the entities that we interact with in a scene. Computer vision systems don’t work like that today, because we haven’t closed the loop between sensing and acting. I think that one of the grand challenges of computer vision is how to optimize visual representations for the tasks we care about. Robotics will enable us to close this functional loop, and I expect that we will see the technology improve dramatically as a result.

What is your favorite fictional robot?

Japanese giant robots were a fixture of TV shows when I was a kid in 1980s France. Of the lot, I’m going to have to go with UFO Robot Grendizer out of sheer nostalgia. Today, I find inspiration watching Simone Giertz’s terrific robot contraptions on YouTube.


Tagged:
  • ArticulATE 2019
  • Google
  • robots

© 2016–2025 The Spoon. All rights reserved.
