Robots are really good at moving around. They can easily roam the aisles of a grocery store or maneuver an articulated arm to pour you a cup of coffee. But what robots aren't good at is understanding the world around them.
"The human world is by definition messy and complicated," Vincent Vanhoucke, Principal Scientist and Director of Perception and Robotics for the Google Brain Team, told the crowd during his presentation at our recent ArticulATE food robotics conference in San Francisco.
Vanhoucke's team is working on taking the thing robots do well (moving around) and marrying it with advances in computer vision and deep learning to make robots more useful in that messy and complicated real world. And it turns out that food in particular, with its varied textures and physical properties, is especially messy and complicated.
The Google team does this by using deep learning to train robots to pick up objects of various sizes and shapes. Through recognition, repetition and reinforcement, the robots develop their own strategies and behaviors for solving problems (such as how to grasp an object they initially can't pick up) without a human programming specific solutions.
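To make the idea concrete, here's a toy sketch of that recognition-repetition-reinforcement loop in Python. Everything in it (the object types, the candidate grasp angles, the reward) is a hypothetical stand-in for illustration, not Google's actual setup, which trains deep neural networks on real robot arms.

import random

# Hypothetical objects, each with a hidden "best" grasp angle in degrees.
OBJECT_TYPES = {"mug": 30, "bottle": 90, "spatula": 150}
ANGLES = list(range(0, 180, 30))  # candidate grasp angles the robot can try

def try_grasp(obj_type, angle):
    """Reward 1 for a successful pick-up, 0 for a miss."""
    return 1 if abs(angle - OBJECT_TYPES[obj_type]) <= 15 else 0

# One running success estimate per (recognized object, candidate grasp) pair.
values = {(o, a): 0.0 for o in OBJECT_TYPES for a in ANGLES}
counts = {k: 0 for k in values}

for episode in range(3000):
    obj = random.choice(list(OBJECT_TYPES))          # recognition
    if random.random() < 0.1:
        angle = random.choice(ANGLES)                # explore a new strategy
    else:
        angle = max(ANGLES, key=lambda a: values[(obj, a)])  # exploit what works
    reward = try_grasp(obj, angle)                   # repetition: another attempt
    counts[(obj, angle)] += 1
    # Reinforcement: nudge the estimate toward the observed outcome.
    values[(obj, angle)] += (reward - values[(obj, angle)]) / counts[(obj, angle)]

for obj in OBJECT_TYPES:
    best = max(ANGLES, key=lambda a: values[(obj, a)])
    print(obj, "-> learned grasp angle", best)

No one wrote down "grasp a mug at 30 degrees" anywhere; the robot arrives at that strategy itself by trying, failing and reinforcing what succeeds, which is the core of the approach Vanhoucke described.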
The applications for this can be seen in something like a feeding robot for people with disabilities. Rather than a "dumb" arm that only scoops food from a predefined area in a bowl and lifts it to a predetermined height, a deep-learning-enabled robot can identify different foods on a plate no matter where they are, pick them up and lift them directly to a person's mouth.
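For a rough sense of that contrast, here's a minimal sketch with stubbed-out perception and motion calls. The Arm class, detect_food() and all the coordinates are hypothetical placeholders, not a real robot API.

class Arm:
    """Trivial stand-in that just logs commands (hypothetical API)."""
    def scoop(self, x, y): print(f"scoop at ({x}, {y})")
    def lift(self, height): print(f"lift to {height} m")
    def grasp(self, x, y): print(f"grasp at ({x}, {y})")
    def move_to(self, pos): print(f"move to {pos}")

def detect_food(image):
    """Stand-in for a deep-learning detector: returns food locations
    wherever the items actually sit on the plate."""
    return [(0.12, 0.30), (0.45, 0.22)]  # hypothetical detections

def feed_fixed(arm):
    # "Dumb" choreography: same scoop spot and lift height every time,
    # regardless of where the food actually is.
    arm.scoop(0.0, 0.0)
    arm.lift(0.30)

def feed_perception_driven(arm, image, mouth_pos):
    # Deep-learning version: grasp whatever the detector found, wherever
    # it is, and carry it to the person's actual mouth position.
    for x, y in detect_food(image):
        arm.grasp(x, y)
        arm.move_to(mouth_pos)

arm = Arm()
feed_fixed(arm)
feed_perception_driven(arm, image=None, mouth_pos=(0.2, 0.5, 0.4))

The difference is where the intelligence lives: the fixed version encodes the whole task in hard-coded coordinates, while the perception-driven version lets the vision model decide where to act.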
It’s really fascinating and cutting-edge science, and you should definitely watch the video of his full presentation.
Be on the lookout for more videos from ArticulATE 2019, coming throughout the week.