Football wide receivers who can catch the ball well are said to have soft hands. Food robots that use too much force grabbing delicate produce are, well, just being robots.
But now, robot system designers can turn their food robot into a veggie-shuffling Jerry Rice with the new mGrip “hand” from Soft Robotics.
The mGrip is part of a new SoftAI product suite from Soft Robotics that robot designers can add to existing systems to optimize them for handling food like meat and produce in high-volume environments. In addition to a food-grabbing hand, the SoftAI suite includes a “perception module” that pairs cameras with machine vision software, which the company says will add “eye-hand coordination” to industrial robots. The perception module’s on-board processing uses machine learning to categorize and segment different types of food.
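Soft Robotics hasn’t published how the perception module works under the hood, but for the curious, here’s a rough sketch of what “segmenting” items in a bin can look like in code: an off-the-shelf instance-segmentation model locates each item in a camera image so a gripper can be told where to pick. The model choice, confidence threshold, and image path below are my own illustrative assumptions, not the company’s actual pipeline.

```python
# Illustrative sketch only -- not SoftAI's actual software. A generic
# pretrained instance-segmentation model stands in for a food-specific one.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained Mask R-CNN (trained on COCO classes, used here as a stand-in).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def find_pickable_items(image_path, score_threshold=0.7):
    """Return (label_id, confidence, bounding_box) for each detected item."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        prediction = model([image])[0]
    items = []
    for label, score, box in zip(
        prediction["labels"], prediction["scores"], prediction["boxes"]
    ):
        if score >= score_threshold:
            items.append((int(label), float(score), box.tolist()))
    return items

if __name__ == "__main__":
    # "bin_of_produce.jpg" is a hypothetical camera frame of a product bin.
    for label, score, box in find_pickable_items("bin_of_produce.jpg"):
        print(f"class {label} at {box} (confidence {score:.2f})")
```

In a real pick-and-place system, those detections would be fed to motion planning so the arm knows where to position the gripper, which is presumably the “eye-hand coordination” Soft Robotics is talking about.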
Robots have long been used for tasks like packing food, but only in highly structured environments. Often, that meant keeping humans in the loop for jobs like sorting. However, advances in machine vision over the past couple of years mean machines can now take over some of those tasks. With an off-the-shelf product like SoftAI, the kind of capability companies like Amazon have probably spent millions building in-house becomes far more turnkey.
Beyond high-volume warehousing applications, I can also see a platform like SoftAI being used in more consumer-facing food robotic systems. For example, first-generation robotic food kiosks often relied on off-the-shelf robotic arms that required lots of customization to make them work. With food-optimized plug-in hardware like SoftAI, small teams could get to market faster and focus their engineering effort on other problems.
You can see the mGrip and the SoftAI perception module in the video below.