In this season of Silicon Valley, one story line has housemate and programmer Jian-Yang developing a food recognition app called "See Food." Since the idea was born out of a spitballed pitch for a "Shazam for Food" by Jian-Yang's landlord Erlich Bachman, it's not altogether surprising that when Jian-Yang finally gets around to hacking together the app, it's only good for one thing: telling us whether whatever is in front of the camera is a hot dog or not a hot dog (and yes, the once-fake app is now a real fake app you can download for real in the App Store).
While a "Shazam for Food" pitch seems like the perfect send-up concept for a satirical show about the tech world, the truth is there have been significant advances in machine vision and machine learning in the past few years that make food recognition a very real and potentially useful application.
These advances were on display this month from both Google and Pinterest, as each touted an image recognition service called "Lens." While Pinterest has been working on image search since at least 2015, it rolled out its Lens this past February. In a blog post, Pinterest CEO Evan Sharp highlighted a food use case as an example of how Lens could work.
“You can also use Lens with food,” wrote Sharp. “Just point it at broccoli or a pomegranate to see what recipes come up.”
And this week, after Google launched a similar feature with the same name, Pinterest apparently felt the need to emphasize the food recognition capability of its Lens offering with a new blog post that repeated what Sharp told us in February: users can use Lens to serve up recipe suggestions.
“Our visual discovery technology already recognizes objects in more than 750 categories, and people have been busily pointing Lens beta at everything from lemons to strawberries to find new recipes to try. And now we’re rolling out a way for you to Lens an entire dish and get recipes to recreate the meal.”
Google introduced its Lens image recognition technology last week at its annual developer conference, Google I/O. Not that Google is new to image AI or even food recognition. The company has been working on image search for close to a decade, and in 2015 introduced an app called Im2Calories that estimated the calories in food through image analysis. And while Google didn't highlight any specific food use cases for its version of Lens at I/O, there's no doubt that the company and its partners will explore using image AI to surface information such as recipe recommendations (like Pinterest) or nutritional information (like Im2Calories).
Of course, all of this follows Amazon's recent push into image AI with the debut of its own camera-enabled Echo devices and the continued maturation of its AWS AI service, Rekognition. My guess is that, just as big players have doubled down on voice assistant services, image recognition and analysis has reached a maturation point that makes it ready for consumer applications.
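To give a sense of how little glue code a "hot dog / not hot dog" app needs on top of a service like Rekognition, here's a minimal sketch. It assumes the AWS `boto3` SDK and valid credentials for the real call; the `see_food` function, the "Hot Dog" label name, and the 80% confidence threshold are illustrative choices, not anything Amazon or the show prescribes. The label-filtering logic is demonstrated locally with a hand-written response in Rekognition's `detect_labels` format.

```python
def classify_labels(labels, target="Hot Dog", threshold=80.0):
    """Return 'Hot dog' if any detected label matches the target name
    above the confidence threshold, else 'Not hot dog'.

    `labels` follows the shape of Rekognition's detect_labels response:
    a list of {"Name": str, "Confidence": float} dicts.
    """
    for label in labels:
        if label["Name"] == target and label["Confidence"] >= threshold:
            return "Hot dog"
    return "Not hot dog"


def see_food(image_bytes):
    """Send raw image bytes to AWS Rekognition and classify the result.
    Requires boto3 and AWS credentials; not exercised in the demo below."""
    import boto3

    client = boto3.client("rekognition")
    response = client.detect_labels(Image={"Bytes": image_bytes}, MaxLabels=10)
    return classify_labels(response["Labels"])


# Local demo using a hand-written sample in the same label format:
sample_labels = [
    {"Name": "Food", "Confidence": 99.1},
    {"Name": "Hot Dog", "Confidence": 96.4},
]
print(classify_labels(sample_labels))  # Hot dog
print(classify_labels([{"Name": "Broccoli", "Confidence": 97.2}]))  # Not hot dog
```

The heavy lifting (the actual vision model) lives entirely on the service side, which is exactly why satire aside, a consumer-grade "Shazam for Food" is now a weekend project rather than a research program.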
While clumsy efforts at food recognition like the Smart Plate – as well as two big companies launching image recognition services with the exact same name – make food image recognition a ready topic for satire, the reality is the technology is reaching a point of maturity and usefulness that maybe – just maybe – we'll soon have a Shazam for Food that consumers really want.