The fruits of advances in visual search technology offer great promise for the future of food in a wide range of applications.
By combining technologies such as computer vision, machine learning, and artificial intelligence with the ability to delve deeply into tags and other metadata, innovators can build applications that identify food properties. These can help build healthy eating habits or even speed up the line at your average fast food restaurant. KFC in China is working on a facial-recognition payment system designed to move people through long lines at record pace. Ant, a financial subsidiary of Alibaba, can identify a person in two seconds and match them with a photo ID stored on the Alipay network. Ant says its algorithm makes the system nearly impossible to spoof with a photo instead of a live shot, and that it can identify an individual even if he or she is wearing heavy makeup.
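In broad strokes, face-recognition payment systems like Ant's reduce each face image to a numeric embedding and approve a transaction only when the live capture is close enough to the stored ID photo. A minimal sketch of that matching step, assuming the embeddings have already been extracted by a face model (the function names and the 0.8 threshold are illustrative, not Ant's actual values):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(live_embedding, id_embedding, threshold=0.8):
    """Approve the payment only if the live capture matches the ID photo.

    The threshold is a hypothetical value; a production system tunes it to
    balance false accepts against false rejects.
    """
    return cosine_similarity(live_embedding, id_embedding) >= threshold
```

The anti-spoofing and makeup-robustness claims live inside the embedding model itself, which must map a printed photo or a made-up face far enough away from the genuine live capture.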
The long arm of visual search can touch the food industry in ways that harness the power of the smart home to great effect. Amazon, for example, offers its Rekognition service, which uses deep learning and a massive database to identify images, including pictures of food. Rekognition's API is available to developers, who could apply its power to many smart kitchen appliances, such as the family fridge. Take Samsung's smart fridge as an example. Its ability to see what's inside the appliance, take a recipe, and identify what needs to be purchased could be paired with a visual-recognition database. That shopping list could then be passed on to Alexa (which pairs with Samsung's appliances), and within hours everything a cook needs to make a special dinner is on hand.
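For developers, labeling a fridge-camera photo with Rekognition comes down to a single DetectLabels call through the AWS SDK. A minimal sketch in Python with boto3 (the helper names are illustrative, and the API call requires configured AWS credentials):

```python
def detect_food_labels(image_bytes, min_confidence=70.0):
    """Send an image to Rekognition's DetectLabels and return the label names.

    Requires the boto3 package and configured AWS credentials.
    """
    import boto3  # imported here so the parsing helper below works without it

    client = boto3.client("rekognition")
    response = client.detect_labels(
        Image={"Bytes": image_bytes},
        MaxLabels=10,                  # cap the number of labels returned
        MinConfidence=min_confidence,  # drop low-confidence guesses
    )
    return label_names(response)

def label_names(response):
    """Extract just the label names from a DetectLabels response."""
    return [label["Name"] for label in response["Labels"]]
```

A photo of a fridge shelf might come back as labels like "Banana", "Fruit", "Food", which an appliance could then diff against a recipe's ingredient list to build the shopping list.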
While many of the culinary applications of advanced visual search are a bit "blue sky" in nature, especially as they apply to the smart home, there are enough efforts in the market to show the concept's promise. Pinterest and Google have each developed what could be called "Shazam for food." Pinterest's Lens lets users snap a photo of a food item and bring up recipes that use that ingredient. As with other visual search products, Pinterest's Lens is still in learning mode, not yet able to reliably distinguish a yellow squash from a banana.
Google's visual search product, also called Lens, was announced at its May developer conference and is not yet available. Google plans to incorporate Lens into its smart home portfolio through Google Assistant.
And then you have an idea that looks good on paper but perhaps has too many moving parts to be viable on a mass scale. SmartPlate is a countertop device built around a specially designed plate/tray combo and a companion app that weighs and identifies your food, then tracks your dietary intake. The plate uses built-in cameras to photograph the food and matches the contents against a database in the cloud. The results of a search can be saved to a personal journal or synced to other devices such as a smartwatch.
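Conceptually, the matching step in a device like SmartPlate is a nearest-neighbor lookup: the photo is reduced to an embedding, compared against reference embeddings in the cloud database, and the best match is combined with the scale's weight reading to produce a journal entry. A toy sketch of that flow (all names, embeddings, and calorie figures below are illustrative, not SmartPlate's actual data):

```python
def nearest_food(query_embedding, database):
    """Return the food in `database` whose reference embedding is closest.

    `database` maps food names to embeddings; in a product like SmartPlate
    this lookup would run against a cloud-hosted index, not a local dict.
    """
    def distance(a, b):
        # squared Euclidean distance between two embeddings
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(database, key=lambda name: distance(query_embedding, database[name]))

def journal_entry(food, grams, calories_per_100g):
    """Combine the identified food with the plate's weight reading."""
    return {"food": food, "grams": grams,
            "calories": round(grams * calories_per_100g[food] / 100)}
```

The moving parts the article worries about are visible even in this sketch: the camera, the scale, the embedding model, the cloud database, and the journal sync all have to agree before a single entry is logged.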
How readily visual search reaches the core of the smart kitchen will depend on how easily the individual pieces of this puzzle (and their owners) fit together. Multiple technologies, an image database, hardware, and a delivery mechanism are just a few of the elements that need to come together. Even Amazon, which owns many of the necessary components, won't be able to crack the visual search code alone. The speed at which valuable partnerships can be built will dictate how this market fares moving forward.