The Spoon

Daily news and analysis about the food tech revolution

Visual Search Holds Great Promise for the Future of Food

by Allen Weiner
September 11, 2017 (updated September 13, 2017)
Filed under:
  • Connected Kitchen
  • Education & Discovery
  • Internet of Food
  • Next-Gen Cooking
  • Robotics, AI & Data

Advances in visual search technology hold great promise for the future of food across a wide range of applications.

Applying technologies such as computer vision, machine learning, artificial intelligence and deep analysis of tags and other metadata, innovators can build applications that help identify food properties. These can be valuable for building healthy eating habits or even for speeding up the line at your average fast food restaurant. KFC in China, for example, is working on a facial-recognition payment system geared to moving people through long waits at record pace. Ant, a financial subsidiary of Alibaba, can identify a person in two seconds and match them with a photo ID that lives on the Alipay network. Ant says its algorithm makes the system nearly impossible to spoof with a photo instead of a live shot, and that it can identify an individual even if he or she is wearing heavy makeup.
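
To make that idea concrete, here is a minimal sketch of the identification step such an app might perform: run a photo of a dish through an off-the-shelf image classifier and read back the top labels with confidence scores. The model choice, the file name lunch.jpg and the overall setup are illustrative assumptions, not how KFC, Ant or anyone else actually does it.

```python
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

# Generic ImageNet classifier; its label set happens to include many foods
# (banana, pizza, broccoli, and so on).
model = MobileNetV2(weights="imagenet")

# "lunch.jpg" is a placeholder for a photo of a meal.
img = image.load_img("lunch.jpg", target_size=(224, 224))
batch = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Print the top three guesses with their confidence scores.
for _, label, score in decode_predictions(model.predict(batch), top=3)[0]:
    print(f"{label}: {score:.2f}")
```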

The long arm of visual search can touch the food industry in ways that harness the power of the smart home with great utility. Amazon, for example, has its Rekognition technology, which uses deep learning and a massive database to identify images, including pictures of food. Rekognition's API is available to developers, who could apply its power to many smart kitchen appliances, such as the family fridge. Take Samsung's smart fridge as an example. Its ability to see what's inside the appliance, take a recipe and identify what needs to be purchased could be married with a visual-recognition database. That shopping list could then be passed on to Alexa (which pairs with Samsung's appliances), and within hours everything a cook needs to make a special dinner is on hand.
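
As a sketch of what that developer-facing piece looks like, the snippet below sends a photo to Rekognition's label-detection endpoint and prints what it sees. The file name, region and confidence threshold are assumptions for illustration; a fridge maker would wire this into its own camera and shopping-list logic.

```python
import boto3

# Rekognition client; region and credentials come from the usual AWS config.
rekognition = boto3.client("rekognition", region_name="us-east-1")

# "fridge_shelf.jpg" stands in for a photo taken by an in-fridge camera.
with open("fridge_shelf.jpg", "rb") as f:
    photo_bytes = f.read()

response = rekognition.detect_labels(
    Image={"Bytes": photo_bytes},
    MaxLabels=10,
    MinConfidence=70,
)

# Labels such as "Milk", "Egg" or "Broccoli" could feed a shopping list
# that gets passed on to Alexa.
for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```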

While many of the culinary applications of advanced visual search are a bit "blue sky" in nature, especially as they apply to the smart home, there are enough efforts in the market to show the concept's promise. Pinterest and Google have each developed what could be called "Shazam for food." Pinterest's version, Lens, allows users to snap a photo of food and bring up recipe images for that ingredient. As with other search products, Pinterest's Lens is still in learning mode, not yet able to distinguish between a yellow squash and a banana.

Google’s visual search product, also called Lens, was announced at its May developers conference and is not yet available. Google plans to incorporate Lens into its smart home portfolio using Google Assistant.

And then there is the idea that looks good on paper but perhaps has too many moving parts to be viable on a mass scale. SmartPlate is a countertop device built around a specially designed plate/tray combo and a companion app. The app weighs and identifies your food, then tracks your dietary intake. The plate uses built-in cameras to photograph the food and matches the contents against a database that lives in the cloud. The results of a search can be saved to a personal journal or synced to other devices such as a smartwatch.
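
The matching step at the heart of a device like this, comparing a photo of what's on the plate against a cloud-side catalog, can be sketched as a nearest-neighbor lookup over image embeddings. Everything below is hypothetical: the embeddings are random placeholders standing in for the output of a real image-embedding model, and the tiny dictionary stands in for SmartPlate's actual database.

```python
import numpy as np

# Placeholder embeddings for a small cloud-side food catalog. In a real
# system these vectors would come from an image-embedding model.
rng = np.random.default_rng(0)
food_db = {
    "grilled chicken": rng.normal(size=128),
    "steamed broccoli": rng.normal(size=128),
    "brown rice": rng.normal(size=128),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(query_embedding, database):
    """Return the catalog entry most similar to the query photo's embedding."""
    return max(database.items(), key=lambda kv: cosine(query_embedding, kv[1]))

# Embedding of the plate camera's photo (here: a noisy copy of one entry).
query = food_db["steamed broccoli"] + rng.normal(scale=0.1, size=128)
name, _ = best_match(query, food_db)
print("Logged to food journal:", name)  # e.g. synced to a smartwatch app
```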

The application of visual search to the core of the smart kitchen will depend on how easily the individual pieces of this puzzle (and who owns them) fit together. Marrying multiple technologies, an image database, hardware and a delivery mechanism are just a few of the requirements. Even Amazon, which owns many of the necessary components, won't be able to crack the visual search code alone. The speed at which valuable partnerships can be built will dictate how this market fares moving forward.


Tagged:
  • AI
  • image recognition
  • image search
  • visual search
