
The Spoon

Daily news and analysis about the food tech revolution


computer vision

February 3, 2025

Bonsai’s Announcement Shows Momentum for ‘Physical AI’ in Food & Ag Continues Post-CES

While NVIDIA has taken a beating over the last couple of weeks following the industry-shaking release of DeepSeek, CEO Jensen Huang’s CES talk about how AI models are now extending to help us better understand the physical world continues to be a tailwind for startups leveraging AI for robotics, computer vision systems, and more across a variety of industries.

Including agriculture. Sure, Bonsai’s round was probably nailed down before CES, but the announcement’s emphasis on physical AI was undoubtedly influenced by the buzz coming out of the big tech show. Bonsai, which makes AI-driven autonomous systems for harsh farming environments, announced last week that it has secured $15 million in Series A funding to enhance its software, expand its platform, and accelerate commercialization.

The company’s flagship technology, Visionsteer, enables autonomous navigation and data analysis in orchards, even in challenging conditions such as dust, darkness, and uneven terrain. The company says it has over 40 deployed units and has collected data from more than 500,000 acres, which it says translates into lower costs, increased yields, and operational insights previously unavailable through traditional farming methods.

You can check out the company’s hero reel of its computer vision below.

Bonsai 2024 Introduction Video

April 7, 2021

Trax Raises $640M for its Computer Vision-based Grocery Inventory Management

Trax, a computer vision company that helps physical retailers and CPG companies with inventory management, announced today that it has raised a $640 million Series E round of funding. The round was led by SoftBank Vision Fund 2 and funds managed by existing investor BlackRock. Other investors include OMERS and Sony Innovation Fund by IGV. This brings the total amount of funding raised by Trax to more than $1 billion.

The Trax system uses a combination of camera installations, shelf-scanning robots, and computer vision to monitor products on store shelves. Product images are sent to Trax’s cloud-based machine learning system, which analyzes them to identify when inventory is low on store shelves or when items are misplaced.

Trax is also used by CPG companies to help them audit store shelves to ensure they are getting the placement they paid for.

Trax, which launched its Retail Watch service in the U.S. back in October of 2020, isn’t the only company using computer vision to help stores manage inventory. Cashierless checkout systems like Grabango and Trigo promise similar, constant shelf monitoring and visibility through camera installations and advanced computer vision. And robots from Simbe and Bossa Nova offer less installation-intensive solutions.

Trax is certainly striking while the funding iron is hot. Grocery-related startups have raised a bunch of money in 2021. In addition to the raft of grocery delivery startups that have secured big raises, companies that help physical grocery retailers are also raking in cash. Stor.AI, which helps grocers deploy online shopping services, and Shelf Engine, which helps grocers with inventory forecasting, both raised significant rounds last month.

October 16, 2020

Trax Launches its Computer Vision-Based Inventory System in the U.S.

Singapore-based Trax announced earlier this week that it has officially launched its Retail Watch computer vision-based inventory management system for grocers here in the U.S.

As we’ve covered previously, Trax retrofits stores with customized camera modules on ceilings and on shelves. These cameras snap a picture of the store’s inventory every hour and send those pics to Trax’s cloud, which uses machine learning to analyze each image to see which products are empty and if any products have been misplaced or mis-priced. A report is then sent to store management, who can then dispatch associates to re-stock and replace items.
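As a rough sketch of that snap-analyze-report loop, the post-analysis reporting step might look something like the following Python. Everything here (the `Detection` shape, the function name, the report keys) is hypothetical for illustration and does not reflect Trax’s actual API.

```python
from dataclasses import dataclass

# Hypothetical shapes -- Trax's real system is not public in this article.
@dataclass
class Detection:
    sku: str             # product the model recognized in the image
    shelf_slot: str      # slot the item was actually found in
    expected_slot: str   # where the planogram says it belongs
    price_shown: float   # price on the shelf tag
    price_expected: float

def build_report(detections, expected_skus):
    """Flag empty slots, misplaced items, and price mismatches for one scan."""
    found = {d.sku for d in detections}
    return {
        "out_of_stock": sorted(expected_skus - found),
        "misplaced": [d.sku for d in detections if d.shelf_slot != d.expected_slot],
        "mispriced": [d.sku for d in detections if d.price_shown != d.price_expected],
    }
```

Each non-empty list in the report would correspond to a task dispatched to a store associate.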

According to this week’s U.S. announcement, the company has added autonomous robots to its tech stack, but didn’t specify their use or prominence in the overall offering.

A Trax rep told me last year that stores using its system saw a 1 percent bump in sales through improved item stocking. That 1 percent may not sound like much, but it adds up when used across multiple locations of a single retail chain. Plus, that 1 percent comes simply from improving an existing practice (inventory management), not from adding new ones.

In addition to retailers, Trax also counts CPG companies among its customers. For CPG companies, Trax is more of an auditing tool to monitor shelf presence at stores and ensure they are getting what they paid for.

While keeping a close eye on store shelves may not sound sexy, it is becoming an increasingly competitive field. Walmart has developed its own computer vision-based inventory tool. And Bossa Nova, Simbe Robotics and Badger Technologies all make shelf-scanning robots.

For retailers, having more up-to-date inventory data can translate into more revenue. For consumers, it can mean fewer product outages should shocks to the system like pandemic panic buying occur again.

October 16, 2020

SKS 2020: Grabango Says its Computer Vision Avoids Biases Because it Doesn’t Use Facial Recognition

Grabango CEO Will Glaser said this week that because his company’s technology doesn’t use facial recognition, it can avoid some of the same types of racial biases found in other computer vision and machine learning algorithms. Glaser’s comments came during my fireside chat with him at our Smart Kitchen Summit this week.

Grabango makes cashierless (or as Grabango calls it, “lineless”) checkout technology for grocery retail. Grabango installs hundreds of cameras in the ceiling of a store, and through a combination of computer vision and AI, keeps track of what people put in their basket (or pocket). When shoppers go to check out, they choose a cashier or use an accompanying mobile app that automatically charges for the items kept.

This type of computer vision-based system is also used by other cashierless checkout companies such as Zippin, Standard Cognition and Amazon, and could become more widespread as demand for contactless retail experiences grows.

But the problem with some computer vision + AI systems is that their algorithms can contain the human biases of their creators. As TechCrunch wrote earlier this year, “MIT researchers reported in January 2019 that facial recognition software is less accurate in identifying humans with darker pigmentation.” In a busy retail environment where a lot of people are grabbing a lot of different items all throughout the day, it’s not hard to see how this type of bias could adversely impact people of color.

Glaser recognized some of the problems that can arise when algorithms are developed or trained only on white men in a lab. He made a point of saying that Grabango’s technology does not use facial recognition, and that it has a 99.6 percent revenue accuracy rate. Grabango’s system does follow a person around the store, but doesn’t personally identify them (it can, however, stay with a person even after they have put on or taken off a jacket).

Grabango’s system is also being used in real-world conditions, so it continues to train its algorithms in a diverse setting. Last month, Grabango announced that its integration with Giant Eagle’s GetGo Market in the Pittsburgh area went live. So now we’ll be able to see in a more open environment whether Glaser’s claims hold true.

July 24, 2020

QCify Goes 3D for Quality Control and Fair Pricing in the Food Supply Chain

As the economy is barraged daily by some kind of pandemic-related bad news, many businesses remain closed (or serving far fewer customers), job losses continue to pile up and people all over are being more cautious about how much they spend.

The food supply chain is not immune from this belt-tightening. Looking to save some money, food buyers may haggle more vigorously over what they pay per pound for something like almonds, costing the growers and processors money. This problem, Raf Peeters told me, is where QCify can help.

Peeters is the CEO of QCify (pronounced kew-sih-fye), which uses a combination of computer vision and artificial intelligence (AI) to perform quality control on food items at processing plants. Right now, QCify inspects almonds and pistachios by running samples through a special machine that uses six cameras to capture a 3D image of each nut. The company’s AI then analyzes the image and grades the nuts based on USDA (or other) criteria such as size, color, insect damage, and imperfections.
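In rough Python, that grading pass might look like the sketch below. The criterion names mirror the ones the article mentions (size, color, insect damage, imperfections), but the thresholds and grade labels are invented for illustration, not QCify’s or the USDA’s actual standards.

```python
# Illustrative per-nut grading; thresholds and labels are invented.
def grade_almond(length_mm, color_score, insect_damage, imperfections):
    if insect_damage:
        return "reject"
    if length_mm >= 20 and color_score >= 0.9 and imperfections == 0:
        return "grade A"
    if length_mm >= 16 and color_score >= 0.7:
        return "grade B"
    return "grade C"

def grade_sample(nuts):
    """Summarize a sample as grade -> share of the sample."""
    counts = {}
    for nut in nuts:
        g = grade_almond(**nut)
        counts[g] = counts.get(g, 0) + 1
    return {g: c / len(nuts) for g, c in counts.items()}
```

A processor would quote a price off the per-grade shares, with the underlying images kept as evidence.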

All the data collected by the inspection machines is sent back to QCify HQ, where it is incorporated into the company’s algorithms. Twice a year, QCify sends out updates to all of its installed machines, which means that even a customer who bought a QCify system a couple of years ago will be running the newest AI. “Customers feel like they have the latest and greatest,” Peeters told me by phone this week.

QCify explainer video

The result of all this computer vision and machine learning is that nut processors can set a fair price for their wares, based on objective criteria (like the USDA grading). Right now, QCify works with almonds and pistachios, and has customers in both the U.S. and Australia. A buyer can’t argue over the quality of the almonds (and thereby demand a lower price) because the processor not only has the grade from the QCify system, but it can also produce the sample images to show exactly what quality the almonds or pistachios are in.

QCify isn’t the only company looking to remove biases from the food supply chain using computer vision and AI. AgShift and Intello Labs do much the same thing. Peeters said QCify is different from the competition because its six-camera setup captures 3D images of the nuts, instead of just scanning the top and bottom of the food, which Peeters claims is what his competition does.

QCify was founded in 2015, and Peeters said the company has raised only an unspecified amount of angel investment. The company sells the machines and charges a monthly or annual subscription fee for updates and calibration. While he wouldn’t reveal pricing, Peeters said that customers can earn their money back within a year.

In these cash-strapped times, a faster ROI isn’t just peanuts, which, coincidentally, is one of the next nut categories QCify is expanding into.

May 13, 2020

Intello Labs Raises $5.9M for its AI-Based Food Grading

Intello Labs, which uses a combination of computer vision and artificial intelligence to grade food quality along the supply chain, announced today that it has raised a $5.9 million Series A. Saama Capital led the round with participation from global agritech funds GROW (Singapore) and SVG Ventures THRIVE (USA), and existing investors Omnivore and Nexus Venture Partners. This brings the total amount raised by Intello to $8.3 million.

Based in Gurgaon, India, Intello Labs offers a suite of computer vision products and cloud-based AI that can be used by farmers, food packers, exporters and retailers to assess the quality of fruits and vegetables. The goal is to bring transparency and objectivity to the food buying and selling business and establish fair prices throughout the supply chain. By having AI judge food grades, you eliminate haggling between biased people over what a particular bushel of apples is worth, for example.

When we first wrote about the company in 2018, it was focused on rural farmers and only used smartphone cameras to capture images of food. Finding that smaller farmers were more apprehensive about working with technology, Intello shifted to work with bigger, more corporate clients last year.

The company offers a number of different assessment tools:

  • Intello Track uses smartphone cameras to capture images of produce, which are analyzed by Intello’s cloud to assess size, color, and defects before returning a food grade.
  • Intello Sort is a machine that separates produce based on quality.
  • Intello Pack can be used to monitor produce as it is being packed for shipment.
  • Intello Deep is a handheld scanner that detects Brix, pH, TSS, dry matter, moisture, and pesticide residue.
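As a toy illustration of how a Track-style grader could feed a Sort-style separator, here is a hypothetical Python sketch; the grades, thresholds, and function names are all invented and stand in for Intello’s actual cloud analysis.

```python
# Hypothetical handoff from a grading step to a sorting step:
# grade each item, then bin it by grade.
def sort_by_grade(items, grader, bins=("A", "B", "C")):
    sorted_bins = {b: [] for b in bins}
    for item in items:
        grade = grader(item)
        if grade not in sorted_bins:
            raise ValueError(f"unknown grade {grade!r}")
        sorted_bins[grade].append(item)
    return sorted_bins

# A toy grader standing in for the cloud analysis: grade produce
# by defect count (thresholds invented for the sketch).
def toy_grader(item):
    defects = item["defects"]
    return "A" if defects == 0 else "B" if defects <= 2 else "C"
```

In a real deployment the grader would be the cloud model’s output, and the bins would map to physical chutes on the sorting machine.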

Intello isn’t the only company using AI and computer vision to assist with the assessment of fresh fruits and vegetables. AgShift offers similar services and launched its Hydra scanning system for bulk inspections last year.

Quickly and fairly assessing food quality has taken on greater importance during this time of global pandemic. Mass restaurant closings and general upheaval revealed the weaknesses and inequalities in our food supply chain with farmers throwing out tons of food. Tools like Intello Labs’ can bring efficiency to the food buying process and help keep the world fed.

April 23, 2020

Expect More Restaurants to Use AI Cameras Like DragonTail’s to Show a Kitchen’s Cleanliness

Before the COVID-19 pandemic, people mainly worried about where their food came from. Was it organic, locally raised, grain-fed, etc. (It was a simpler time.) But as the virus has spread, the bigger concern for consumers is who has touched their food, and were they wearing gloves and a face mask when they did so.

This is the socially distant, contactless delivery world that we now live in, which actually makes it the perfect time for Dragontail Systems to launch its new AI-powered camera that detects the sanitary conditions of food prep areas as the food is being packaged up for delivery.

You may remember Dragontail from when its camera-and-computer-vision system debuted at Domino’s in Australia last year. Dragontail’s camera is mounted above the workstation in Domino’s back of house, where pizzas come out of the oven, are sliced up, and boxed.

Back then, the Dragontail system was being used to assess quality control: that the pizza was the right shape with the right toppings, and that it was cooked properly. Dragontail’s camera took pictures of the pizza and Domino’s sent those pics to the customer as a way of showing hey, your pizza order was made properly and it’s heading out to you.

Now, in addition to the existing quality control features, Dragontail announced today that its camera system can check sanitation conditions. The camera detects things like whether gloves and face masks are being worn, or how often a workspace is sanitized. The exact parameters of what to look for are up to the restaurant, as is how violations are communicated to the worker and/or manager.
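Since the exact parameters are restaurant-configurable, a rule check over per-frame detections might be sketched like this; every name and threshold here is invented for illustration, not Dragontail’s actual configuration.

```python
# Illustrative, restaurant-configurable sanitation rules.
RULES = {
    "gloves_required": True,
    "masks_required": True,
    "max_seconds_between_sanitizing": 30 * 60,  # sanitize at least every 30 min
}

def check_frame(frame, last_sanitized_at, now, rules=RULES):
    """Return human-readable violations for one camera frame."""
    violations = []
    if rules["gloves_required"] and not frame["gloves_detected"]:
        violations.append("worker without gloves")
    if rules["masks_required"] and not frame["mask_detected"]:
        violations.append("worker without face mask")
    if now - last_sanitized_at > rules["max_seconds_between_sanitizing"]:
        violations.append("workspace sanitation overdue")
    return violations
```

The violations list is what would be routed to the worker or manager, per whatever channel the restaurant configures.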

So now, in addition to a picture of the food cooked properly, restaurants can provide a customer with a checklist of steps taken to show that the food was also handled properly. The end customer, then, can feel a little more comfortable knowing the restaurant followed proper cleanliness procedures.

We actually started seeing similar technology pop up last year in, of all places, China. As we wrote then, AI systems were installed in restaurants in the Shaoxing Province of China to monitor for unsanitary conditions like improper uniforms and mixed use of cutting boards. More recently in India, home cook marketplace FoodCloud launched a Kitchen Cam, which offers customers video footage of the kitchen and cooks as they make and package food.

Dragontail’s camera is currently being used in more than 2,500 stores across Australia, Canada, Singapore, the UK and Belgium. With today’s announcement, the company is ready to expand into the U.S. and work with restaurants of all sizes. The cameras cost between $500 and $1,000 and there is a $50 monthly fee for the computer vision systems.

As we’ve written before, the COVID-19 pandemic is accelerating changes throughout the meal journey. With cameras like Dragontail’s likely becoming more commonplace, the meal journey will now include pictures.

December 27, 2019

Trax Uses Shelf Cameras and Computer Vision to Track Grocery Inventory

For grocery retailers that operate at scale, small improvements can have a big impact on the bottom line. One of the ways to boost that bottom line, according to Trax, is better shelf inventory management.

Based in Singapore, Trax uses a combination of shelf and ceiling cameras, along with computer vision, to give retailers and CPG companies a more accurate reading of what’s on store shelves.

Trax installs customized camera modules on store shelves to keep tabs on products, and augments these shelf cameras with ceiling-mounted cameras for monitoring the fresh sections of stores. The cameras take a picture of the shelves every hour, and those images are sent to the cloud, where they are analyzed to ensure that products are available, on the correct shelf, and price compliant. Based on what the system reports back, store associates can be dispatched to address any issues.

I spoke with Mark Cook, Head of Retailer Solutions for Trax, by phone this week, and he told me that retailers using the Trax inventory management system have seen a 1 percent bump in sales by just ensuring that more product is in stock. One percent may not sound like a lot, but again, when multiplied across all the shelves in a store, and across all the stores in a chain, that small percentage can represent big money.
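To make that arithmetic concrete, here is a toy calculation; the revenue and store-count figures are invented for illustration, and only the 1 percent uplift comes from Trax’s claim.

```python
# Toy arithmetic behind "1 percent adds up" -- figures below are assumed.
weekly_revenue_per_store = 500_000   # dollars per store per week (invented)
stores_in_chain = 200                # number of stores (invented)
uplift = 0.01                        # the 1 percent bump Trax cites

extra_per_year = weekly_revenue_per_store * stores_in_chain * 52 * uplift
print(f"${extra_per_year:,.0f} per year")  # -> $52,000,000 per year
```

Even with modest per-store numbers, the chain-wide figure lands in the tens of millions, which is the point Cook was making.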

In addition to grocery retailers, Trax also counts CPG companies like Coca-Cola and Henkel as customers. CPG companies use Trax basically for auditing. With the system, they can monitor their shelf presence in stores and ensure they are getting what they are paying for in terms of promotion.

Trax has a SaaS business model where the hardware is provided and the customer pays a subscription fee for the analytics.

If Trax’s solution sounds familiar, that’s because other companies are taking a similar approach to automating shelf management. Walmart is deploying Bossa Nova shelf-scanning robots to roam its store floors, and the mega-retailer also launched its IRL store this year, which uses banks of cameras to monitor store inventory in real time.

Trigo is another company using cameras to help stores with inventory management, though Trigo is mostly focused on using computer vision to help create cashierless checkout systems for stores. I asked Cook if cashierless checkout was on Trax’s roadmap and he said the company is more focused now just on shelf management. “There’s money on the table with inventory.”

Founded in 2010, Trax is headquartered in Singapore and has roughly 800 employees. The company has CPG customers in 70 different countries. Cook wouldn’t disclose retail partners, but said it is working with retailers in the U.S., U.K. and Israel.

October 16, 2019

Presto Launches Computer Vision System for Front-of-House Restaurant Operations

Redwood City, CA-based restaurant-tech company Presto has launched a computer vision product that monitors the front of house in the restaurant and alerts managers in real time to any potential problems.

Called Presto Vision, the system leverages the same computer vision technology found in automated toll booths, traffic monitoring, and Amazon Go cashierless stores. Presto Vision uses cameras placed in the restaurant lobby to tag staff and guests and analyze their motions in real time. The company makes a point of saying it doesn’t track or store information that would personally identify someone. Rather, it “models individuals as abstracted entities (e.g., boxes)” to gather data on things like guest interactions with the host, excessively long wait times, and if customers leave before getting seated because of those long waits.

The product joins Presto’s existing lineup of restaurant management tools, which also includes a tabletop terminal for guests, Presto A.I., and Presto Wearables.

In addition to monitoring the lobby, the system can alert a restaurant manager, via a mobile or wearable device, to any potential problems. For example, Vision could tell a restaurant GM that the host station has been empty for 10 minutes, and the GM could then immediately send someone to fill in to keep wait times for customers down. The system also generates next-day reports that include data on things like bounce rates (i.e., how many customers left before getting seated) and how tidy the lobby area is.
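The two behaviors described, the empty-host-station alert and the next-day bounce rate, could be sketched roughly like this in Python. This is entirely hypothetical; Presto models people as abstract “boxes,” which here are reduced to timestamps and flags.

```python
# Hypothetical front-of-house metrics; names and shapes are invented.
def station_empty_too_long(staff_seen_at, now, threshold_s=600):
    """True if no staff entity has been seen at the host station
    within threshold_s seconds (default 10 minutes)."""
    last_seen = max(staff_seen_at, default=0)
    return (now - last_seen) > threshold_s

def bounce_rate(guests):
    """Share of guest entities that left before being seated."""
    if not guests:
        return 0.0
    bounced = sum(1 for g in guests if g["left"] and not g["seated"])
    return bounced / len(guests)
```

The first function would drive the real-time alert to the GM’s device; the second would feed the next-day report.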

“With this product, restaurants can now have access to critical insights on how their stores actually work. This helps them provide better service, operate more efficiently, and reduce overhead,” said Rajat Suri, Presto’s founder and CEO, in a statement.

Suri, who is also the cofounder of Lyft, founded Presto in 2008 after working in restaurants and prototyping a product based on his observations of front-of-house activity. The company raised $30 million in February of this year.

Presto Vision is currently being tested only in the lobby area at undisclosed Outback Steakhouse locations. Presto says it plans to expand the product to other parts of the restaurant soon, including the back of house, the dining room, and curbside areas.

July 25, 2019

Standard Cognition Raises $35M Series B for its Cashierless Checkout Tech

Standard Cognition, one of a host of startups looking to bring cashierless checkout to the retail experience, announced today that it has raised a $35 million Series B round of funding. The round was led by EQT Ventures, with participation from existing investors Initialized Capital, CRV, and Y Combinator. This brings the total amount raised by Standard Cognition to $86.1 million.

Standard Cognition retrofits retailers with Amazon Go-like grab-and-go capabilities. Using computer vision and artificial intelligence, it keeps track of items shoppers take from the shelves and automatically charges them via mobile app when they exit the store.

With this Series B round, Standard Cognition now has a far bigger warchest than the other players in the cashierless checkout space, which breaks down as follows:

  • Grabango – $17 million
  • AWM Smartshelf – $12 million
  • Trigo Vision – $7 million
  • Caper – $3 million
  • Zippin – $2.7 million

However, the one thing Standard Cognition doesn’t have right now is a publicly named customer. Standard Cognition opened its own proof-of-concept store in San Francisco last year, and according to today’s press release, “The company has signed several retail customers and is deploying for two already in multiple locations, with scheduled go-live dates in Q3 and Q4 of this year.” But Standard Cognition has yet to name names.

This is a fairly common scenario: most startups making cashierless checkout solutions can’t name their customers right now. In fact, the only two that have done so at this point are Grabango with Giant Eagle and Trigo Vision with Shufersal in Israel. The reluctance of retailers to officially announce a partnership is understandable, as they typically move slowly with such technology implementations, wanting to make sure they pick a solution that can scale appropriately.

The cashierless checkout sector is still wide open, and with today’s big raise, Standard Cognition has the money to withstand the competition.

July 24, 2019

There Should be an Open Brand Image Database to Train Robots and Cashierless Checkout Systems

It was a box of Cheez-Its that made me think of the idea.

We were recently at Nvidia’s Seattle Robotics Lab, watching a presentation on how the company uses computer vision and synthetic data to train robots in the kitchen. In order for a robot to grab a box of Cheez-Its, that robot needs to know what a Cheez-Its box looks like. In order to teach the robot what a Cheez-Its box looks like, Nvidia needs to give the robot detailed information about that box, including its size, shape, and the artwork on the front.

This isn’t that hard if your kitchen, like the one at Nvidia, is stocked with only one box of Cheez-Its. You scan that box and input the dimensions and imagery so the robot can match what you’ve scanned to the real thing. But what happens when you want the robot to find a box of something other than Cheez-Its? Or if the pantry has many different types of Cheez-Its that kinda look similar but have different flavors? Or if it’s the holidays and the box has been altered from that first model so it now has a snowman on it?

Being able to identify particular products via computer vision isn’t just an issue for robotic hands. Startups like Grabango and Trigo Vision are retrofitting grocery stores with lots of tiny cameras that use computer vision for cashierless checkout. These cameras need to precisely recognize the items that shoppers pick up so that the consumer can be accurately charged. That means the AI powering the system needs to know the differences between a bottle of Coke, Diet Coke, and Coke Zero, and be able to understand any changes to branding, like a new logo or seasonal updates.

Rather than having each robotics company and every cashierless checkout company separately create their own database of product images, it seems like some sort of central repository of brand images would be useful. Think of it as a giant, constantly updating library of brand images for all the products in a grocery store. CPG companies would upload 3D models of the latest versions of their products to this database, giving computer vision companies access to the most up-to-date imagery for training their respective applications.
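One hypothetical shape for such a repository’s client interface, sketched in Python; nothing here reflects a real service, and the class, methods, and SKU names are all invented.

```python
# Hypothetical client interface for the proposed brand-asset repository:
# CPG companies publish versioned 3D models; CV teams fetch the latest.
class BrandAssetRepo:
    def __init__(self):
        self._assets = {}  # sku -> list of (version, model) pairs

    def publish(self, sku, version, model):
        """A CPG company uploads a new version of a product's 3D model."""
        self._assets.setdefault(sku, []).append((version, model))

    def latest(self, sku):
        """A CV company fetches the newest model, e.g. to retrain a detector."""
        versions = self._assets.get(sku)
        if not versions:
            raise KeyError(f"no assets for {sku}")
        return max(versions)[1]
```

The seasonal-snowman-box problem from above becomes a non-event under this design: the holiday artwork is just the newest published version.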

This is definitely not the most pressing issue facing CPG companies or retailers; cashierless checkout and product picking robots are still very much in the early stages. But they are coming — and preparing for their arrival now would make the evolution of computer vision and robotics that much faster. After all, training those systems is much easier when you can just download an image rather than creating it yourself.

During our visit I asked Dieter Fox, Senior Director of Robotics Research at NVIDIA, if there was such a system. He said there was for common objects, but not brand-specific ones. ShapeNet has a 3D database of 50,000 common objects, and its subset, PartNet, recently launched with a database of more than 26,000 objects broken down into their various parts.

There are competitive issues that might have CPG brands balking at the idea. Coke may not want people knowing about a particular branding change or partnership in advance. But the overall concept could be a tide that lifts all boats. It gives computer vision-related companies the most accurate 3D models of products for training purposes. The faster computer vision systems can be trained, the faster they can work in the real world without any hiccups, which would ideally allow brands to sell more products. It would also make it easier for kitchen robots, when they eventually arrive, to autonomously grab ingredients needed while cooking (“Robot, grab the turmeric.”).

This isn’t just for food, obviously. This type of repository could work for any brand across any sector that will involve computer vision. Perhaps it’s something Dieter Fox can talk about when he speaks at our upcoming Smart Kitchen Summit in October. Get your ticket now and maybe you can talk with him about it over a box of Cheez-Its.

April 25, 2019

Walmart Unveils New AI-Powered Store to Monitor Inventory (But No Cashierless Checkout)

Walmart officially took the wraps off its new high-tech store today, which features thousands of cameras powering computer vision and an AI platform that monitors in-store inventory in real time.

Housed in the Levittown, NY Walmart store, the Intelligent Retail Lab (IRL) (har, har) sounds a lot like Amazon Go (lots of cameras built into the store, constantly monitoring products on the shelves), but there are some key differences.

First, Walmart’s computer vision application is just for inventory management, so no cashierless checkout for now. Instead, the cameras watch when an item is gone from a shelf and notify a worker to replenish it quickly. According to the AP, which got a look at the store, the cameras can also detect spills and see when shopping carts are running low.

Second, Walmart is already running IRL at scale. The New York store is 50,000 square feet, compared to the roughly 2,000 – 3,000 square feet of Amazon Go stores. The difference, though, as noted earlier, is that while IRL may be able to monitor inventory on shelves at scale, it’s not doing cashierless payments…yet. Walmart told the AP that it’s not currently using the technology to track movements of shoppers or for facial recognition, though it’s hard to imagine Walmart developing and deploying all this technology just to make sure there isn’t a gap in the Cheerios display.

Walmart’s launch of IRL will probably turn up the heat on rival grocers to implement their own such systems so they can achieve similar efficiencies. Thankfully, there are a ton of startups like Trigo Vision, Grabango, Standard Cognition and AWM looking to outfit stores with banks of cameras and AI for cashierless checkout and inventory management.

IRL also comes just weeks after Walmart announced it was expanding its use of shelf-scanning robots to manage inventory. Those robots seem to be more of a stopgap measure while Walmart works out any bugs it encounters running IRL (in real life).

Regardless, IRL, with all its high-tech cameras and AI gives us a (computer) vision into the retailer’s future.
