
The Spoon

Daily news and analysis about the food tech revolution


computer vision

April 8, 2019

Intello Labs Raises $2M, Takes its AI to the Skies with Drones for Crop Assessments

Intello Labs, which uses computer vision and AI to assess food quality, announced last week that it has raised a $2 million seed round from Nexus Venture Partners and Omnivore. This brings the total amount raised by Intello to roughly $2.4 million.

A few things have changed for Intello Labs since we last checked in with the company almost a year ago. Back then it was using its computer vision and AI to create a neutral food grade to help farmers earn a fair price for their crops. With an AI-based grade, farmers were better able to defend their asking prices against picky buyers who might question the quality of the produce in order to pay less.

Sreevidya Ghantasala, a Research and Development Analyst at Intello Labs who also heads up its U.S. operations (the company is headquartered in India), told me in a phone interview that the company is moving away from independent farmers and more towards corporate ones. “We believe independent farmers are apprehensive about using new technology,” Ghantasala said.

Additionally, Intello Labs is moving on up, literally, using its computer vision and AI platform with drones. Previously, Intello worked only with smartphone cameras: the parties involved would take pictures of food and submit them to Intello’s cloud for analysis.

With the move to drones, Intello has been able to expand its features and crop capabilities. Its software has now been used by drones flying over rice fields to count the flowers that will produce rice seeds. Ghantasala said that having successfully completed this expansion into aerial imaging and rice, Intello is working with the rice producer to broaden into even more applications.

Intello Labs isn’t the only company using computer vision and AI to do crop assessments. AgShift does pretty much the same thing, though as Intello appears to be going further into the fields with drones, AgShift is diving deeper into the supply chain with the recent launch of its bulk food inspector.

Ghantasala said that Intello Labs will be using this seed round to establish a larger presence in the U.S. as well as improve the company’s technology.

March 27, 2019

Winnow Launches Computer Vision + AI Tool to Help Commercial Kitchens Fight Food Waste

Winnow this week announced the launch of its new Winnow Vision product, which combines computer vision, a weight scale and AI to help commercial kitchens reduce food waste.

From the press announcement:

Using a camera, a set of smart scales and the same type of machine learning technology found in autonomous vehicles, Winnow Vision ‘learns’ to recognise different foods being thrown in the bin and calculates the financial and environment cost of this discarded food to commercial kitchens.

A demo video on Winnow’s site shows its product in use at an IKEA cafeteria. As unsold food is thrown straight into the garbage can, Winnow’s computer vision sees and identifies the food being thrown out. The garbage can sits on top of a scale, so the system knows just how much food is being tossed.

With this type of information, chefs and kitchen managers can then see what foods they are over-ordering or making too much of and adjust both their inventory management and production accordingly. The hope is that they’ll buy only the ingredients they need and create less food waste in the process.
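
The bookkeeping Winnow describes (a vision label plus a scale reading, priced against ingredient cost) can be sketched in a few lines. To be clear, this is an illustrative sketch, not Winnow's actual software; the food names, prices and the `log_waste` helper are all assumptions:

```python
# Illustrative sketch of vision-plus-scale waste logging (not Winnow's code).
# Inputs: a label from the vision model and two readings from the bin's scale.

COST_PER_KG = {"roast chicken": 6.40, "mashed potatoes": 1.20, "salmon": 14.00}

def log_waste(label: str, scale_before_kg: float, scale_after_kg: float) -> dict:
    """Record one binned item: what it was, how much, and what it cost."""
    weight_kg = round(scale_after_kg - scale_before_kg, 3)
    cost = round(weight_kg * COST_PER_KG.get(label, 0.0), 2)
    return {"item": label, "weight_kg": weight_kg, "cost": cost}

# The bin sits on the scale, so each throw shows up as a weight increase.
event = log_waste("roast chicken", scale_before_kg=2.0, scale_after_kg=3.25)
print(event)  # {'item': 'roast chicken', 'weight_kg': 1.25, 'cost': 8.0}
```

Aggregating these events over a week is what would surface the over-ordering patterns described above.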

Winnow isn’t the only company using scales and cameras to fight food waste in high-volume kitchens. LeanPath offers a similar solution, though it requires users to manually enter the food being thrown out.

With 40 percent of the food in the United States never getting eaten, food waste is a big problem — and as my colleague Catherine Lamb recently wrote, there are a lot of startups looking to tackle it.

We’ll now have to see how much Winnow’s computer vision system helps.

March 11, 2019

Snap a Pic, Log Your Meal. Bite.AI Offers API for Visual Food Tracking

One of the best tips for getting healthy and losing weight is to log the food you eat. Unfortunately, food logging is also onerous. Food journals are inexpensive, but it’s easy to forget to log everything and even if you do, paper or spreadsheets don’t connect to nutritional information. Apps that scan barcodes make it a little easier, but that doesn’t work when you’re at a restaurant.

Bite.ai is looking to remove those pain points by using computer vision to do all the work of recording what you eat and automatically providing nutritional information. It’s basically a photo food journal that tracks 47 nutrient information points (calories, fats, carbs, etc.) and helps you use that data to reach dieting or other health-related goals. The app is free and currently has roughly 150,000 users, according to Bite.ai Co-Founder and CTO, Michal Wolski. You can see it in action in this video.

Bite.ai Food Logging app in action

As you can see from the video, we aren’t quite living in a fully automatic, sci-fi world yet. You take a picture and Bite.ai brings up food suggestions for you to confirm, but you still have to enter how much of something you’re eating. As with most things AI, the more you use it, the more you train the algorithms and the more accurate the app gets.

Wolski says the plan is to keep the consumer app free and generate revenue through a B2B play. Bite.ai offers an API for other apps to use Bite’s computer vision and analysis platform. Wolski said that Bite currently has about a dozen customers using the APIs and integrating Bite’s technology for wide ranging purposes such as interactive marketing campaigns, glucose monitoring, weight loss and even research studies currently being conducted at two universities.
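
For a sense of what integrating such an API involves, here is a hypothetical sketch of the client-side step the video shows: taking the top recognition suggestion and scaling its per-100g nutrient data to the portion the user confirms. The response shape, field names and nutrient keys are assumptions for illustration, not Bite.ai's documented interface:

```python
# Hypothetical sketch of wrapping a photo-recognition food API.
# The response format and keys below are assumptions, not Bite.ai's API.
import json

def build_log_entry(api_response: str, portion_grams: float) -> dict:
    """Scale per-100g nutrient data from a recognition response to the
    portion the user confirms (the step the app still leaves to the user)."""
    data = json.loads(api_response)
    top = data["suggestions"][0]           # highest-confidence match
    factor = portion_grams / 100.0
    return {
        "food": top["name"],
        "portion_g": portion_grams,
        "nutrients": {k: round(v * factor, 1)
                      for k, v in top["per_100g"].items()},
    }

mock_response = json.dumps({
    "suggestions": [{"name": "banana",
                     "per_100g": {"calories": 89, "carbs_g": 22.8}}]
})
entry = build_log_entry(mock_response, portion_grams=120)
print(entry["nutrients"])  # {'calories': 106.8, 'carbs_g': 27.4}
```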

Bite is by no means the only company looking to use computer vision to make food logging easier. Calorie Mama offers a similar service, and was integrated into Samsung’s Bixby AI platform. And last year Microsoft was awarded a patent for “Food Logging from Images.”

Going up against giants like that, Bite.ai certainly has its work cut out for it, especially considering the company is currently bootstrapped. However, everyone eats, so the visual food logging space doesn’t have to be a zero sum game. If Bite’s technology performs, it can take its own bite out of the health and wellness market.

February 27, 2019

AgShift Launches Hydra, its AI-Powered Food Quality Analyzer for Bulk Inspections

AgShift is on a mission to remove human biases from food quality inspection by using computer vision and artificial intelligence. It started off doing this by having inspectors use smartphone cameras to snap pictures of food (like berries), which were then analyzed by AgShift’s machine learning algorithms to assess quality.

While the company’s software platform may bring objectivity to quality assessment, the process still relied on inspectors photographing fruit by hand. And when you consider that one state, California, produces one billion pounds of one fruit, strawberries, each year, these manual inspections can take quite a bit of time. In addition to accuracy, there is a need for speed in the supply chain.

This is why AgShift created the Hydra F100 BQ, a new hardware analyzer that the company officially announced today. With Hydra, companies can do bulk inspections, and thereby faster assessments, of foods like berries and edible nuts.

We wrote about this hardware analyzer before, but that was when it was in the prototype/development stage. Today’s news takes the wraps off the full industrial version of the device.

The Hydra is a kiosk-like machine with a touchscreen that is installed at a food processing facility. Instead of inspectors manually selecting and inspecting samples, whole trays of samples can now be inserted into the machine and assessed at once. The Hydra has cameras above and below the fruit to capture images of this bulk sample, which are sent to AgShift’s cloud platform to be analyzed for weight, size and color, and checked for defects like mold or bruising. The result is the same objective analysis, but AgShift says it’s now done in half the time of manual inspections.

“When you do a manual inspection [of strawberries], you are inspecting roughly 4 to 6 clamshells in a sample size, roughly 100 berries in total,” said Miku Jha, Founder and CEO of AgShift. “[That] takes 6 to 8 minutes with manual inspection. Hydra does it in under three minutes.”
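
Taking those figures at face value, the per-berry throughput works out to roughly a 2.3x gain, consistent with the "half the time" claim:

```python
# Back-of-the-envelope throughput from the figures quoted above.
berries = 100
manual_minutes = 7.0   # midpoint of the quoted 6-to-8-minute range
hydra_minutes = 3.0    # "under three minutes"

manual_rate = berries / manual_minutes   # ~14.3 berries/min
hydra_rate = berries / hydra_minutes     # ~33.3 berries/min
print(f"speedup: {hydra_rate / manual_rate:.1f}x")  # speedup: 2.3x
```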

AgShift’s Hydra has already been running in trials with both Driscoll’s for strawberry inspection and Olam for cashews.

By removing inspections by hand, AgShift says it can also reduce waste. As we reported last June:

[Jha] said that traditionally cashews are examined by hand, with inspectors looking at one or two pounds of nuts at a time. That takes time, and after being touched, those particular nuts need to be discarded. Both time and waste can add up when you’re processing literally tons of cashews. Using AgShift’s analyzer, sampling can be done faster and samples do not need to be thrown out because of the workflow at the processing facility.

In a recent phone interview, Jha told The Spoon that the company was still determining the business model around Hydra, but that it wasn’t in the business of selling boxes. Instead, the Hydra would most likely be leased with the price of the software subscription coming in around $4,000 per month, depending on the volume of assessments.

AgShift isn’t alone in the computer-vision-for-food-inspection space; Bengaluru-based Intello Labs does much the same thing for farmers in India.

Earlier this month, AgShift raised another seed round from CerraCap Ventures. The amount wasn’t disclosed at the time, but in the Hydra press announcement, AgShift says it has raised $5 million in seed funding. Since just about a year ago AgShift announced a $2 million raise, it looks like the recent raise was for $3 million.

December 6, 2018

Amazon Go is On a Massive Hiring Spree, and Not Just in the U.S.

Amazon Go, the retail store that uses cashierless technology so you can walk in, choose your items, and walk out without stopping to pay, has 338 open listings on its job site (big h/t to Sean Butler).

There are a few takeaways from this, but most notable is the sheer amount of investment in engineers on both the software and hardware side. There are a whopping 130 positions in software development, and 44 in hardware development.

But that’s just the start. While the majority of the listings are for Software Engineers, they also include everything from Data Collection Technician to Creative Director to Security Engineer to Senior Vision Research Scientist. There are even 7 listings for real estate and construction positions. Whew!

They’re also searching for a Specification Technologist to join the Amazon Meal Kits team and help out with product development. Meal kits are already some of the most popular items at Go stores, so it’s not surprising that Amazon is looking to amp up its offerings, especially as they expand into new cities.

Many of the jobs are quite recent, and were either posted or updated within the past month. Which means that Amazon is poised to make some serious Amazon Go expansion moves in the new year, and willing to invest some serious man (and woman) power to do it. Good thing too, since the company is considering a plan to open 3,000 Go stores by 2021.

It’s also worth noting where the Go jobs are located. While the locations don’t necessarily indicate where Amazon will set up future Go stores, they are a good data point for learning where it will base R&D and development of its cashierless technology.

In the U.S. there are openings in Seattle (duh), Westborough, MA, San Francisco, and New York City. Abroad, there are listings in two cities in Israel: Tel Aviv and Haifa.

Perhaps most eye-catching on the list is Westborough, MA. That’s the home of Amazon Robotics, a subsidiary which works on Amazon’s mobile robotic fulfillment systems. According to job descriptions, that’s also where Amazon is building an Advanced Projects Group, which will develop “new technologies that go well beyond the current state of the art.”

The location is certainly strategic from a hiring standpoint: Westborough is less than an hour outside of Boston, making it easy to recruit tech-savvy post-grads from MIT and Harvard. I’m speculating here, but the Westborough job listings, with their proximity to Amazon Robotics, could also indicate plans on Amazon’s part to add more robots to its Go store experience.

Outside of the U.S., Amazon Go is hiring in Israel. This could simply be a way for Amazon to take advantage of Israel’s flourishing AI landscape and hire some top-notch computer scientists. But it could also indicate that Amazon is ready to expand its Go stores internationally.

It wouldn’t be the first company to bring cashierless tech to Israel. Trigo Vision recently partnered with Israel’s largest supermarket chain, Shufersal, to implement its checkout-free tech in all locations across Israel. However, Trigo Vision and Amazon aren’t direct competitors: Trigo licenses out its tech to existing retailers, while Amazon builds its Go stores from the ground up.

Of course, even outside of Israel Amazon still has plenty of competition in the cashierless tech space. Microsoft has been working on its own version and has reportedly been in partnership talks with Walmart. In San Francisco, Aipoly is developing its own walk-in-walk-out store solution, and Standard Cognition recently opened a store there to show off its technology.

Which is all the more reason that Amazon needs to grow fast if it wants to keep up its unique value proposition in the food retail space. The high number of job listings, and their wide geographic reach, show that when it comes to Go stores (and most things grocery, in fact), Amazon isn’t slowing down anytime soon. Now we just have to wait and see when they launch a cashierless Whole Foods.

Thanks to Sean Butler, who posted on his LinkedIn about Amazon Go’s massive open jobs list. Do you have a tip for us at the Spoon? We’d love to hear it.

November 29, 2018

DeepMagic Combines Computer Vision and AI to Make Mini, Unattended Amazon Gos

One of the questions that comes up when talking about Amazon Go cashierless stores is when the grab-and-go technology experience will scale up from a bodega-sized convenience store to a full-on grocery experience. But instead of thinking big, startup DeepMagic is going the other direction: developing small unattended, cashierless micro-retail outlets.

Using a combination of computer vision and artificial intelligence (AI), DeepMagic creates self-contained, cashierless walk-in “Qick Kiosks” that can be placed within existing locations. Customers use an app on their phone to unlock the kiosk doors, go inside, grab what they want and leave. Cameras in the Qick Kiosk keep track of everything taken (just like Amazon Go) and automatically charge your card when you exit the kiosk.
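
The session model described here (unlock with a phone, grab items, get charged on exit) can be sketched roughly as follows. The class, item names and prices are illustrative assumptions, not DeepMagic's actual software:

```python
# Minimal sketch of the kiosk session model described above (illustrative
# names and prices; not DeepMagic's actual software).

class KioskSession:
    """One unlock-to-exit visit, billed to whoever opened the door."""
    def __init__(self, customer_id: str):
        self.customer_id = customer_id
        self.items = []  # (item, price) pairs currently held

    def camera_event(self, item: str, price: float, taken: bool) -> None:
        """Cameras report items taken off (or returned to) the shelf."""
        if taken:
            self.items.append((item, price))
        else:
            self.items.remove((item, price))

    def exit_and_charge(self) -> float:
        """On exit, charge the unlocking customer for everything still held."""
        return round(sum(price for _, price in self.items), 2)

session = KioskSession("customer-42")
session.camera_event("sparkling water", 1.99, taken=True)
session.camera_event("granola bar", 2.49, taken=True)
session.camera_event("granola bar", 2.49, taken=False)  # put back
print(session.exit_and_charge())  # 1.99
```

Note that the single `customer_id` per session is exactly the one-purchaser-at-a-time limitation discussed further down.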

DeepMagic doesn’t want to own and operate its own chain of cashierless stores; rather, it wants to provide these kiosks as a way to create retail opportunities within existing high-traffic areas. Think: pop-up shops inside office building lobbies or big apartment complexes.

“It’s not about replacing existing store formats,” DeepMagic Co-Founder and CEO Bernd Schoner told me by phone. “We want to give store owners the ability to create additional locations. Office space not big enough for a canteen? You can put a kiosk in that space.”

Schoner said DeepMagic’s approach lets retailers easily create satellite locations that can operate 24/7 without having to build a full store or hire extra staff. For instance, a bodega could run a smaller bodega inside a nearby apartment building.

DeepMagic combines a number of elements and approaches already happening in the automated, cashierless retail space. It has the Amazon grab-and-Go element. But it’s also similar to Stockwell (formerly Bodega), which creates computer vision-driven, credenza-sized containers with snacks and sundries for densely populated buildings. And Schoner’s canteen example is reminiscent of Byte Foods, which puts smart fridges stocked with food in offices.

While DeepMagic’s kiosks may add flexibility to retail locations, the company’s approach has a downside: the kiosks can only deal with one purchaser at a time. There can be multiple people in the same kiosk, but whatever they grab will be charged to the person who unlocked the store with their phone. So it seems lines could form to get into each kiosk, which kind of kills the convenience of cashierless checkout.

Having said that, DeepMagic’s turnkey kiosks could be big enough to offer a decent selection of items and a branding experience for a retailer, yet small enough to create new retail opportunities within existing locations at an attractive cost. That is, if the kiosks themselves come at an attractive cost. Schoner wouldn’t disclose pricing for a DeepMagic kiosk, saying only that there will most likely be some combination of lease, SaaS subscription and a percentage of retail sales.

DeepMagic has, however, proved its technology in public. Earlier this year, Cisco set up a DeepMagic kiosk to sell swag at its conference. Schoner says the company is working on a number of other deals right now. DeepMagic is self-funded, and has 15 employees across New York and Mexico.

While we wait and see how big cashierless stores can scale up, we’ll also have to keep an eye out to see if staying small pays off for DeepMagic.

August 2, 2018

Trigo Vision’s Cashierless Tech Isn’t Just for Shoppers; It Provides Insight for Stores, Too

When we talk about Amazon Go-like cashierless stores, the conversation mostly focuses on the shopping experience from the customer’s perspective. The ease with which you can walk into a store, grab what you want — and just walk out. But as Israeli-based startup Trigo Vision points out, there are plenty of benefits for the retailer as well.

Trigo Vision (pronounced tree-go vision) is one of several companies using computer vision and AI to help retail stores re-create the Amazon Go cashierless experience. Trigo Vision works by installing cameras in the ceilings of stores, and the AI is trained by having people (customers or employees, if need be) pick up and handle each item off the shelf in the store so the computer can “see” and learn what it is.

Jenya Beilin, COO of Trigo Vision, told me that using this approach for data collection is better than using synthetic data or existing images of products, because it trains the AI to recognize items under all the existing lighting and background conditions in that store. The result, according to Beilin, is that the technology is able to track purchases with greater accuracy.

The most obvious beneficiary of this computer recognition is the consumer. Trigo Vision’s system sees what you are putting in your cart (and putting back), tallies up your total and automatically charges you on the way out the door.

The same cameras that make it easy to leave also make it easy to track shoppers through the store. The result is that Trigo Vision’s system can provide retailers with useful information on consumer behavior: Where do they buy first? What items are they picking (and returning)? What items are purchased together? Etc.

One obvious benefit of Trigo Vision’s system is inventory management. Sorry, shelf-bot, but your days of manually scanning the aisles looking for gaps in inventory are probably numbered. Trigo’s platform can keep track of how many items are bought and when something needs to be re-stocked.
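
In outline, that inventory loop is simple: decrement a shelf count for each camera-detected pick-up and flag anything that falls below a restock threshold. The item names and threshold below are illustrative assumptions, not Trigo's implementation:

```python
# Sketch of camera-driven inventory tracking with a restock alert
# (item names and the threshold are illustrative assumptions).

def apply_pickups(stock: dict, events: list, restock_below: int = 5) -> list:
    """Decrement shelf counts from pick-up events; return items to restock."""
    for item in events:
        stock[item] -= 1
    return [item for item, count in stock.items() if count < restock_below]

shelf = {"cola": 12, "oat milk": 6, "crisps": 5}
pickups = ["oat milk", "oat milk", "crisps"]
print(apply_pickups(shelf, pickups))  # ['oat milk', 'crisps']
```

In practice the same event stream would also feed the shopper-behavior analytics described above; restocking is just the most immediate use of it.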

Digging into the consumer data a little deeper, stores can use the platform to understand how they can better present items to customers. Trigo Vision can help a store realize (in real-time) if a particular product is popular and should be highlighted, or if an end-cap promotion is attracting attention and/or sales.

This type of data is, in turn, helpful for CPG companies. For example, Coca-Cola could gain insight into whether people are buying more sodas in the aisle or at the checkout line. Armed with this data, grocers can even re-architect their shelves and store layout to take advantage of how people are shopping and the order in which they see inventory.

Unlike the bodega-sized Amazon Go stores, Trigo Vision says its platform can scale up to full-sized stores. Right now the company, which has raised $7 million in funding, is running one pilot program with an unnamed European store. Beilin says they are in talks with more retailers, with the goal of deploying in a full store in Europe within twelve months.

Trigo Vision is facing some stiff competition, however. Standard Cognition announced a deal with Paltac Corporation to put its cashierless tech in 3,000 stores in Japan before the Tokyo Olympics in 2020. AIPoly also touts its data insights for retailers. And in June it was revealed that Microsoft, which is no slouch when it comes to AI, is working on similar technology and talking to retailers like Walmart about a potential collaboration.

All of this activity means widespread adoption of cashierless technology is not that far off — and it will soon be so commonplace that we won’t even be talking about it at all.

June 29, 2018

AgShift Expands its Deep Learning Food Inspection to Nuts and Seafood

AgShift, a startup that uses computer vision and deep learning to determine food quality in the supply chain, is broadening its capabilities beyond produce into edible nuts and seafood.

We named AgShift as one of our Food Tech 25 companies last month because of the way it allows produce buyers and sellers to come to an objective price. AgShift’s software lets inspectors take a picture of fruit with their mobile phone and upload it to the cloud, where AgShift’s deep learning algorithms assess its quality (color, bruising, etc.) and compare that against USDA or other customized ratings to provide a food grade.
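
Conceptually, the grading step reduces to comparing measured defect levels against a configurable standard. A minimal sketch, with made-up thresholds standing in for USDA criteria (this is not AgShift's algorithm):

```python
# Illustrative sketch of mapping a measured defect level to a grade against
# a configurable standard (thresholds here are made up, not USDA's).

USDA_LIKE = {"A": 0.05, "B": 0.15}   # max allowed defect fraction per grade

def grade(defect_fraction: float, standard: dict = USDA_LIKE) -> str:
    """Return the best grade whose defect ceiling the sample stays under."""
    for label, ceiling in sorted(standard.items(), key=lambda kv: kv[1]):
        if defect_fraction <= ceiling:
            return label
    return "reject"

print(grade(0.03))  # A
print(grade(0.12))  # B
print(grade(0.40))  # reject
```

Swapping in a different `standard` dict is what "USDA or other customized ratings" amounts to in this framing.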

AgShift is expanding this type of objective algorithmic grading to inspect edible nuts, starting with cashews, and is in the process of exploring seafood inspections.

To do this, AgShift has developed its own hardware to do bulk analysis. This analyzer is an enclosed box with cameras optimized for deep learning that is installed on-site at a food processing facility. Food, in this case cashews, is loaded into the analyzer which takes pictures of the product and sends the images through AgShift’s platform for analysis and grading.

Miku Jha, AgShift Founder and CEO, told me that using the analyzer on-site allows for better, and more efficient, large-scale assessments. She said that traditionally cashews are examined by hand, with inspectors looking at one or two pounds of nuts at a time. That takes time, and after being touched, those particular nuts need to be discarded. Both time and waste can add up when you’re processing literally tons of cashews. Using AgShift’s analyzer, sampling can be done faster and samples do not need to be thrown out because of the workflow at the processing facility.

Jha said that AgShift has finalized production units of its analyzer, and that the company has already been working with a global nut processor doing inspections and assessing the quality of their cashews.

AgShift’s expansion into seafood is not as far along. Jha said that AgShift’s deep learning models can help the seafood supply chain with objective species identification and freshness assessment, as well as determine any change in the quality of fish as they move through the supply chain. The company is still looking for its first customers in that field.

AgShift isn’t alone in its computer vision and deep learning approach to food inspection. Intello Labs uses the same type of technology to help farmers get a fair price from buyers in India, and it too is eyeing a move into seafood. Here at home, Walmart is using its Eden technology to assess general food freshness.

Earlier this year, AgShift raised its first $2 million seed round and announced it is working with RJO Produce Marketing.

June 14, 2018

Report: Microsoft Working on Amazon Go-like Cashierless Tech

Microsoft is reportedly working on its own cashierless checkout technology in a bid to take on cross-town rival Amazon, according to a story in Reuters.

The reported technology is similar to the Amazon Go store experience, where what you put in your cart is automatically tracked and charged to you without the need to go through a checkout line or cashier. Reuters goes on to report that Microsoft has engaged in talks with Walmart about the technology.

If true, the news isn’t that surprising for a number of reasons. First, Amazon Go uses technology like computer vision and artificial intelligence to know what you put (and keep) in your bag. Computer vision and AI are two areas of focus for Microsoft research. Second, we’ve known since December that Walmart is exploring its own computer vision-based cashierless store experience (and last month, the retailer killed its Scan and Go approach to cashierless shopping).

Finally, and most obvious, nobody wants to cede even more of the future of shopping to Amazon, and grocery shopping is no exception. Amazon already owns Whole Foods and is expanding discounts and two-hour delivery for its 100 million-strong Prime members. Plus, the first Amazon Go store is very impressive, is expanding into Chicago and San Francisco, and absolutely should be replicated elsewhere.

Moves like these have sent grocery retailers scrambling to compete. Target and Walmart are expanding their two-hour delivery services. Albertsons partnered with Instacart, and Kroger just invested more heavily in Ocado to build out twenty rapid-delivery robot warehouses here in the U.S. Not to mention Walmart experimenting with its own fridge-to-fridge delivery service similar to Amazon Key.

Plus, other smaller players are working on their own versions of cashierless tech. All_ebt has Amazon Go-like ambitions for those on food stamps. Caper has its own computer vision and deep learning smart checkout cart. And AI Poly, whose CEO is speaking at our Smart Kitchen Summit in Seattle, has its own autonomous market in the works.

So while Microsoft provided a big “no comment” for Reuters, the idea of the Redmond giant working on such technology shouldn’t come as a news flash to anyone following the industry.

May 23, 2018

Microsoft Gets Visual Food Logging Patent

Microsoft appears to be applying its computer vision and AI smarts to make watching what you eat easier. The Redmond giant was awarded a patent yesterday for “Food Logging From Images.” That basically means you can take a picture of your food and Microsoft will provide you with its nutritional information (calories, protein, vitamins, etc.).

Yesterday’s patent is a continuation of a previous Microsoft patent from May 2017 for “Restaurant-Specific Food Logging From Images.” Restaurants are called out specifically in the new patent because the Microsoft Food Logger would use GPS to know when you’re at a restaurant. From there, the Food Logger could use information from text menus online, via Yelp! or a restaurant’s own site, to assess nutritional information.

The technology would supposedly also work outside restaurants, using image recognition to understand home cooked meals as well. And there are tools to allow the user to edit or correct any inaccuracies in what the Logger identifies. So if you slathered butter on a piece of bread, you could specify the amount.

The obvious use case for this patent is a mobile phone app, which is listed. However, Microsoft goes even further to say the technology would work with camera-equipped glasses. From how the company describes it, if you walked into a restaurant wearing these hypothetical Food Logger glasses, you would get almost Terminator-like vision, seeing the nutritional content of the various meals people around you were eating.

The idea of taking a picture of food and automatically getting its nutritional content isn’t new. Apps like Lose It and Calorie Mama AI say they offer the same type of functionality. Samsung even recently added Calorie Mama’s technology into its Bixby virtual assistant.

Google, of course, has also been working on food recognition for a while. And this week it came to light that Google is reportedly adding human-powered food identification capabilities to Google Maps. Human-labeled pictures of food, taken from different angles and in different visibility, will help train Google’s image recognition algorithms.

Right now, this is just a patent for Microsoft, so who knows if it will ever make it to market. But that market is huge, and it’s unlikely Microsoft will sit on the sidelines.

April 23, 2018

Intello Labs Uses AI to Help Farmers Get a Fair Price for Their Crops

When we talk about artificial intelligence (AI), we often speak in giant, world-shifting terms about revolutionizing a certain industry. But AI can also benefit a single person at a time. In the case of Intello Labs, its AI can be used to help prevent a poor farmer from getting screwed.

Food inspection is often still done manually. One person’s perfect tomato may be another’s piece of trash, and these basic biases can lead to an imbalance of power. A poor, rural farmer may not be educated on price points or what “fresh” produce means to a buyer. As a result, they may want to sell tomatoes at a dollar apiece, but buyers may scoff, disputing the quality of those tomatoes, and offer only fifty cents. How are farmers to know how much the literal fruits of their labor are actually worth?

Intello Labs is working to help balance these scales through a combination of computer vision and artificial intelligence. Using its mobile phone app, the tomato farmer can take a picture of a bushel of tomatoes and upload it to Intello’s system. The company’s algorithms examine the photo and give it a rating based on a set of government (i.e., USDA) or other criteria. With this objective, algorithmic rating in place, each party in the negotiation knows the quality of the tomatoes being sold — and they can be priced accordingly.

The company started with commodities like tomatoes and potatoes, but according to Sreevidya Ghantasala, Intello Labs’ Head of U.S. Operations, the company’s core technology can be customized for almost any food. It could be used to rate products like seafood and chicken, or even as a tool for plant disease identification. “We have a pest and disease application for six or seven different crops,” said Ghantasala. “Our system is highly customizable. If there’s something we don’t see in our library, we can update it in 2 to 3 months.”

Intello, which is headquartered in Bengaluru, India, has already gone live elsewhere in that country at the farmer’s market in Rajasthan to work with 10,000 farmers there for wheat and grain analysis. The company has also worked with the Reliance Foundation in India to help 100,000 farmers with pest and disease detection for crops.

Pricing for Intello’s software is subscription-based, and Ghantasala wouldn’t provide specific numbers. She said that cost depends on what is being analyzed and what users want to detect. The company was founded in May 2016 and has raised money through friends, family and various accelerator programs. It now has 30 employees across offices in Bengaluru; Stockholm, Sweden; and Plano, Texas.

Intello isn’t the only one using computer vision and AI to generate objective food ratings. Here in the U.S., AgShift is using a similar mobile phone app to provide better data for food buyers in the supply chain to help reduce food waste. And grocery giant Walmart has implemented its own machine learning-based Eden technology to assess food freshness.

But according to Ghantasala, Intello's ambitions go beyond food altogether. The company is working with oil and gas companies in Sweden to apply its computer vision to parts identification, and it wants to expand into hyperspectral imaging for more in-depth analysis.

Intello, it seems, wants to use its AI to change the world. But for now, it's changing the world one farmer at a time.

March 28, 2018

AgShift Raises $2M, Reveals RJO as First Client

Today AgShift, a startup that uses computer vision and deep learning to reduce food waste, announced its first client: RJO Produce Marketing. This news comes just days after AgShift raised $2 million in its first seed round.

AgShift uses its technology to attack food waste generated in the supply chain. Right now, food inspection is done manually at different points along the food system, with workers literally eyeballing product and assessing its quality using their own judgment, which can vary.

“The food supply chain is fragmented,” said Miku Jha, Founder and CEO of AgShift. “Inspections are done by different people at different points.” The results, according to Jha, are “subjective and inconsistent.” One person’s Grade A is another person’s Grade B.

Jha wants to take the subjectivity out of this process with — what else? — a mobile phone app. Instead of just looking at a piece of fruit, inspectors at wholesalers and distributors hold the produce up to the phone’s camera and take a picture (like depositing a check via mobile app, the software guides you for proper positioning). AgShift’s software in the cloud analyzes that image to quantify its bruising, color distribution, average size, mold, etc. to determine its quality.

Using the USDA's produce guidelines (or a customized set of specifications), AgShift says its software can objectively give fruit its proper rating, and provide precise reasons why it made that choice. According to Jha, this level of consistency throughout the food supply chain will deliver higher-quality produce to consumers.

It will also reduce food waste on multiple fronts. AgShift analyzes color distribution better than the human eye can. If, for example, it sees at a shipping point that a batch of strawberries is 90 percent dark red, it can tell suppliers that the batch is riper. The suppliers can then divert those riper berries to closer destinations rather than risk them spoiling on a cross-country trip.
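The routing logic described above can be sketched in a few lines. This is an assumed simplification, not AgShift's implementation: the pixel labels, the 90 percent threshold, and the routing strings are all hypothetical stand-ins for whatever the real pipeline produces.

```python
# Illustrative sketch of color-distribution-based routing (not AgShift's
# actual code). Assumes per-pixel color labels from an inspection photo.

from collections import Counter


def ripeness_share(pixel_labels: list[str]) -> float:
    """Fraction of pixels classified as 'dark_red' (treated here as ripe)."""
    counts = Counter(pixel_labels)
    return counts["dark_red"] / len(pixel_labels)


def routing_advice(share: float, threshold: float = 0.9) -> str:
    """Riper shipments go to nearby destinations to avoid spoiling in transit."""
    return "divert to nearby market" if share >= threshold else "ok for long haul"


# Example: a batch that is 92% dark red crosses the ripeness threshold.
labels = ["dark_red"] * 92 + ["light_red"] * 8
print(routing_advice(ripeness_share(labels)))  # divert to nearby market
```

The point of the example is that once ripeness is a number rather than a judgment call, routing decisions can be made consistently at every shipping point.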

AgShift can also reduce food waste by removing human judgment from the equation. Right now, vendors and buyers might dispute the rating of a food shipment. This can lead to canceled orders and food getting thrown out. With a computer generated rating, lower quality food can be assessed earlier in the supply chain and re-sold at a lower price or re-purposed, rather than discarded.

Industry watchers may note that AgShift sounds a lot like the Eden technology Walmart recently rolled out at its grocery stores to prevent food waste. Walmart is obviously a giant and a master of the supply chain, but Eden appears to be Walmart-specific. AgShift’s agnostic platform will give it a broad range of potential customers.

AgShift’s platform is already in trial use by a number of companies, the first of which to be publicly announced is RJO Produce Marketing. According to the press release, RJO provides “quality assurance inspections, in-depth market analysis and category management services for key perishable commodities.”

While the $2 million raised by AgShift is a rounding error for a company like Walmart, it's just the start for this startup. The Sunnyvale-based company was founded two years ago and has 12 people working across its California and India offices. Jha said the money will be used to fund R&D and accelerate product development.

Jha’s mission with AgShift is a global one. As she points out, we spend a lot of time talking about growing more food for a growing population — but a good first step is reducing the amount of food we waste right now.
