
The Spoon

Daily news and analysis about the food tech revolution


deep learning

July 12, 2019

BluePlanet Ecosystems’ Aquaculture System Mimics Nature to Grow Fish Sustainably

The U.S. gets over half of all its seafood from aquaculture. The process is meant to give some relief to overfished ocean stocks and create a more reliable seafood source, but it doesn't always work out that way.

“The problem is that the production method is still connected to the oceans,” said Paul Schmitzberger, CEO and co-founder of BluePlanet Ecosystems, over the phone. He explained that farmed fish are often fed small fish caught from oceans, and the smaller fish might contain mercury or microplastics. And even when fish farms use soy for their animal feed, it comes with an environmental cost all its own.

But what if the aquaculture farms could produce their own food and repurpose their own waste, just like in natural ecosystems? Schmitzberger and his team at BluePlanet Ecosystems, which was incorporated in 2018, set out to do just that.

The Austrian company’s LARA (Land-based Automated Recirculating Aquaculture) is a completely turnkey system which consists of three horizontal units stacked on top of each other. The top unit uses energy from the sun to grow microalgae, the “power source” for the entire system. The microalgae is then pumped down to the second unit where it feeds tiny zooplankton. Finally, the zooplankton is moved down to the bottom unit, where it’s eaten by fish and crustaceans. Any waste from the sea life is recirculated up to the top to be used as food for the algae.

In short, it’s a completely contained system that grows seafood without the need to input any food or extract any waste.


When I first heard about the LARA, I had one question: Why had nobody else done this before? According to Schmitzberger, the answer is because it’s extremely difficult to manage. Without constant monitoring, the delicate balance of algae/zooplankton/seafood can get out of whack and ruin the whole system.

However, BluePlanet Ecosystems believes they can do what others couldn’t by adding something that nature doesn’t have: high tech. The startup uses a combination of water sensors, computer vision and machine learning to constantly monitor and optimize the dynamics of the tanks. “We’re monitoring every organism in the system,” said Schmitzberger.

For example, if the cameras notice there are too many plankton in the middle tank, the system will automatically activate a water pump to flush a portion of those plankton out to be eaten. If the software recognizes that a fish is sick, it can alert managers to look for a root cause before the rest of the population falls ill.
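BluePlanet hasn't published its control software, but the behavior Schmitzberger describes maps onto a fairly simple supervisory loop. The Python sketch below is purely illustrative: the threshold, the TankReading fields and the pump/alerts interfaces are assumptions for this article, not details from the company.

```python
# Illustrative sketch only -- not BluePlanet's actual software.
# It mirrors the behavior described above: flush excess zooplankton
# down to the fish tank, and alert managers about fish flagged as sick.
from dataclasses import dataclass, field

PLANKTON_MAX_PER_LITER = 500  # made-up threshold, for illustration only

@dataclass
class TankReading:
    plankton_per_liter: float          # estimated by a computer-vision model
    sick_fish_ids: list = field(default_factory=list)

def control_step(reading: TankReading, pump, alerts) -> None:
    # Too many zooplankton in the middle unit: flush a portion downward.
    if reading.plankton_per_liter > PLANKTON_MAX_PER_LITER:
        excess = reading.plankton_per_liter - PLANKTON_MAX_PER_LITER
        pump.flush(fraction=min(0.25, excess / reading.plankton_per_liter))
    # Any fish the vision model flags as sick triggers a human alert.
    for fish_id in reading.sick_fish_ids:
        alerts.notify(f"Fish {fish_id} looks unwell; check for a root cause.")
```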

The LARAs can fit in 40-ton shipping containers and contain 100 cubic meters of water. They’re modular, so companies can connect multiple units together to be managed under BluePlanet’s software. The units will cost about $50,000 each plus a recurring $2,500 fee for every ton of seafood produced through the system, which Schmitzberger estimates companies should be able to recoup in four to five years. He said they hope to start constructing full-size prototypes over the next six months, with plans to deliver them at the beginning of 2021.

BluePlanet Ecosystems just signed a letter of intent with SATS, Singapore's largest caterer, which processes 100 tons of seafood every month. The company has $250,000 in funding from IndieBio and is currently raising a $2.8 million seed round.

Several companies are already putting parts of the LARA concept into practice. TimberFish has a no-waste aquaculture system built on brewery waste and woodchips. Aquabyte uses machine learning and computer vision to optimize fish maintenance in seafood farm pens. BluePlanet, however, combines both of those value-adds into one system, and promises to do it without any outside inputs.

For now, LARAs are untested and expensive. But if they work as promised, we might see our farmed seafood sources gradually become more sustainable.

June 29, 2018

AgShift Expands its Deep Learning Food Inspection to Nuts and Seafood

AgShift, a startup that uses computer vision and deep learning to determine food quality in the supply chain, is broadening its capabilities beyond produce into edible nuts and seafood.

We named AgShift one of our Food Tech 25 companies last month because of the way it allows produce buyers and sellers to come to an objective price. AgShift's software lets inspectors take a picture of fruit with their mobile phone and upload it to the cloud, where AgShift's deep learning algorithms assess its quality (color, bruising, etc.) and measure that against USDA or other customized ratings to provide a food grade.
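AgShift hasn't published how its cloud service maps model outputs to grades, but the general shape of that final step is easy to sketch. In the toy Python example below, the defect percentages and grade thresholds are invented for illustration; only the idea of scoring images against a standard comes from the article.

```python
# Toy example: map defect scores from an image model onto a USDA-style grade.
# The thresholds are invented; real USDA standards are far more detailed.

def grade_from_scores(bruised_pct: float, off_color_pct: float) -> str:
    """Return a simplified grade for a lot of fruit."""
    if bruised_pct < 5 and off_color_pct < 10:
        return "Grade A"
    if bruised_pct < 15 and off_color_pct < 25:
        return "Grade B"
    return "Below grade"

# Scores like these would come from the cloud-hosted vision model.
print(grade_from_scores(bruised_pct=3.2, off_color_pct=7.5))    # Grade A
print(grade_from_scores(bruised_pct=12.0, off_color_pct=30.0))  # Below grade
```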

AgShift is expanding this type of objective algorithmic grading to inspect edible nuts, starting with cashews, and is in the process of exploring seafood inspections.

To do this, AgShift has developed its own hardware to do bulk analysis. This analyzer is an enclosed box with cameras optimized for deep learning that is installed on-site at a food processing facility. Food, in this case cashews, is loaded into the analyzer, which takes pictures of the product and sends the images through AgShift's platform for analysis and grading.

Miku Jha, AgShift Founder and CEO, told me that using the analyzer on-site allows for better, and more efficient, large-scale assessments. She said that traditionally cashews are examined by hand, with inspectors looking at one or two pounds of nuts at a time. That takes time, and after being touched, those particular nuts need to be discarded. Both time and waste can add up when you’re processing literally tons of cashews. Using AgShift’s analyzer, sampling can be done faster and samples do not need to be thrown out because of the workflow at the processing facility.

Jha said that AgShift has finalized production units of its analyzer, and that the company has already been working with a global nut processor doing inspections and assessing the quality of their cashews.

AgShift's expansion into seafood is not as far along. Jha said that AgShift's deep learning models can help the seafood supply chain with objective species identification and freshness assessment, as well as determine any change in the quality of fish as they move through the supply chain. The company is still looking for its first customers in that field.

AgShift isn’t alone in its computer vision and deep learning approach to food inspection. Intello Labs uses the same type of technology to help farmers get a fair price from buyers in India, and it too is eyeing a move into seafood. Here at home, Walmart is using its Eden technology to assess general food freshness.

Earlier this year, AgShift raised its first $2 million seed round and announced it is working with RJO Produce Marketing.

May 30, 2018

How An Obscure Academic Project May Have Just Started A Kitchen Robot Revolution

Imagine it’s 2031 and you’ve sat down for dinner with your family.

It's the middle of the week, so tonight's meal is nothing too ambitious: mac and cheese or fajitas. As is the usual routine, you catch up with the family and share a few laughs until the meal is finally served, at which point everyone loads their plates and starts chowing down on what turns out to be a tasty dinner (the third one this week!). Soon your youngest, the finicky one, asks for seconds.

Congrats parent, another successful meal, but don’t spend too much time patting yourself on the back because here’s the thing: Neither you nor your significant other spent any time preparing tonight’s dinner.  Instead, tonight’s dinner – and every dinner this week – was prepared in its entirety by a robot, the very same robot who is now in the kitchen cleaning up after dinner and preparing dessert.

Futuristic? Yes. A science fiction movie cliche? Definitely. But the above scenario may also be a very realistic possibility, in large part due to an obscure research project involving 32 GoPro-adorned home cooks making dinner.

Creating A Technology Big Bang

With any technology that changes the world, there's almost always a research breakthrough or two that helps unleash innovation. In today's world of AI and robotics, most experts would agree that one of these technological "big bangs" was the 2012 ImageNet Challenge entry from a research team led by the University of Toronto's Geoff Hinton.

ImageNet is a crowdsourced database of millions of annotated images. The accompanying ImageNet Challenge is an annual contest in which teams of machine vision researchers pit their algorithms against one another on the ImageNet dataset, competing to achieve the highest accuracy.

Hinton's 2012 team achieved what is widely believed to be a breakthrough in AI research, using deep learning techniques to reach much greater accuracy than anything before it (85%). Since that breakthrough six years ago, there have been leaps forward each year; today's ImageNet Challenge teams routinely achieve 95% accuracy, better than most humans, helping to drive significant progress in all corners of the AI world, from autonomous driving to augmented reality to industrial and consumer robotics.
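For a sense of how accessible those once-record-setting models have become, here is a short sketch using the open-source torchvision library (not the original 2012 code) to load a pretrained ImageNet classifier and read off its top-5 predictions, the metric those accuracy figures refer to. The image path is a placeholder.

```python
# Query a pretrained ImageNet classifier with torchvision (illustrative;
# "some_photo.jpg" is a placeholder path).
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.IMAGENET1K_V2
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()           # resize, crop, normalize

img = preprocess(Image.open("some_photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = model(img).softmax(dim=1)

top5 = probs.topk(5)
for idx, p in zip(top5.indices[0].tolist(), top5.values[0].tolist()):
    print(f"{weights.meta['categories'][idx]}: {p:.1%}")
```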

All of which brings us back to the kitchen.

And Now Into the Kitchen (The Epic Kitchen)

Now, a group of research academics is trying to create the equivalent of an ImageNet for the kitchen. Called EPIC-KITCHENS, the project is an ambitious effort to capture people performing natural tasks in their home kitchens, like cooking, cleaning and doing laundry, and then release the resulting millions of annotated images into the wild. The ultimate goal behind EPIC-KITCHENS is to create an open dataset of kitchen-centric objects, behavior and interactions on which researchers across the world can then focus their deep-learning algorithms, in the hope of advancing artificial intelligence in the kitchen.

Why the kitchen? According to the study’s lead, Dr. Dima Damen, the kitchen is one of the most complex environments in everyday life for artificial intelligence to master because it involves so many tasks and actions.

EPIC-KITCHENS 2018 TRAILER

“The most challenging type of object interactions tend to be in our kitchen,” said Damen in a phone interview I conducted last month. “We’re doing lots of tasks, on short notice, we’re multitasking. We might be adding something to our meal and moving something around. That makes the kitchen environment the most challenging environment for our types of perception.”

Damen, who is with the University of Bristol in the UK, partnered with researchers at the University of Toronto and Italy’s University of Catania to bring the project to life. The project took about a year to complete and involved a panel of 32 home cooks across ten nationalities in four cities in Europe (United Kingdom) and North America (Canada and US). To capture their activity, each participant mounted a GoPro on their head and went through 1-5 hours of preparing meals, cleaning and whatever else came naturally.

“We gave them a camera, sent them home, and said just record whatever you are doing in your kitchen for 3-5 days,” said Damen.

From there, the participants watched and narrated their videos, so researchers had an audio track from which to manually annotate the atomized images (11.5 million in all) captured in the 55 hours of video.

The result is a massive database its creators hope will help researchers in training their AI systems to better understand the kitchen. Like ImageNet, the creators also hope to foster competition with challenges and will track the progress with online leaderboards.

The data itself is something many will find somewhat mundane:

Distribution of actions in kitchen. Source: Epic Kitchens

The above distribution of annotated actions and objects is what you would probably expect: a really long list of things found in the kitchen, like vegetables, kitchenware and spices. Same for actions: the distribution breaks down pretty much all the verbs we perform in the kitchen, such as put, take, twist and so on.
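If you wanted to reproduce that kind of tally yourself, the released annotations make it straightforward. The sketch below assumes a CSV with one row per annotated action and columns named verb and noun; the file name and exact schema are assumptions and should be checked against the dataset's release.

```python
# Count verb/noun frequencies in an EPIC-KITCHENS-style annotation file.
# File name and column names are assumptions; verify against the released data.
import pandas as pd

ann = pd.read_csv("EPIC_train_action_labels.csv")  # one row per annotated action
print(ann["verb"].value_counts().head(10))         # e.g. put, take, open, wash ...
print(ann["noun"].value_counts().head(10))         # e.g. pan, plate, knife, tap ...
```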

And that's the point of collecting data at this granularity, at least if you're a researcher hoping to train an artificial intelligence system. Just as this type of granular data helped ImageNet Challenge teams achieve a 95% accuracy rate with their software, the EPIC-KITCHENS team hopes to reach a similar level of accuracy. By helping these systems understand what everyday objects are and how people manipulate them, day in and day out, to perform the basic functions of life in the kitchen, like cooking and cleaning, the EPIC-KITCHENS data and what evolves out of it can provide a foundation on which technologists can eventually create robots that act like humans and perform human-like functions in the kitchen.

The result could be an explosion of innovation in spaces like augmented reality, personalized food identification apps and, yes, cooking robotics. And while a fully functional Rosie the home cooking robot could be the ultimate end result of this research a decade from now, chances are we'll see more evolutionary improvements between now and then in the form of smarter appliances, more capable virtual assistants and more immersive guided cooking experiences.

And oh yeah: if you're the type who wants to keep the robots out of the kitchen altogether, don't worry. One of the biggest challenges with machine understanding of food is that the three-dimensional human comprehension of taste, smell and texture is extremely hard to replicate with machines. Add in AI's difficulty understanding context, and it makes me think that while we may eventually get to cooking robots, they may only be average cooks at best.

The real artists, the chefs, whether home-based or on TV, are probably safe from the robot invasion.

Probably.

March 28, 2018

AgShift Raises $2M, Reveals RJO as First Client

Today AgShift, a startup that uses computer vision and deep learning to reduce food waste, announced its first client: RJO Produce Marketing. This news comes just days after AgShift raised $2 million in its first seed round.

AgShift uses its technology to attack food waste generated in the supply chain. Right now food inspection is done manually at different points along the food system, with workers literally eyeballing product to assess its quality using their own judgment, which can vary.

“The food supply chain is fragmented,” said Miku Jha, Founder and CEO of AgShift. “Inspections are done by different people at different points.” The results, according to Jha, are “subjective and inconsistent.” One person’s Grade A is another person’s Grade B.

Jha wants to take the subjectivity out of this process with — what else? — a mobile phone app. Instead of just looking at a piece of fruit, inspectors at wholesalers and distributors hold the produce up to the phone’s camera and take a picture (like depositing a check via mobile app, the software guides you for proper positioning). AgShift’s software in the cloud analyzes that image to quantify its bruising, color distribution, average size, mold, etc. to determine its quality.

Using the USDA's produce guidelines (or a customized set of specifications), AgShift says its software can objectively give fruit its proper rating, and provide precise reasons why it made that choice. According to Jha, this level of consistency throughout the food supply chain will deliver higher-quality produce to consumers.

It will also reduce food waste on multiple fronts. AgShift analyzes color distribution better than the human eye. So if, for example, it sees strawberries that are 90 percent dark red at a shipping point, it can tell suppliers that the fruit is riper. The suppliers can then divert those riper berries to closer destinations, rather than running the risk of them spoiling on a cross-country trip.
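As a rough illustration of what "analyzing color distribution" can mean in code, the sketch below uses OpenCV to estimate the share of dark-red pixels in a photo. The file path and HSV thresholds are guesses for illustration, not AgShift's method.

```python
# Estimate the fraction of "dark red" pixels in a strawberry photo (illustrative
# only; thresholds and the image path are made up, and this is not AgShift's code).
import cv2
import numpy as np

img = cv2.imread("strawberries.jpg")              # BGR image from an inspector's photo
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red wraps around hue 0/180 in OpenCV, so combine two ranges;
# a lower "value" ceiling biases the mask toward darker reds.
mask_low = cv2.inRange(hsv, (0, 120, 40), (10, 255, 180))
mask_high = cv2.inRange(hsv, (170, 120, 40), (180, 255, 180))
dark_red = cv2.bitwise_or(mask_low, mask_high)

fraction = np.count_nonzero(dark_red) / dark_red.size
print(f"{fraction:.0%} of pixels read as dark red")  # ~90% suggests very ripe fruit
```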

AgShift can also reduce food waste by removing human judgment from the equation. Right now, vendors and buyers might dispute the rating of a food shipment. This can lead to canceled orders and food getting thrown out. With a computer-generated rating, lower-quality food can be assessed earlier in the supply chain and re-sold at a lower price or re-purposed, rather than discarded.

Industry watchers may note that AgShift sounds a lot like the Eden technology Walmart recently rolled out at its grocery stores to prevent food waste. Walmart is obviously a giant and a master of the supply chain, but Eden appears to be Walmart-specific. AgShift’s agnostic platform will give it a broad range of potential customers.

AgShift’s platform is already in trial use by a number of companies, the first of which to be publicly announced is RJO Produce Marketing. According to the press release, RJO provides “quality assurance inspections, in-depth market analysis and category management services for key perishable commodities.”

While the $2 million raised by AgShift is a rounding error for a company like Walmart, it's just the start for this startup. The Sunnyvale-based company was founded two years ago and currently has 12 people working in its California and India offices. Jha said the money will be used to fund R&D and speed up product development.

Jha’s mission with AgShift is a global one. As she points out, we spend a lot of time talking about growing more food for a growing population — but a good first step is reducing the amount of food we waste right now.

March 2, 2018

Samsung Adds Food Image Recognition To Bixby Through Calorie Mama API

Search has come a long way since the earliest algorithms deployed by Google, Lycos, and Inktomi. After conquering basic and complex queries, search engines set their sights on images, video, and audio as frontiers that required new ways of looking at metadata to provide consumers with useful results.

Image recognition has been a focus of developers wanting to add value to the basic ability to capture and identify a picture of a peach or a fast food meal at Wendy’s. The key, as exemplified by companies such as Palo Alto-based Azumio, is to link image recognition to valuable datasets. For Azumio’s Calorie Mama AI-powered platform, the company offers an API available for third-party developers as well as a consumer download which allows users to track nutrition intake.
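Azumio documents its Calorie Mama API for partners, and those specifics aren't reproduced here; the sketch below only shows the generic shape of an image-recognition-to-nutrition call. The endpoint URL, auth scheme and response fields are placeholders, not Azumio's actual interface.

```python
# Generic food-recognition call; the URL, auth scheme and response fields are
# placeholders for illustration, not the documented Calorie Mama API.
import requests

def recognize_food(image_path: str, api_key: str) -> list:
    with open(image_path, "rb") as f:
        resp = requests.post(
            "https://example.com/v1/food-recognition",       # placeholder endpoint
            headers={"Authorization": f"Bearer {api_key}"},   # placeholder auth
            files={"image": f},
        )
    resp.raise_for_status()
    return resp.json().get("results", [])   # hypothetical response shape

for item in recognize_food("lunch.jpg", api_key="YOUR_KEY"):
    print(item.get("name"), item.get("calories"))
```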

While Azumio faces competition from Google and Pinterest, a new partnership with Samsung may allow the folks behind Calorie Mama to separate themselves from the pack. Samsung has announced a working relationship with Azumio to add its Calorie Mama technology to Bixby, the South Korean giant's AI personal assistant platform. Calorie Mama will be baked into the new Galaxy S9 and S9+, enabling users to obtain instant nutritional information about the food they eat.

“Our vision for the Calorie Mama API is to provide the best food image recognition technology to our partners,” Tom Xu, co-founder of Azumio said in a press release, “and to simplify nutrition tracking and food discovery for healthy living to their customers.”

While this announcement is a nice-to-have for Samsung smart device users, the true value goes far beyond the basic ability to count calories and set nutrition goals. Azumio's work is primarily focused on health-related areas such as diabetes and sleep disorders. The company's Argus platform offers activity and diet tracking along with a social network focused on health and fitness. Argus powers the company's suite of applications, such as Instant Heart Rate, Sleep Time, Fitness Buddy and Glucose Buddy. The endgame of connecting food recognition with health-related applications, focusing on those in which diet management is essential, is where the real power lies.

Samsung and Azumio’s combined efforts began in 2013 when the Argus platform was offered in Samsung’s Gear line of devices. Samsung continues to work on its own proprietary health and fitness apps, but those have not been offered to consumers outside of Korea. The value of adding Calorie Mama to Bixby could show great promise. For example, a cook wanting to create a healthy meal could ask Bixby to recommend a substitute for a high-fat ingredient by tapping into its database of image data.

Pinterest, which is preparing for a possible IPO, will undoubtedly rise to the challenge and go beyond its image recognition work with Google (called Lens), which allows users to find recipe pins based on captured pictures. Pinterest recently hired a new head of computer vision, Chuck Rosenberg, a 14-year Google veteran. Given that the primary task of computer vision technology is to analyze images and tie those results to associated data, Pinterest is on an accelerated path in this space. Unlike Samsung, which works in a somewhat closed ecosystem, Pinterest will make its application available to all takers.
