
The Spoon

Daily news and analysis about the food tech revolution

computer vision

February 7, 2018

Aquabyte Casts its Machine Learning to Improve Fish Farming

It was estimated at one point that Dwayne “The Rock” Johnson ate upwards of 821 pounds of cod per year. He’s certainly an outlier, but as global demand for seafood increases, fish farms are rising to meet the challenge, with the aquaculture market projected to reach $219.42 billion by 2022. Already, half the seafood eaten in the U.S. is farmed, and a startup called Aquabyte is using machine learning and computer vision to make those farms more efficient and productive.

Using cameras mounted in fish farm pens, Aquabyte’s software monitors data such as the fishes’ biomass, feed consumption and sea lice counts. Armed with this data, fish farmers can better understand their inventory, optimize the feed process and maintain regulatory compliance to reduce harmful impact on the surrounding environment.

Aquabyte takes the data from the pen cameras and applies it to models developed by fish nutritionists to determine the optimal amount of feed to distribute, which is calculated as a percentage of the fishes’ biomass. The software can more accurately estimate the biomass of the fish in a pen, and it can also “watch” feed pellets fall through the water to determine how many of them the fish are actually eating.
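To make that arithmetic concrete, here is a minimal sketch of a feed-as-a-percentage-of-biomass calculation. The function and its inputs are hypothetical illustrations under the assumptions above; Aquabyte has not published its models.

```python
# A minimal sketch of the feed calculation described above, assuming the
# pen cameras yield per-fish biomass estimates and a nutritionist supplies
# a feed rate as a percentage of biomass. All names are hypothetical.

def daily_feed_kg(biomass_estimates_kg: list[float], feed_rate_pct: float) -> float:
    """Total feed to distribute per day, as a percentage of total biomass."""
    return sum(biomass_estimates_kg) * feed_rate_pct / 100.0

# Example: a (tiny) pen of three salmon, fed at 1% of biomass per day.
print(daily_feed_kg([3.8, 4.1, 4.3], feed_rate_pct=1.0))  # -> 0.122 kg of pellets
```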

Aquabyte claims its software saves farmers money by preventing overfeeding, and it also promises to eliminate over- and underselling by giving farmers a better sense of how many fish they will actually produce for sale.

Aquabyte’s computer vision and machine learning can also yield a positive impact on the local environments of fish farms. One of the ways it does this is by quantifying sea lice, which can be a destructive force in the close quarters of a fish farm. Not only can sea lice destroy a farmer’s inventory, they can also escape and attach themselves to passing wild fish, eventually killing native fish populations. The problem has gotten so bad that Norway has enacted regulations forcing farms to control the number of sea lice in their pens or face heavy penalties.

Traditionally, sea lice quantification is done manually by netting fish out of the water and counting the lice by hand. Aquabyte’s software, however, can automate this process to keep fish farms within regulatory compliance, without requiring anyone to hand-count sea lice.
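As a rough illustration of what that automation might look like, the sketch below runs a generic object detector over pen-camera frames and reports the average number of lice per fish, the ratio Norway’s regulations cap. The detector interface and label names are hypothetical stand-ins; Aquabyte has not published its implementation.

```python
# A rough sketch of automated sea lice counting with a generic object
# detector. The Detection type and label names are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Detection:
    label: str         # e.g. "fish" or "sea_louse"
    confidence: float  # detector score in [0, 1]

def lice_per_fish(frames: Iterable,
                  detect: Callable[[object], list[Detection]],
                  min_confidence: float = 0.8) -> float:
    """Average sea lice per fish across a batch of pen-camera frames."""
    lice = fish = 0
    for frame in frames:
        for d in detect(frame):
            if d.confidence < min_confidence:
                continue
            if d.label == "sea_louse":
                lice += 1
            elif d.label == "fish":
                fish += 1
    # Norway's rules cap the average number of lice per fish, so that
    # ratio is the figure a farmer actually needs to track.
    return lice / fish if fish else 0.0
```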

Aquabyte was founded just under a year ago and has offices in San Francisco and Bergen, Norway, where its software is currently running on a few fish farms. (Between this and Hatch, Bergen is becoming a hotbed of aquaculture tech.)

Aquabyte is targeting salmon farming in Norway to start, and the company just raised a $3.5 million seed funding round last week. It will use the money to build out a team of software engineers in San Francisco as it works to bring the commercial version of its software platform to market.

Future versions of the software will work in other parts of the world and with other fish such as trout, sea bass, and hopefully–assuming The Rock’s appetites don’t diminish–cod.

You can hear about Aquabyte on our Daily Spoon podcast. You can also subscribe in Apple Podcasts or through our Amazon Alexa skill.

June 1, 2017

Is Nest’s New Face Recognition Cam A Sign It’s Waking From Its Slumber?

One of the great mysteries of the smart home world over the past few years has been the relative quiet of Nest, the one-time connected home star that burst onto the scene with the launch of its impressive Nest Learning Thermostat and, two years later, a smoke and carbon monoxide detector by the name of Nest Protect.

After Google acquired Nest in early 2014, updates slowed to a crawl, and much of the news that did come out of the company during this time was bad. While there was occasional news about Works with Nest and Thread, you were just as likely to read about management dysfunction under Tony Fadell and product recalls. The only wholly new product line introduced into the Nest family during this time was the Nest Cam, a product that, in reality, owes more to Dropcam (another Google acquisition) than to internal development at Nest.

But now there are indications the company might be waking up from its long slumber. In a recent story in The Verge, Nest product manager Maxime Veron hinted that the rest of 2017 should be busy for the company:

Verge writer Vlad Savov writes: “Veron tells me Nest’s relative silence in recent times is about to be a thing of the past. ‘I am very excited about the next six months,’ he says with a grin, ‘we’ll talk again soon and you’ll understand why.’”

Savov goes on to speculate that the next product might be a cheaper home thermostat. A welcome addition, to be sure, but I think that for those who witnessed Nest’s early days of innovation, a lower-cost version of an existing product is hardly something to get excited about.

The product I’m excited to see is Nest’s long-rumored home security system. My own sources have confirmed the existence of this long-gestating project and, given Google’s patent filings in the space, I think a Nest home security system could be truly differentiated. Add in the fact they just introduced a camera with facial recognition capabilities – an interesting potential component of a smart security system – and we may be getting close.

Another potential product is a video doorbell. While the market is certainly crowded at this point, Nest’s brand name and its recent creation of a Pro group that supports home builders and integrators could help a Nest video doorbell gain traction.

Whatever Nest does, chances are it’s been limited to a certain set of products by its parent company. Alphabet/Google has let non-Nest groups develop products in some of the most interesting areas – voice assistants and mesh Wi-Fi for example – while Nest has largely stuck with thermostats and cameras.

One thing is certain: the company’s new Nest Cam itself is a sign of progress. With it, the company has started to integrate image-based AI into its Nest cams, a potential indication that it – like Amazon – sees computer vision as one of the key new frontiers in the smart home.

And who knows? If the new camera and the hints being dropped are any indication, maybe Nest truly does have something new and interesting up its sleeve.

Make sure to subscribe to the Spoon newsletter to get it in your inbox. And don’t forget to check out Smart Kitchen Summit, the only event on the future of food, cooking, and the kitchen.

May 16, 2017

As Others Play Voice Catch Up, Amazon Turns Sights To Computer Vision

As everyone from Microsoft to Apple rolls out first-generation voice assistants for the home, the reigning voice control champ Amazon has turned its attention to a new area within the smart home AI universe: computer vision.

For evidence of this focus, one needs to look no further than two new products introduced by Amazon in the past month. While both the Echo Look and Show have the same built-in voice assistant power of Alexa as their predecessors, there is one big difference: both new entrants have cameras. And while Amazon hasn’t come out and said these two new devices are the beginning of a new strategic front in the AI-powered smart home, an examination of these products’ capabilities, recent efforts to bolster the AWS AI lineup and recent statements by Amazon CEO Jeff Bezos help to connect all the dots.

Rekognizing A Pattern

So why the sudden interest in putting cameras in the home? My guess is that it’s due in part to Amazon’s growing emphasis over the past year on its own computer vision-powered AI capabilities.

That growing interest became more evident a year ago with the acquisition of Orbeus, the company that provided the foundation for Rekognition, AWS’s current computer vision service. According to Richard Michael, the company’s former COO, Orbeus provided a “proprietary cloud-based image analysis solution that makes sense of not just faces, but also scenes and objects.”

By last October, the company had relaunched the Rekognition service as part of its suite of AWS AI products. In a blog post, AWS Chief Evangelist Jeff Barr described how Rekognition could be used:

If you have a large collection of photos, you can tag and index them using Amazon Rekognition. Because Rekognition is a service, you can process millions of photos per day without having to worry about setting up, running, or scaling any infrastructure. You can implement visual search, tag-based browsing, and all sorts of interactive discovery models.

You can use Rekognition in several different authentication and security contexts. You can compare a face on a webcam to a badge photo before allowing an employee to enter a secure zone. You can perform visual surveillance, inspecting photos for objects or people of interest or concern.

You can build “smart” marketing billboards that collect demographic data about viewers.
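For a concrete sense of the calls Barr is describing, here is a brief sketch using the boto3 SDK against the published Rekognition API. The image filenames are placeholders; everything else follows the documented interface.

```python
# Two of the Rekognition calls Barr describes, via the boto3 SDK.
# The image files are placeholders; the API calls themselves are real.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Tag and index a photo: returns labels such as "Food" or "Kitchen",
# each with a confidence score.
with open("photo.jpg", "rb") as f:
    labels = rekognition.detect_labels(
        Image={"Bytes": f.read()}, MaxLabels=10, MinConfidence=75
    )
for label in labels["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))

# The badge-check example: compare a webcam frame against a badge photo.
with open("webcam.jpg", "rb") as w, open("badge.jpg", "rb") as b:
    match = rekognition.compare_faces(
        SourceImage={"Bytes": w.read()},
        TargetImage={"Bytes": b.read()},
        SimilarityThreshold=90,
    )
if match["FaceMatches"]:
    print("Badge match similarity:", match["FaceMatches"][0]["Similarity"])
```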

While Amazon hasn’t come out and announced that Rekognition powers the Echo Look, the company’s “fashion assistant,” the Look’s features tell me it most likely does. The device, which lets users take selfies and build a “style book” that the Look then analyzes to make recommendations, has a feature called Style Check:

Style Check keeps your look on point using advanced machine learning algorithms and advice from fashion specialists. Submit two photos for a second opinion on which outfit looks best on you based on fit, color, styling, and current trends. Over time, these decisions get smarter through your feedback and input from our team of experienced fashion specialists.

This is exactly what the Rekognition API does. By combining machine learning with computer vision, Rekognition is constantly learning, becoming better and better at analyzing images as the set of data it trains on grows. For the Echo Look, the end result is better recommendations. And while this is a fashion-centric use case focused on color, style, and fit, there’s no doubt the technology can be applied to a variety of use cases, ranging from home security to analyzing the contents of a refrigerator.

And what about the Echo Show? While Amazon doesn’t highlight the Show’s image recognition capabilities, my guess is that Amazon will give the Show Rekognition-powered computer vision over time to add enhanced functionality.

A “Horizontal Enabling Layer”

Recent comments from Amazon CEO Jeff Bezos help explain the company’s ongoing effort to push AI services beyond just Alexa. In a recent interview at the Internet Association gala, he shared his thoughts on AI (per GeekWire):

“Machine learning and AI is a horizontal enabling layer. It will empower and improve every business, every government organization, every philanthropy — basically, there’s no institution in the world that cannot be improved with machine learning. At Amazon, some of the things we’re doing are superficially obvious, and they’re interesting, and they’re cool. And you should pay attention. I’m thinking of things like Alexa and Echo, our voice assistant, I’m thinking about our autonomous Prime Air delivery drones. Those things use a tremendous amount of machine learning, machine vision systems, natural language understanding and a bunch of other techniques.”

“But those are kind of the showy ones. I would say, a lot of the value that we’re getting from machine learning is actually happening beneath the surface. It is things like improved search results. Improved product recommendations for customers. Improved forecasting for inventory management. Literally hundreds of other things beneath the surface.”

While Bezos points to the voice assistant tech in Alexa and Echo, he also gives a nod to machine vision, and he describes all of these technologies as a “horizontal enabling layer.” What does he mean by this? In short, he is describing AI as a technology that is broadly applicable to almost every application, whether enterprise or consumer, and whose addition can bring immense value to the end product.

With Alexa, Amazon was able to show, not tell, the value of voice control. That is very powerful. I am sure the company hopes that, in a similar way, the Echo Look and Show can act as ambassadors for computer vision to the broader world. And while we may not witness the same kind of explosive adoption of Amazon-powered computer vision AI as we did with Alexa, in part because there are already a number of products on the market that do basic image analysis using AI (such as Closeli), I do believe Amazon can raise awareness of how image recognition and detection AI can enhance a variety of smart home and consumer use cases.

Can Amazon Overcome The Creep Factor?

One last caveat: sales of inward-facing cameras for the home have plateaued in recent years, while outward-facing security cameras like the Ring and Arlo have flown off shelves. The reason is that people want to know what’s going on outside their home, but they don’t want anyone – including potential hackers – seeing what’s going on inside. With all the stories of security vulnerabilities, who can blame them?

While Amazon seems unbothered by this, it remains to be seen whether its new interest in video AI will meet pushback from consumers.

Only time – and maybe Alexa – will tell.
