The Spoon

Daily news and analysis about the food tech revolution


July 20, 2020

The EPIC-KITCHENS Project is Building a Foundation For Artificial Intelligence in the Kitchen

From the post in 2018:

The ultimate goal behind EPIC-KITCHENS is to create an open dataset about kitchen-centric objects, behavior, and interactions upon which researchers across the world can then focus their deep-learning algorithms, in the hope of advancing artificial intelligence in the kitchen.

Since those early days, the project has continued to progress, recently releasing a newly expanded dataset and publishing the results of its second annual challenge. The first research challenge, completed in 2019, focused on having researchers build models that can recognize actions in the kitchen. The recently completed challenge focused on action anticipation, asking researchers to predict what action would take place one second after the end of the observed video.
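To make the anticipation task concrete, here is a minimal sketch, assuming pre-extracted frame features and PyTorch, of a model that looks at the observed clip and predicts the verb and noun of the action that starts one second later. The layer sizes and class counts are illustrative assumptions, not the official EPIC-KITCHENS baseline.

```python
# Minimal action-anticipation sketch over pre-extracted per-frame features.
# Illustrative only; not the official EPIC-KITCHENS baseline.
import torch
import torch.nn as nn

NUM_VERBS = 97    # illustrative verb-class count
NUM_NOUNS = 300   # illustrative noun-class count
FEAT_DIM = 1024   # dimensionality of the assumed frame features

class AnticipationModel(nn.Module):
    """Predict the (verb, noun) of the action starting 1 second after the clip."""
    def __init__(self):
        super().__init__()
        # Aggregate the observed frames over time.
        self.gru = nn.GRU(FEAT_DIM, 512, batch_first=True)
        self.verb_head = nn.Linear(512, NUM_VERBS)
        self.noun_head = nn.Linear(512, NUM_NOUNS)

    def forward(self, frame_feats):
        # frame_feats: (batch, time, FEAT_DIM), frames observed *before*
        # the one-second anticipation gap.
        _, hidden = self.gru(frame_feats)
        h = hidden[-1]                      # (batch, 512)
        return self.verb_head(h), self.noun_head(h)

# Example: a batch of 2 clips, 8 observed frames each.
model = AnticipationModel()
verb_logits, noun_logits = model(torch.randn(2, 8, FEAT_DIM))
print(verb_logits.shape, noun_logits.shape)  # torch.Size([2, 97]) torch.Size([2, 300])
```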

Researchers who competed in the most recent challenge include teams from universities spanning the globe, from Cambridge to Georgia to Singapore, as well as corporate research labs such as the AI team from Facebook.

I recently caught up with the research lead for EPIC-KITCHENS, Dr. Dima Damen from the University of Bristol in the United Kingdom, who told me that the competing research teams used a variety of approaches to make their systems better at recognizing and predicting actions from the video.

“There are some people who’ve used audio,” said Damen. “So they’ve used the audio from the video to identify something like opening the tap versus closing the tap. Traditionally, computer vision has relied on just images, like videos without sound.”

“There are some people who looked on a very big set of things, at what happened the past minute, because that’s helping them. And there are people who said, ‘no, I’ll focus on the objects, like where the hand is, where the object is, that’s a better approach.'”
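As a rough illustration of the kind of late fusion Damen describes, the sketch below combines probabilities from a visual classifier and an audio classifier so that sound can disambiguate actions such as opening versus closing the tap. The class list and fusion weight are assumptions for illustration, not taken from any competing team's system.

```python
# Toy late fusion of audio and visual classifiers; weights and classes are
# assumed for illustration, not from the EPIC-KITCHENS entries.
import numpy as np

ACTIONS = ["open tap", "close tap", "cut vegetable"]

def fuse_predictions(visual_logits, audio_logits, audio_weight=0.3):
    """Weighted average of per-class probabilities from the two streams."""
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()
    visual_p = softmax(visual_logits)
    audio_p = softmax(audio_logits)
    fused = (1 - audio_weight) * visual_p + audio_weight * audio_p
    return ACTIONS[int(fused.argmax())], fused

# The visual stream alone can't tell opening from closing the tap here,
# but the audio stream (water starting vs. stopping) tips the balance.
visual = np.array([2.0, 2.0, 0.1])
audio = np.array([0.5, 3.0, 0.1])
print(fuse_predictions(visual, audio))  # ('close tap', fused probabilities)
```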

For the next set of challenges, the group is providing a newly expanded dataset and asking researchers to focus on questions such as “test of time,” which asks whether models trained two years ago still perform well, and “scalability,” which looks at whether more data leads to better models.
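A toy sketch of how a “test of time” comparison might be run: take a model frozen two years ago and measure its accuracy on both the original footage and the newly collected footage. Everything here, the model, the loader, and the labels, is a made-up placeholder rather than the actual challenge evaluation code.

```python
# Hypothetical "test of time" check: compare a frozen model's accuracy on
# original footage vs. newly collected footage. All names are placeholders.
import random
from types import SimpleNamespace

class FrozenModel:
    """Stand-in for a model trained two years ago; always makes the same guess."""
    def predict(self, features):
        return "wash pan"

def load_split(name):
    # Placeholder loader returning made-up clips; real code would read the
    # annotation files for the named split.
    labels = ["wash pan", "open fridge", "cut onion"]
    return [SimpleNamespace(features=None, label=random.choice(labels)) for _ in range(100)]

def accuracy(model, clips):
    """Fraction of clips whose predicted action matches the label."""
    correct = sum(model.predict(clip.features) == clip.label for clip in clips)
    return correct / len(clips)

old_model = FrozenModel()
print({split: accuracy(old_model, load_split(split))
       for split in ("original-footage", "new-footage")})
```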

Part of the expanded data will be a newly broadened dataset called EPIC-KITCHENS-100, in which new footage brings the total hours of video captured to 100. According to Damen, the new video comes from a cohort that includes participants from the previous study (half of the original 32 participants agreed to take part again) as well as eight new participants.

According to Damen, bringing back past participants allows the computer models to better understand kitchen behavior by factoring in what happens with the passage of time, as in real life, while also showing how small changes can impact the results.

“It’s the natural progression, like how life will be,” said Damen. “The question is what happens to computer vision in the meanwhile? So it’s tiny tiny changes, right? It’s a slightly new camera, people might have moved home, and then we’re asking more questions that we believe would be of interest to the community.”

Damen said she hopes the research can help build better technology and systems for humans who need assistance.

“So there are new questions that are being asked which, interestingly, even the assistive technology community is not talking about. As in, if you want to help someone, sometimes you can guess what they’re doing, but many times you can’t.”

Spoon Plus subscribers can read the full transcript of our conversation and watch my video interview with Damen below.

September 16, 2019

FutureProof Retail to Add AI-Based Grocery Recommendations From Halla

FutureProof Retail, which provides mobile checkout technology for grocers, announced today that it will incorporate Halla’s AI-based product recommendations into its service.

FutureProof Retail uses mobile phones to create a line-free checkout process for supermarkets. Unlike other cashierless checkout solutions that install cameras and use computer vision to identify what you purchase, FutureProof moves everything to a whitelabeled app from the retailer on your phone. You scan barcodes to add items to your cart, and when you’re done, you hit a checkout button and a store employee does a quick check of your bag to make sure you aren’t shoplifting. You can see it in action in this video: Express Checkout El Rancho Awareness Video from FutureProof Retail on Vimeo.

With today’s announcement, when people scan an item, Halla’s recommendation engine will kick in to suggest a complementary product right there on the phone’s screen. So if you scanned a bag of Tostitos chips, the app would recommend salsa. But Halla’s recommendations are also dynamic and adapt as you shop. As we wrote when the company raised $1.4 million in May of this year:

…if you are using a grocer’s app with Halla I/O built in, the app will serve up intelligent recommendations as you continue to shop online. Buy salt, it could recommend pepper. Buy salt and noodles and beef, and it might guess that you are making a bolognese and recommend tomato sauce.

Halla developed its recommendations based on data from anonymized grocery transactions, to see which items are typically purchased together, as well as restaurant and menu items (menu descriptions are, after all, typically a list of ingredients). Halla initially made recommendations for restaurants as well, but pivoted away from that to focus on grocery.

It’s important to note that Halla will only provide one recommendation per item. “As of today our recommendations are focused on complementary products with one goal in mind, what is most likely to be purchased,” Halla Co-Founder and CEO Spencer Price told me by phone this week. Price said this focus on a single product is important because shopping is a very emotional experience for people, and one that can’t be junked up with lots of pop-ups on a phone screen and lists to scroll through.

FutureProof works with Fairway Markets in New York and other regional grocery chains, though Price was unable to provide a timeline or location for where Halla’s recommendations will go live through this partnership.

Almost as important as the news itself is the fact that FutureProof has publicly named Halla as a partner. There are other AI-based food recommendation engines out there, like Spoonshot and Analytical Flavor Systems, but they are pretty quiet about their clientele.

I’m not fully convinced about the broad adoption of FutureProof’s cashierless implementation. Manually scanning products and having a human check your bag before you leave a store seems to bring friction into a process that is supposed to be frictionless. Regardless, for those who do use it, FutureProof’s app requirement seems like a good vehicle for Halla’s technology because of its immediacy and visual cues. I imagine the company will look toward announcing partnerships with smart shelf displays like AWM Smart Shelf in the near future.
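For a sense of how a complementary-item recommendation like this can be built from co-purchase data, here is a minimal sketch; it is only an illustration of the general idea, not Halla’s engine, and the baskets are made up.

```python
# Toy complementary-item recommender built from co-purchase counts; an
# illustration of the general idea, not Halla's actual engine.
from collections import Counter, defaultdict
from itertools import combinations

# Made-up anonymized baskets standing in for historical transactions.
baskets = [
    ["tortilla chips", "salsa", "lime"],
    ["tortilla chips", "salsa"],
    ["salt", "pepper"],
    ["salt", "noodles", "ground beef", "tomato sauce"],
]

# Count how often each pair of items appears in the same basket.
co_counts = defaultdict(Counter)
for basket in baskets:
    for a, b in combinations(set(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def recommend(scanned_item, cart):
    """Return the single most frequently co-purchased item not already in the cart."""
    for item, _ in co_counts[scanned_item].most_common():
        if item not in cart:
            return item
    return None

cart = ["tortilla chips"]
print(recommend("tortilla chips", cart))  # "salsa"
```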
