Food AI Bulletin: Google’s Robot Breakthrough & Wendy’s Spanish-Speaking AI Drive-Thru Bot

by Michael Wolf
August 7, 2024
Filed under: Food AI Weekly, News

It’s mid-summer, and while most of Europe (and a good chunk of the American workforce) is taking some well-deserved time off, the AI news hasn’t slowed down one bit.

This week’s Food AI bulletin has updates on a new Google breakthrough in contextual understanding of our homes (including our kitchens), how Gemini is powering new features in Google’s smart home products, Wendy’s release of a Spanish-language edition of its AI drive-thru assistant, Amazon’s AI refresh of Just Walk Out, a new AI-powered digital tool called NOURISH to help those living in food deserts make better food choices, a multiyear Danone and Microsoft deal to upskill employees on AI tools, and a survey showing that South Korean students prefer AI-developed healthy food options over more conventionally developed products.

Here we go:

Google’s New Robot Breakthrough Could Make It Easier to Train Your Robot Butler to Cook or Grab You a Cola

In the past, robots struggled to perform useful tasks autonomously, in part because they didn’t generally understand what they were seeing or how it related to a person’s specific living situation.

That’s begun to change in recent years, thanks in part to significant advances in robot navigation: researchers using tools such as Object Goal Navigation (ObjNav) and Vision Language Navigation (VLN) have enabled robots to understand open-ended commands such as “go to the kitchen.”

More recently, researchers have created systems called Multimodal Instruction Navigation (MIN), which enable robots to understand verbal and visual instructions simultaneously. For example, a person can show a robot an item like a toothbrush and ask it where to put it back, and the robot draws on both the spoken request and the visual context.

Now, Google researchers have taken things a step further by creating what they call Mobility VLA, a hierarchical Vision-Language-Action (VLA) system: a “navigation policy that combines the environment understanding and common sense reasoning power of long-context VLMs and a robust low-level navigation policy based on topological graphs.”

In other words, showing a robot an exploration video of a given environment allows it to understand how to navigate that area. According to the researchers, by combining a walkthrough video with Mobility VLA, they were able to ask the robot to perform previously infeasible tasks such as “I want to store something out of sight from the public eye. Where should I go?” They also write that they achieved significant advances in how easily users can interact with the robot, giving the example of a user recording a video walkthrough of a home with a smartphone and then asking, “Where did I leave my coaster?”
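To make the hierarchy a bit more concrete, here is a heavily simplified sketch of the two-level idea: a high-level model picks a goal frame from the walkthrough video, and a low-level planner finds a route through a topological graph built from those frames. This is purely illustrative, not Google’s implementation; the `vlm_pick_goal_frame` keyword matcher is a made-up stand-in for a real long-context VLM call.

```python
# A highly simplified sketch of the two-level Mobility VLA idea (not Google's code):
# a high-level model picks a goal frame from the walkthrough video, and a
# low-level planner finds a path through a topological graph of frames.
import networkx as nx

def vlm_pick_goal_frame(instruction: str, frame_captions: list[str]) -> int:
    # Stand-in for "ask the VLM which walkthrough frame best matches the request".
    # Here we just do naive keyword matching for illustration.
    for i, caption in enumerate(frame_captions):
        if any(word in caption for word in instruction.lower().split()):
            return i
    return 0

# Topological graph: nodes are walkthrough frames, edges connect adjacent views.
graph = nx.Graph()
graph.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 4)])
frame_captions = ["hallway", "living room", "kitchen counter", "pantry shelf", "garage"]

goal = vlm_pick_goal_frame("go to the kitchen", frame_captions)
path = nx.shortest_path(graph, source=0, target=goal)  # low-level navigation plan
print(f"Drive along frames {path} to reach '{frame_captions[goal]}'")
```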

One of the biggest challenges in making robots useful in a food context is that cooking is complex, requiring multiple steps and contextual understanding of a specific cooking space. One could imagine using this type of training framework to enable more complex and useful cooking robots, or even personal butlers that can actually do something like fetch you a cold beverage.

You can watch a robot using this new Gemini-enabled navigation framework in the video below:

“Your Food Delivery Is Here”: Google Bringing Gemini Intelligence to Google Home

Speaking of Google, this week the company announced a new set of features, powered by its Gemini model, coming to its suite of smart home products. The features were revealed as part of an announcement about a new version of the company’s smart thermostat and its TV streaming device. According to Google, Gemini-powered capabilities are being added across a range of products, including its Nest security cameras and its smart voice assistant, Google Home.

By underpinning its Nest camera products with Gemini, the company says its Nest Cams will go from “understanding a narrow set of specific things (i.e., motion, people, packages, etc.) to being able to more broadly understand what it sees and hears, and then surface what’s most important.” Google says that this will mean that you can ask your Google Home app questions like “Did I leave my bikes in the driveway?” and “Is my food delivery at the front door?”

During a presentation to The Verge, Google Home head of product Anish Kattukaran showed an example video of a grocery delivery driver, accompanied by an alert powered by Gemini:

“A young person in casual clothing, standing next to a parked black SUV. They are carrying grocery bags. The car is partially in the garage and the area appears peaceful.”
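For a rough sense of what that kind of camera query looks like in code, here is a minimal sketch that sends a single still image and a question to a multimodal Gemini model via the public google-generativeai Python SDK. This is only an approximation of the idea, not how Google Home wires things up internally; the API key placeholder, file name, and prompt are invented for illustration.

```python
# Minimal sketch: ask a multimodal Gemini model a question about a camera still.
# Uses the public google-generativeai SDK; Google Home's internal integration differs.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")            # hypothetical API key
model = genai.GenerativeModel("gemini-1.5-flash")

frame = Image.open("front_door_snapshot.jpg")       # hypothetical camera snapshot
question = "Is my food delivery at the front door? Answer briefly."

response = model.generate_content([question, frame])
print(response.text)
```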

After what’s been a somewhat moribund period of feature-set innovation in the smart home over the past couple of years, both Google and Amazon are now tapping into generative AI to create new capabilities that I’m actually looking forward to. By empowering their existing smart home products, like cameras and smart home assistants, with generative AI models, we are finally starting to see leaps in useful functionality that bring the smart home closer to the futuristic promise we’ve been imagining for the last decade.

Wendy’s Pilots Spanish-Language Drive-Thru AI Voice Assistant

This week, Wendy’s showed off new Spanish-language capabilities for its Fresh AI drive-thru voice assistant, according to an announcement sent to The Spoon. The new assistant, which can be seen in the Wendy’s-provided b-roll below, is a conversational AI bot that seamlessly switches to Spanish, clarifies the order, and upsells the meal.

Wendy's Demos Fresh AI Drive-Thru in Español

According to Wendy’s, the company launched its Fresh AI in December of last year and has expanded it to 28 locations across two states.
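As a toy illustration of the language-switching behavior described above (and emphatically not Wendy’s actual Fresh AI stack), a voice assistant might detect the language of an incoming utterance and pick a matching response locale, roughly like this; the greetings and the use of the third-party `langdetect` package are assumptions for the sketch.

```python
# Toy sketch of drive-thru language switching (not Wendy's actual system):
# detect the language of an utterance and respond in the matching locale.
from langdetect import detect  # third-party package, assumed available

GREETINGS = {
    "en": "Welcome to the drive-thru! What can I get for you?",
    "es": "¡Bienvenido! ¿Qué le gustaría ordenar?",
}

def respond(utterance: str) -> str:
    lang = detect(utterance)              # e.g. "es" for Spanish input
    locale = lang if lang in GREETINGS else "en"
    return GREETINGS[locale]

print(respond("Quiero una hamburguesa con queso, por favor"))  # Spanish path
print(respond("Can I get a medium frosty?"))                   # English path
```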

This news comes just a week after Yum! Brands announced plans to expand Voice AI technology to hundreds of Taco Bell drive-thrus in the U.S. by the end of 2024, with future global implementation across KFC, Pizza Hut, and Taco Bell. The technology is currently in over 100 Taco Bell locations, and the company believes it will enhance operations, improve order accuracy, and reduce wait times.

Amazon Previews New Generative AI-Powered Just Walk Out

Last week, Amazon gave a sneak peek at the new AI model that powers its Just Walk Out platform.

In a post written by Jon Jenkins, the VP of Just Walk Out (and, as Spoon readers may remember, the former founder of Meld and head of engineering for the Hestan Cue), we get a peek at the new AI model from Amazon. Jenkins writes that the new “multi-modal foundation model for physical stores is a significant advancement in the evolution of checkout-free shopping.” He says the new model will increase the accuracy of Just Walk Out technology “even in complex shopping scenarios with variables such as camera obstructions, lighting conditions, and the behavior of other shoppers while allowing us to simplify the system.”

The new system differs from the previous system in that it analyzes data from multiple sources—cameras, weight sensors, and other data—simultaneously rather than sequentially. It also uses “continuous self-learning and transformer technology, a type of neural network architecture that transforms inputs (sensor data, in the case of Just Walk Out) into outputs (receipts for checkout-free shopping).”
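For a rough intuition of what “analyzing data from multiple sources simultaneously” with a transformer can look like, here is a minimal PyTorch sketch that projects camera-frame features and shelf weight-sensor readings into a shared token space and attends over both jointly. All dimensions, shapes, and the output head are arbitrary assumptions for illustration; this is not Amazon’s model.

```python
# Illustrative sketch (not Amazon's model): fuse two sensor modalities with a
# transformer encoder so they are processed jointly rather than sequentially.
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    def __init__(self, d_model=64, n_items=100):
        super().__init__()
        self.cam_proj = nn.Linear(512, d_model)    # per-frame camera features
        self.weight_proj = nn.Linear(8, d_model)   # shelf weight-sensor readings
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_items)    # per-item "taken" logits

    def forward(self, cam_feats, weight_feats):
        # Tokens from both modalities are attended over together, not in sequence.
        tokens = torch.cat(
            [self.cam_proj(cam_feats), self.weight_proj(weight_feats)], dim=1
        )
        fused = self.encoder(tokens)
        return self.head(fused.mean(dim=1))        # pooled logits -> receipt items

cam = torch.randn(1, 16, 512)     # 16 camera-frame embeddings (made-up shapes)
weights = torch.randn(1, 10, 8)   # 10 weight-sensor snapshots
logits = MultimodalFusion()(cam, weights)
print(logits.shape)               # torch.Size([1, 100])
```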

Academic Researchers Creating AI Tool to Help Americans Living in Food Deserts Access Better Food Options

A team of researchers led by the University of Kansas and the University of California-San Francisco is tackling the issue of food deserts in the U.S. with an AI-powered digital tool called the NOURISH platform. According to an announcement released this week about the initiative, the group is supported by a $5 million grant from the National Science Foundation’s Convergence Accelerator program and the U.S. Department of Agriculture. The project aims to provide fresh and nutritious food options to the estimated 24 million Americans living in areas with limited access to healthy food. The platform will utilize geospatial analyses and AI to identify optimal locations for new fresh food businesses, linking entrepreneurs with local providers and creating dynamic, interactive maps accessible via mobile devices in multiple languages.
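To illustrate what a simple geospatial scoring step could look like (a hypothetical toy example, not the NOURISH platform’s actual methodology), one might rank candidate sites by how far they sit from existing fresh-food retailers, weighted by the nearby population; all coordinates and population figures below are invented.

```python
# Toy illustration (not NOURISH's methodology): rank candidate sites for a
# fresh-food business by distance from existing retailers, weighted by population.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical data: existing fresh-food retailers and candidate sites (lat, lon, population).
existing = [(39.10, -94.58), (39.05, -94.60)]
candidates = {"Site A": (39.12, -94.55, 12000), "Site B": (39.06, -94.59, 8000)}

def score(lat, lon, population):
    # Favor sites far from the nearest existing retailer and near more people.
    nearest = min(haversine_km(lat, lon, elat, elon) for elat, elon in existing)
    return nearest * population

ranked = sorted(candidates.items(), key=lambda kv: score(*kv[1]), reverse=True)
print("Best candidate:", ranked[0][0])
```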

Danone Announces Multiyear Partnership with Microsoft for AI

An interesting deal focused on bringing AI training to a large CPG brand’s workforce:

Danone has announced a multi-year collaboration with Microsoft to integrate artificial intelligence (AI) across its operations, including creating a ‘Danone Microsoft AI Academy.’ This initiative aims to upskill and reskill around 100,000 Danone employees, building on Danone’s existing ‘DanSkills’ program. Through the AI Academy, Danone plans to enhance AI literacy and expertise throughout the organization, offering tailored learning opportunities to ensure comprehensive training coverage. The partnership will initially focus on developing an AI-enabled supply chain to improve operational efficiency through predictive forecasting and real-time adjustments. Juergen Esser, Danone’s Deputy CEO, emphasized that collaboration is not just about technology but also about fostering a culture of continuous learning and innovation. Microsoft’s Hayete Gallot highlighted the significance of AI in transforming Danone’s operations and the broader industry, aiming to empower Danone’s workforce to thrive in an AI-driven economy.

My main critique of a deal like this is that the training and curriculum come from an AI platform provider with skin in the game, Microsoft. As someone who long ago weaned himself off most of Microsoft’s software products, I’d hate to go through a curriculum that is largely training on Microsoft AI tools rather than broader AI training.

It is a good deal for Microsoft, and a smart focus on upskilling by Danone. Let’s hope Microsoft’s training brings the Danone workforce a broad-based AI tool belt that isn’t entirely walled off within Microsoft’s products.

Survey: Korean Students Prefer AI-Driven Health Foods

While some Americans are becoming more concerned about AI’s impact on our lives, it appears that at least some South Korean students are embracing AI in the development of healthier food options.

According to a recent survey conducted by Korea University Business School, young South Koreans are more likely to trust and purchase healthy functional foods (HFF) developed using artificial intelligence (AI) than those created through traditional methods. The study involved 300 participants and revealed that AI-developed HFFs scored higher in trustworthiness, perceived expertise, positive attitude, and purchase intention. The AI model, NaturaPredicta™, uses natural language processing to analyze botanical ingredients, significantly reducing the time and cost required for new product development. However, researchers noted the potential bias due to the relatively young demographic of the participants and suggested broader studies for more representative results.

