
The Spoon

Daily news and analysis about the food tech revolution


AI

September 9, 2024

IFA Smart Kitchen Roundup: Appliance Brands Try to Tap Into AI Zeitgeist With AI-Powered Food Recognition

This weekend at IFA, several big appliance brands used the show to tell the world that they are all in on AI, mainly through the integration of cameras into their ovens paired with software to enable personalized recipes and customized shopping lists.

Siemens showed off the iQ 700 oven, which has a built-in camera that recognizes over 80 different dishes and automatically adjusts to the ideal cooking settings. This feature allows users to place food, like a frozen pizza, in the oven and hit start for optimized cooking. The updated model offers more food recognition capabilities than previous versions and includes an optional steam function to achieve a crispy crust on baked goods.

Hisense debuted the Hi9 Series Oven, equipped with AI-powered InCamera technology for intelligent baking with over 140 pre-programmed recipes. The company also introduced a smart fridge in the Hisense Refrigerator PureFlat Smart Series, and its description sounds like they’ve been taking cues from Samsung and the Family Hub. The company described the fridge as “a home appliance control center” that “allows you to adjust temperature settings remotely through the ConnectLife app.” The fridge also has AI-powered inventory tracking, though the company was light on details about how the tracking feature works.

Beko also let everyone know that it is trying to jam AI into as many things as possible, including its ovens. As with Hisense and Siemens, the company pointed to camera-assisted cooking in its ovens. From the release: “Beko brings AI-assisted camera technology to its Smart Home ovens, delivering a self-improving cooking experience for optimal results in the kitchen whatever the dish. With food recognition and cooking suggestions across more than 30 different food types, the new Beko Autonomous Cooking technology uses AI to finish cooking according to personalized browning levels.”

Ovens with cameras and food recognition aren’t exactly new, as we’ve been seeing this feature for the better part of a decade since June (RIP) debuted the technology. The appliance industry often displays a herd mentality, and clearly, the herd feels they’ve got to show off their AI chops, even if the technology is somewhat pedestrian at this point.

Electrolux Debuts Taste Assist AI on AEG Line

Not every new AI feature introduced at IFA was tied to integrated cameras and image recognition. Electrolux introduced its AI Taste Assist feature on its AEG line of kitchen appliances. According to the announcement, AI Taste Assist will import recipes from the Internet and send cooking instructions to the oven, but not before recommending ways to enhance and optimize the cook. In an on-stage demo at IFA, Electrolux emphasized that the new feature is meant to overcome what it calls the “cooking gap”: the mismatch between the limitations of existing recipes and the enhanced capabilities of modern cooking equipment. The capability Electrolux primarily promoted to bridge this gap was steam cooking, which Christopher Duncan, Electrolux’s SVP of Taste for Europe, added to a lasagna recipe during an on-stage demo of Taste Assist.

One notable absence at Electrolux’s IFA news conference was GRO, the next-generation modular kitchen concept the company announced in June of 2022. All indications are that the Swedish appliance brand has not made progress in commercializing GRO, likely due in part to the company’s struggles over the past couple of years. The company laid off approximately three thousand employees last year, and earlier this year it saw the departure of its longtime CEO, Jonas Samuelson, as it continued to struggle post-pandemic and in the face of increased competition from Asian appliance brands.

SideChef Unveils AI Feature in App That Creates Step-by-Step Recipes From Photos of Food

SideChef recently introduced RecipeGen AI, a new beta feature that generates step-by-step recipes from a photo of any dish. Users can upload pictures of meals from restaurants or social media, and the app will provide a shoppable recipe based on the image.

From the release: “We are living in exciting times, where every inspiration can become a person’s reality,” says SideChef Founder & CEO, Kevin Yu. “At SideChef we’re excited to be the first to use AI to allow any home cook to make their food inspiration a reality for themselves and loved ones, with a single photo!”

CNET writer Amanda Smith gave the app a test drive and came away with mixed feelings. While the app successfully identified many ingredients, it missed key components in some cases, such as sourdough focaccia and strawberry butter. It also occasionally added ingredients that weren’t in the dish, like bell peppers, leaving Smith feeling the accuracy was somewhat hit or miss.

Smith’s takeaway: Success “depends on the recipe. It has a hard time with nuance and, like other AI tools, tends to make it up if it’s unsure. It’s a handy little app that could be used to inspire new ideas and ingredient concoctions or if you’re in a restaurant and don’t want to bother the waiter with dish details.”

Samsung Food Also Debuts AI-Powered Shopping Lists From Photos

SideChef isn’t the only smart kitchen company debuting photo-to-recipes/shopping lists powered by AI in their apps. At IFA last week, Samsung announced new AI-powered meal planning and food management features. The Vision AI feature now allows users to add ingredients to their Food List by simply taking a photo with their smartphone, expanding beyond the previous limitations of Samsung’s Family Hub smart fridge. This list can be used to suggest recipes, prioritize items nearing expiration, and automatically update after meals are cooked or ingredients are purchased.

Additionally, the company announced a new premium tier called Samsung Food+, a $7/month subscription service that offers personalized weekly meal plans tailored to users’ nutritional goals and dietary preferences and tracks macronutrients and caloric intake. This premium tier also integrates more advanced AI functionality, allowing users to customize recipes and receive a full week of meal recommendations, helping reduce food waste and simplify grocery shopping by making the app a central hub for food management and meal preparation.

September 4, 2024

From Data-Scraping to Discernment Layer: How NotCo’s Giuseppe AI Has Evolved Over the Past Decade

Almost a decade ago, while others experimenting with AI focused on algorithms for trading, diagnostics, or digital advertising, a company called NotCo was experimenting with AI by the name of Giuseppe to create plant-based foods that could match the taste and texture of their animal-based counterparts.

According to Aadit Patel, SVP of AI Product and Engineering at NotCo, the company’s founders (Patel would join a couple of years after the company was founded in 2015) realized early on that, in order to build an AI model that could help create plant-based products mimicking the taste, texture, and functionality of their animal-based counterparts, they would need a whole lot of data.

The problem was, as a startup, they didn’t have any.

When I asked Patel in a recent interview how the company overcame the infamous “cold start” problem—the challenge many embryonic AI models face before they have built large datasets on which to train—he told me they found the solution in a very public place: the U.S. government’s website.

“In the early days, when we had no money, we literally scraped the USDA website,” said Patel. “If you go to the USDA website, there’s a bunch of free data materials for you to use. And I guess no one had actually joined it together to create a comprehensive dataset… So the first versions of Giuseppe were built on that.”

This cobbled-together dataset formed the foundation for Giuseppe’s recommendations, leading to the creation of products like NotMilk, which uses unexpected combinations like pineapple and cabbage to replicate the taste of dairy milk.

As NotCo grew, so did Giuseppe’s capabilities. New analytical labs in San Francisco and Santiago, Chile, gave the company a wealth of new data on which to train its AI. Over time, the model’s ability to create innovative food products also improved.

One of the biggest hurdles in food development is the fragmented nature of the supply chain. Data is scattered across various entities—ingredient suppliers, flavor houses, manufacturers, and research institutions—each holding critical information that contributes to the success of a product. Over time, the company realized that to create an AI capable of building innovative products, it couldn’t rely solely on NotCo’s datasets. Instead, Giuseppe would need to integrate and analyze data from across this complex web of partners.

“What we’ve done with Giuseppe is figure out a way to incentivize this very fragmented ecosystem,” Patel said.

According to Patel, pulling together these disparate datasets from across the product development and supply chain would result in a more holistic understanding of what is needed for a successful product that is better aligned with market realities.

“We realized that if we just made an AI system that’s specific to CPG, we’d be losing out,” said Patel.

Generative AI and Flavor and Fragrance Development

One recent expansion of Giuseppe’s capabilities has been the exploration of new flavors and fragrances using generative AI. While GenAI models like ChatGPT have become infamous for creating sometimes strange and off-putting combinations when designing recipes and new food product formulations, Patel explained that the company has been able to overcome issues with general LLMs by creating what he calls a discernment layer. This layer filters and evaluates the multitude of generated possibilities, narrowing them down to the most promising candidates.

“Discernment is key because it’s not just about generating ideas; it’s about identifying the ones that are likely to succeed in the real world,” Patel said. “With generative AI, you can prompt it however you want and get an infinite amount of answers. The question is, how do we discern which of these 10,000 ideas are the ones most likely to work in a lab setting, a pilot setting, or beyond?”

The discernment layer works by incorporating additional data points and contextual knowledge into the model. For instance, it might consider a formulation’s scalability, cost-effectiveness, or alignment with consumer preferences. This layer also allows human experts to provide feedback and fine-tune the AI’s outputs, creating a process that combines AI’s creativity with the expertise of flavor and fragrance professionals.
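The generate-then-discern pattern described above can be sketched in a few lines. This is an illustrative toy, not NotCo's actual system: the candidate names, scoring fields, and thresholds are all hypothetical, and the real discernment layer presumably draws on far richer models and data.

```python
# Hypothetical sketch of a "discernment layer": generate many candidates,
# then filter and rank them by real-world constraints. All names, fields,
# and thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    predicted_liking: float   # e.g., from a consumer-preference model, 0-1
    cost_per_kg: float        # estimated ingredient cost in dollars
    scalable: bool            # can it be produced beyond the lab?

def discern(candidates, max_cost=5.0, top_k=3):
    """Keep only scalable, affordable candidates, ranked by predicted liking."""
    viable = [c for c in candidates if c.scalable and c.cost_per_kg <= max_cost]
    return sorted(viable, key=lambda c: c.predicted_liking, reverse=True)[:top_k]

# A stand-in for the "10,000 ideas" a generative model might produce.
pool = [
    Candidate("pineapple-cabbage base", 0.82, 3.1, True),
    Candidate("saffron-truffle base", 0.91, 42.0, True),   # filtered: too expensive
    Candidate("pea-oat base", 0.74, 2.2, True),
    Candidate("lab-only aroma blend", 0.88, 1.9, False),   # filtered: not scalable
]
shortlist = discern(pool)
print([c.name for c in shortlist])
```

The point of the pattern is that the generative step stays unconstrained while the discernment step encodes scalability, cost, and market fit; in NotCo's version, human experts also feed back into that filter.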

Early tests have shown positive results. When tasked with creating a new flavor, both the AI and the human perfumers receive the same brief. When the results are compared in A/B tests, Patel says the outputs of Giuseppe’s generative AI were indistinguishable from those created by human experts.

“What we’ve built is a system where AI and human expertise complement each other,” said Patel. “This gives us the flexibility to create products that are not just theoretically possible but also market-ready.”

CPG Brands Still Have a Long Way to Go With AI-Enhanced Food Creation

Nearly a decade after building an AI model with scraped data from the USDA website, NotCo has evolved its AI to create new products through a collaborative approach that results in a modern generative AI model incorporating inputs from its partners up and down the food value chain. This collaborative approach is being used for internal product development and third-party CPG partners, many of whom Patel said approached the company after they announced their joint venture with Kraft Heinz.

“Ever since our announcement with Kraft Heinz and signing a joint venture, there’s been a lot of inbound interest from a lot of other large CPGs asking ‘What can you do for us?’ and ‘What is Giuseppe?’ They want to see it.”

When I told Patel I thought that big CPG brands have come a long way over the past twelve months in their embrace of and planning for AI, he slightly disagreed. He said that while there’s a lot of interest, most big brands haven’t actually transformed their businesses to fully create products with the help of AI.

“I would say there’s strong intent to adopt it, but I think there hasn’t been put forth like a concrete action plan to actually develop the first AI-enabled R&D workforce,” said Patel. “There is room, I think, for new AI tech for formulators, and room for best practices and lessons learned of adopting AI.”

You can watch my full interview with Aadit below.

The NotCo team will be at the Food AI Summit talking about their new efforts using generative AI to develop flavor and fragrance, so make sure to get your tickets here.

NotCo's Aadit Patel Talks About the Evolution of the Company's Food AI Giuseppe

August 28, 2024

Jason Cohen Believes Generative AI-Powered Synthetic Data Will Transform CPG Development

Back in 2007, Jason Cohen was an aspiring political scientist studying in China. As it turned out, locals—and the Chinese government—weren’t too enthusiastic about political science students from America asking lots of questions.

Luckily for Cohen, that initial pushback from Chinese officials was the beginning of a circuitous path that would eventually lead him to tea and, surprisingly, to developing AI tools that help food brands accelerate their path to market. The Spoon recently caught up with Cohen to hear about his journey from the tea markets of Yunnan province to his current role at Simulacra Data.

A Serendipitous Start in the Tea Markets

Shortly after Cohen arrived in China as a young prodigy who had graduated high school early and was sent to study politics, things quickly unraveled.

“Turns out, blonde hair, blue eyes, and bad Chinese don’t really endear you to asking about the government in rural southwestern China,” Cohen said. With his political studies cut short, Cohen was drawn to the local tea markets, where he encountered Ji Hai, a fermentation master at the Communist-era tea conglomerate CNNP. It was here that Cohen’s fascination with tea took root.

“I started hanging out in the tea market, originally out of a mix of interest in practicing Chinese,” he said. “But pretty quickly, I realized there was something more going on here.” This unexpected immersion in tea tasting honed Cohen’s palate and laid the foundation for his future endeavors in understanding consumer preferences.

From there, Cohen went to live at the Makaibari Tea Plantation in India, where he continued to study tea. He then embarked on a long journey from Guangzhou, China, through Tibet and Nepal into India, visiting tea places and picking up odd jobs along the way.

Eventually, Cohen returned to the United States, where he attended Penn State on a political science fellowship. However, as in China, his interest in politics was pushed aside by his passion for tea. “Like everything I touch, it kind of spiraled out of control,” Cohen says, describing how a small research group he started evolved into a full-fledged tea research institute, where he did his studies in sensory science and artificial intelligence. Cohen’s research at the Tea Institute eventually became the basis for his first company, Gastrograph AI.

Gastrograph AI: A Pioneering Venture in Flavor Prediction

In 2011, Cohen took the learnings from the tea institute and used them to found Gastrograph AI. At the time, he thought he could build an AI model to predict consumer preferences based on flavor. Over time, Gastrograph built a proprietary dataset of over 100,000 product evaluations from 35 countries, which Cohen claims allowed the company to accurately forecast which flavors would appeal to specific consumer segments.

“We were building a foundation model for flavor,” Cohen explained.

As CEO, Cohen helped Gastrograph AI secure large CPG brands as customers, where the company’s model helped fine-tune their products to meet the tastes of different demographics. Around this time, Cohen observed that AI researchers began to build large language models using neural networks and deep learning, but he wasn’t yet convinced of the power of generative AI for CPG research.

“I had always been a skeptic of the use of traditional neural networks and deep learning models,” he said. “In consumer research, you deal with small, expensive, and difficult-to-collect data sets. You can’t just throw a deep learning model at it and expect good results.”

The Turning Point

Cohen’s skepticism about generative AI shifted as he observed the rapid advancements in new tools based on LLMs over the past couple of years. One particular tool that caught his eye was Midjourney, the generative AI tool that creates lifelike images with simple prompts.

“The moment that the switch flipped was with the release of MidJourney,” Cohen said. “If you can generate images based on a text prompt, you should be able to do that with tabular business data.”

Once Midjourney led Cohen to reconsider the potential of AI in consumer research, he began to think about how generative AI could enable companies to generate synthetic data for scenarios that would otherwise be too costly or time-consuming to study. “It became very, very clear to me in 2022 that generative AI was going to change what’s possible to achieve in consumer research,” Cohen said.

It wasn’t long after this realization that Cohen stepped back from his role at Gastrograph and founded Simulacra Synthetic Data Studio.

Simulacra: Redefining Consumer Research with Generative AI

According to Cohen, Simulacra uses AI in a significantly different way than what he and his team pioneered at Gastrograph; instead of relying on proprietary data, Simulacra uses a “bring your own data” model. This allows companies to input their existing consumer data into the company’s model, which then uses generative AI to create synthetic data for a wide range of scenarios.

“We built an AI that learns to build a synthetic data generation model on whatever data is uploaded,” Cohen said. He explained that this allows companies to simulate outcomes—from market reactions to new products to optimizing pricing strategies—without extensive market research. “It’s much more mathematically accurate. It’s much more correct for drawing direct statistical inference,” he said.

At the core of Simulacra’s technology is diffusion modeling, which Cohen describes as challenging conventional thinking about AI models. “Synthetic data generation turns a lot of what we think about models on its head,” he said. By treating all variables as both dependent and independent, Simulacra’s AI can create a more holistic and accurate model of consumer behavior.

The Impact of Generative AI on the Food Industry

Cohen believes that generative AI will have a profound impact on the food and consumer goods industries.

“We’ve seen the market fracture, and we’ve seen a greater number of consumer cohorts than there had previously been.”

Cohen believes that in a fast-changing market, traditional market research is often too slow and expensive to keep up with changing consumer preferences. Because of the rising cost of traditional research, companies are forced to rely on smaller studies with less statistical power, making decisions based on incomplete data or gut instinct. Simulacra, Cohen explains, offers companies a way to make data-driven decisions that are both accurate and affordable.

“That’s where Simulacra is really going to make an impact.”

Beyond Digital Twins

According to Cohen, there is a big difference between Simulacra’s approach and traditional digital twin technology. While digital twin technology typically involves creating exact virtual replicas of specific entities or datasets to model and predict behaviors, Simulacra uses survey data—ranging from hundreds to hundreds of thousands of observations—to synthetically generate new data or incorporate new knowledge. He believes this approach allows Simulacra to adjust and predict outcomes with more mathematical accuracy and statistical relevance. Rather than producing textual outputs like those from large language models (LLMs), Simulacra returns quantitative and categorical data that companies can use for rigorous statistical analysis.
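To make the "synthetically generate new data" idea concrete, here is a deliberately simple stand-in. Simulacra reportedly uses diffusion models; this sketch instead fits a multivariate normal to a few numeric survey columns and samples new "respondents," purely to illustrate the input/output shape of synthetic tabular data. The survey columns and values are invented.

```python
# Minimal synthetic-survey-data sketch (NOT Simulacra's method): fit a
# multivariate normal to numeric survey responses, then sample synthetic
# respondents that preserve the original means and correlations.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey: columns = [age, liking_score, price_willingness]
real = np.array([
    [34, 7.1, 4.50],
    [28, 8.2, 5.10],
    [45, 5.9, 3.80],
    [52, 6.4, 4.20],
    [23, 8.8, 5.60],
])

mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)  # captures correlations between columns

# Draw 1,000 synthetic respondents from the fitted distribution.
synthetic = rng.multivariate_normal(mean, cov, size=1000)
print(synthetic.shape)  # (1000, 3)
```

The output is quantitative rows rather than text, which matches Cohen's point that the product of this kind of model is data a statistician can analyze directly; a diffusion model plays the same role but can capture far more complex, non-Gaussian structure.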

Looking Ahead: The Future of AI in Consumer Research

As AI technology evolves, Cohen envisions a future where AI-driven consumer research—including synthetic data—is the norm rather than the exception. He predicts that tools like Simulacra will help companies reduce the high failure rates associated with new product launches by providing more reliable data and insights earlier in the development process.

Despite the transformative potential of this technology, Cohen is quick to dismiss concerns that using AI models and synthetic data will lead to consumer product homogenization.

“The idea that this technology is going to be a convergent force across different product development cycles, I don’t think that’s the case,” he said. Companies will still have different goals, constraints, and consumer segments, leading to diverse outcomes even when using similar technologies.

You can watch Cohen’s full interview below. If you’d like to hear him talk about Simulacra and meet him in person, he will be at the Food AI Summit on September 25th!

The Spoon Talks with Analytical Flavor Systems

August 7, 2024

Food AI Bulletin: Google’s Robot Breakthrough & Wendy’s Spanish-Speaking AI Drive-Thru Bot

It’s mid-summer, and while most of Europe (and a good chunk of the American workforce) is taking some well-deserved time off, the AI news hasn’t slowed down one bit.

This week’s Food AI bulletin has updates on a new Google breakthrough enabling better contextual understanding of our homes (including our kitchens), how Gemini is powering new features in Google’s smart home products, Wendy’s release of a Spanish-language edition of its AI drive-thru assistant, Amazon’s AI refresh of Just Walk Out, a new AI-powered digital tool called NOURISH to help those living in food deserts make better food choices, a multiyear Danone and Microsoft deal to upskill employees on AI tools, and a survey that shows South Korean students prefer AI-generated healthy food options over more conventionally developed products.

Here we go:

Google’s New Robot Breakthrough Could Make It Easier to Train Your Robot Butler to Cook or Grab You a Cola

In the past, robots struggled to perform useful tasks autonomously, in part because they didn’t generally understand what they were seeing or how it related to a person’s specific living situation.

That’s begun to change in recent years, in part because of significant advances in robot navigation: new approaches such as Object Goal Navigation (ObjNav) and Vision Language Navigation (VLN) have allowed robots to understand open-ended commands such as “go to the kitchen.”

More recently, researchers have created systems called Multimodal Instruction Navigation (MIN), which enable robots to understand both verbal and visual instructions simultaneously. For example, a person can show a robot something like a toothbrush and ask it where to return it using both the spoken request and the visual context.

Now, Google researchers have taken things a step further by creating what they call Mobility VLA, a hierarchical Vision-Language-Action (VLA) framework. They describe it as a “navigation policy that combines the environment understanding and common sense reasoning power of long-context VLMs and a robust low-level navigation policy based on topological graphs.”

In other words, showing a robot an exploration video of a given environment allows it to learn how to navigate that area. According to the researchers, using a walkthrough video and Mobility VLA, they were able to have the robot achieve previously infeasible tasks such as “I want to store something out of sight from the public eye. Where should I go?” They also write that they achieved significant advances in how easily users can interact with the robot, giving the example of a user recording a video walkthrough of a home environment with a smartphone and then asking, “Where did I leave my coaster?”
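The two-level idea behind this kind of system can be sketched in miniature: the walkthrough video is reduced to a topological graph of places, a high-level model maps an instruction to a goal node, and a low-level policy simply follows the graph. This toy is not Google's code; the "VLM" here is a stub keyword lookup, and the rooms and edges are invented.

```python
# Toy illustration of hierarchical navigation over a topological graph
# (inspired by, but not reproducing, Mobility VLA). All names are invented.
from collections import deque

# Nodes = places seen in a walkthrough video; edges = traversable links.
graph = {
    "entryway": ["hallway"],
    "hallway": ["entryway", "kitchen", "living_room"],
    "kitchen": ["hallway", "pantry"],
    "pantry": ["kitchen"],
    "living_room": ["hallway"],
}

def fake_vlm_pick_goal(instruction):
    """Stand-in for the long-context VLM: map an instruction to a graph node."""
    keywords = {"snack": "pantry", "cook": "kitchen", "tv": "living_room"}
    for word, node in keywords.items():
        if word in instruction.lower():
            return node
    return "entryway"

def shortest_path(start, goal):
    """Low-level policy: breadth-first search over the topological graph."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

goal = fake_vlm_pick_goal("I want a snack")
print(shortest_path("entryway", goal))
```

In the real system the high-level step is a long-context VLM reasoning over video frames, but the division of labor is the same: semantic understanding up top, robust graph-following underneath.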

One of the biggest challenges around having robots be useful in a food context is that the act of cooking is complex and requires multiple steps and contextual understanding of a specific cooking space. One could imagine using this type of training framework to enable more complex and useful cooking robots or even personal butlers that will actually be able to do something like fetching you a cold beverage.

You can watch a robot using this new Gemini-enabled navigation framework in the video below:

“Your Food Delivery Is Here”: Google Bringing Gemini Intelligence to Google Home

Speaking of Google, this week the company announced a new set of Gemini-powered features coming to its suite of smart home products. The new features were revealed as part of an announcement about a new version of the company’s smart thermostat and its TV streaming device. According to the company, it is adding Gemini-powered capabilities across a range of products, including its Nest security cameras and its smart voice assistant, Google Home.

By underpinning its Nest camera products with Gemini, the company says its Nest Cams will go from “understanding a narrow set of specific things (i.e., motion, people, packages, etc.) to being able to more broadly understand what it sees and hears, and then surface what’s most important.” Google says that this will mean that you can ask your Google Home app questions like “Did I leave my bikes in the driveway?” and “Is my food delivery at the front door?”

During a presentation to The Verge, Google Home head of product Anish Kattukaran showed an example video of a grocery delivery driver accompanied by an alert powered by Gemini:

“A young person in casual clothing, standing next to a parked black SUV. They are carrying grocery bags. The car is partially in the garage and the area appears peaceful.”

After what’s been a somewhat moribund period of feature-set innovation in the smart home over the past couple of years, both Google and Amazon are now tapping into generative AI to create new capabilities that I’m actually looking forward to. By empowering their existing smart home products, like cameras and smart home assistants, with generative AI models, we are finally starting to see leaps in useful functionality that bring the smart home closer to the futuristic promise we’ve been imagining for the last decade.

Wendy’s Pilots Spanish-Language Drive-Thru AI Voice Assistant

This week, Wendy’s showed off new Spanish-language capabilities for its Fresh AI drive-thru voice assistant, according to an announcement sent to The Spoon. The new assistant, which can be seen in the Wendy’s-provided b-roll below, seamlessly switches to Spanish, clarifies the order, and upsells the meal.

Wendy's Demos Fresh AI Drive-Thru in Espanol

According to Wendy’s, the company launched its Fresh AI in December of last year and has expanded it to 28 locations across two states.

This news comes just a week after Yum! Brands announced plans to expand Voice AI technology to hundreds of Taco Bell drive-thrus in the U.S. by the end of 2024, with future global implementation across KFC, Pizza Hut, and Taco Bell. The technology is currently in over 100 Taco Bell locations, and the company believes it will enhance operations, improve order accuracy, and reduce wait times.

Amazon Previews New Generative AI-Powered Just Walk Out

Last week, Amazon gave a sneak peek at the new AI model that powers its Just Walk Out platform.

In a post written by Jon Jenkins, the VP of Just Walk Out (and, as Spoon readers may remember, the former founder of Meld and head of engineering for the Hestan Cue), we get a peek at the new AI model from Amazon. Jenkins writes that the new “multi-modal foundation model for physical stores is a significant advancement in the evolution of checkout-free shopping.” He says the new model will increase the accuracy of Just Walk Out technology “even in complex shopping scenarios with variables such as camera obstructions, lighting conditions, and the behavior of other shoppers while allowing us to simplify the system.”

The new system differs from the previous system in that it analyzes data from multiple sources—cameras, weight sensors, and other data—simultaneously rather than sequentially. It also uses “continuous self-learning and transformer technology, a type of neural network architecture that transforms inputs (sensor data, in the case of Just Walk Out) into outputs (receipts for checkout-free shopping).”

Academic Researchers Creating AI Tool to Help Americans Living in Food Deserts Access Better Food Options

A team of researchers led by the University of Kansas and the University of California-San Francisco is tackling the issue of food deserts in the U.S. with an AI-powered digital tool called the NOURISH platform. According to an announcement released this week about the initiative, the group is supported by a $5 million grant from the National Science Foundation’s Convergence Accelerator program and the U.S. Department of Agriculture. The project aims to provide fresh and nutritious food options to the estimated 24 million Americans living in areas with limited access to healthy food. The platform will utilize geospatial analyses and AI to identify optimal locations for new fresh food businesses, linking entrepreneurs with local providers and creating dynamic, interactive maps accessible via mobile devices in multiple languages.

Danone Announces Multiyear Partnership with Microsoft for AI

An interesting deal focused on bringing AI training to a large CPG brand’s workforce:

Danone has announced a multi-year collaboration with Microsoft to integrate artificial intelligence (AI) across its operations, including creating a ‘Danone Microsoft AI Academy.’ This initiative aims to upskill and reskill around 100,000 Danone employees, building on Danone’s existing ‘DanSkills’ program. Through the AI Academy, Danone plans to enhance AI literacy and expertise throughout the organization, offering tailored learning opportunities to ensure comprehensive training coverage. The partnership will initially focus on developing an AI-enabled supply chain to improve operational efficiency through predictive forecasting and real-time adjustments. Juergen Esser, Danone’s Deputy CEO, emphasized that collaboration is not just about technology but also about fostering a culture of continuous learning and innovation. Microsoft’s Hayete Gallot highlighted the significance of AI in transforming Danone’s operations and the broader industry, aiming to empower Danone’s workforce to thrive in an AI-driven economy.

My main critique of a deal like this is that it puts employee training and curriculum in the hands of an AI platform provider with skin in the game. As someone who long ago weaned myself off most of Microsoft’s software products, I’d hate to sit through a curriculum that is largely Microsoft AI tools training rather than broader AI training.

It is a good deal for Microsoft, and a smart focus on upskilling by Danone. Let’s hope Microsoft’s training brings a broad-based AI tool belt to the Danone workforce rather than one walled off inside Microsoft’s products.

Survey: Korean Students Prefer AI-Driven Health Foods

While some Americans are becoming more concerned about AI’s impact on our lives, it appears that at least some South Korean students are embracing AI in the development of healthier food options.

According to a recent survey conducted by Korea University Business School, young South Koreans are more likely to trust and purchase healthy functional foods (HFF) developed using artificial intelligence (AI) than those created through traditional methods. The study involved 300 participants and revealed that AI-developed HFFs scored higher in trustworthiness, perceived expertise, positive attitude, and purchase intention. The AI model, NaturaPredicta™, uses natural language processing to analyze botanical ingredients, significantly reducing the time and cost required for new product development. However, researchers noted the potential bias due to the relatively young demographic of the participants and suggested broader studies for more representative results.

July 11, 2024

Food AI Weekly Bulletin: Is AI-Washing a Problem For Food Tech?

Welcome to this week’s edition of the Food AI Weekly Bulletin, our weekly wrapup that highlights important happenings at the intersection of AI and food. If you’d like to sign up to get this bulletin delivered to your inbox, you can do so here.

Is AI-Washing a Problem for Food Tech? Some predict one-third of startups will feature AI as part of their core product by the end of this year. Is stretching the truth about how truly AI-powered your product is a problem for food tech?

Gatorade’s AI Hydration Coach. Is Gatorade’s AI-powered hydration coach a marketing trick or another sign that wellness copilots are beginning to pop up everywhere?

Researchers Build RhizoNet, Which Uses Next-Gen Neural Network to Analyze Plant Roots. A new tool called RhizoNet could provide a big leap in understanding root growth.

Number of Retailers Using AI Doubles In The Past Year. AI adoption is skyrocketing, and retail looks to be one of the fastest-growing sectors.

CaperCart Continues Roll-Out of AI-Enabled Shopping Carts. Speaking of AI at retail, Instacart is hoping to do its part to spread the technology through its computer-vision enabled smart shopping carts.

Mineral AI Winds Down. Somewhat surprisingly, Mineral AI announced last week they would wind down and distribute their technology through license to partners such as Driscoll’s.

Mars Using AI to Develop 50 Product Concepts a Day. Big CPG is getting the hang of this generative AI thing.

eGrowcery Launches 70 Thousand AI-Generated Recipes. Are recipes set to become almost all AI-generated?

Is AI-Washing a Problem for Food Tech?

Anytime a new technology captures the public zeitgeist, brands invariably jump on the bandwagon. After all, that’s what brands do, and it’s incumbent on any good marketer to capitalize on any buzzy association possible.

Where it becomes problematic is when a company claims a key competitive differentiator through a given technology and is stretching the truth. Being an early adopter of, say, cloud computing, Web3, or, yes, AI is worth mentioning or even making the center of a marketing campaign (eyeroll-inducing as that can be), but when it’s on your pitch deck and you’re exaggerating just how core it is to your product, it can be deceptive, at least according to the SEC.

And now the regulatory body has started to take a stand against what it calls ‘AI-washing’. While the SEC’s focus is on investment firms claiming to deliver investor value through AI-powered decision-making, it’s clear the agency is policing the broader use of such claims by companies looking to benefit by association.

According to European investment fund OpenOcean, by the end of the year, one-third of startups will feature AI in their pitch decks. My guess is many will have legitimate claims, but as we’ve seen over the past year in food tech, sometimes such claims seem a stretch. It’s a logical move as a founder, particularly in a market where raising capital has become exceedingly difficult. But as everyone jumps on the AI bandwagon, it’s worth it for startups to be cautious about their claims, and those looking to invest or partner with these companies should do their due diligence to see if there’s real substance behind the pitch deck.

Gatorade’s AI Hydration Coach

A couple of weeks ago, at Cannes Lions, the advertising industry’s biggest international confab and awards gala, Gatorade debuted its generative AI-powered app that acts as a hydration coach.

From Marketing Dive:

Gatorade’s AI Hydration coach app applied AI “to educate users about the best ways to stay hydrated through an assistant that draws on decades of historical data from the sport beverage brand’s research institute. The concept leans into the idea that AI has the power to democratize services that were once exclusive, giving everyday consumers the type of expert guidance usually reserved for elite athletes.”

It’s an interesting concept – after all, the future may well be filled with AI-powered co-pilots, assistants that sit on our proverbial shoulders whispering in our ears to coach us through life – but I’m not sure how seriously consumers will take brand-activated assistants. Millennials and Gen Z are all for tooling up when it comes to their health, but they have well-tuned authenticity detectors, and brands aren’t always the most trusted advisors, in part because it’s easy to question their motivation.

Researchers Build RhizoNet, Which Uses Next-Gen Neural Network to Analyze Plant Roots

According to a story in Interesting Engineering, the Lawrence Berkeley National Laboratory’s Applied Mathematics and Computational Research (AMCR) and Environmental Genomics and Systems Biology (EGSB) Divisions have developed a new neural network tool called RhizoNet to analyze plant roots.

From IE:

It paved the way for scientists to accurately measure root growth and biomass, making it much easier and faster to study plant roots in the lab. 

In simple terms, RhizoNet automatically interprets images of plant roots, which makes it easier and faster to understand how roots grow and how they respond to different conditions.

According to the report, the system will enable much faster and more accurate analysis of root biomass, especially compared to more manual (i.e. human-driven) analysis of plant roots.

The researchers noted they tested the system’s effectiveness by analyzing Brachypodium distachyon, a grass species, as they deprived it of nutrients over five weeks.

“We’ve made a lot of progress in reducing the manual work involved in plant cultivation experiments with the EcoBOT, and now RhizoNet is reducing the manual work involved in analyzing the data generated,” stated Peter Andeer, a research scientist in EGSB and a lead developer of EcoBOT.

“This increases our throughput and moves us toward the goal of self-driving labs,” he added.

Number of Retailers Using AI Doubles In The Past Year

According to the Food Industry Association’s just-published annual study on the state of food retail, The Food Retailing Industry Speaks 2024, retailers are increasingly turning to AI to optimize parts of their business. The report, which surveyed large and small retail chains, found that 41% of respondents are using AI in parts of their business, double the share of just a year ago.

A doubling of AI usage at retail is not surprising given the proliferation of various tools, from supply-chain optimization to in-store computer vision systems for real-time inventory and theft reduction analysis. My guess is that by next year, most retailers will be deploying AI in some parts of their operations.

CaperCart Continues Roll-Out of AI-Enabled Shopping Carts

Instacart has added another round of stores to the list of those using its CaperCart smart shopping carts. The company announced that Price Chopper and McKeever’s Market & Eatery have each added the “AI-powered grocery carts” to store locations in Missouri. This comes a week after Wakefern announced it was increasing the number of storefronts using the CaperCart.

As we’ve noted for the past couple of years, the company is increasingly looking to diversify beyond its personal shopper business, focusing on building technology platforms that help the non-Amazon grocery retailers of the world transform themselves in an increasingly digitized grocery shopping industry.

Mineral AI Winds Down

Whenever a company graduates from Google’s moonshot factory X to become an operating company under parent company Alphabet, most assume said company will be a success. But the reality is these graduates, for whatever reason, sometimes just don’t get the traction required as an independent company and eventually wind down.

The latest example of this is Mineral AI. Mineral, a ‘computational agriculture’ company that graduated from X with significant fanfare last year, announced last week it was winding down. In a post by Mineral CEO, Elliott Grant, he says the company’s technology will live on through a license to Driscoll’s and other “leading agribusinesses where they can have maximum impact.”

It’s hard to decipher the decision-making variables at a giant like Alphabet, but it’s been clear since the beginning of the year that the company has been paring back its moonshot initiatives, both through layoffs and in how much capital it expends to fund them. A decade ago, X was home to a bunch of out-there concepts, such as balloon-powered broadband, but as cheap capital has dried up and the company funnels resources toward keeping up in the LLM space race, those days look to be coming to a close.

Mars Using AI to Develop 50 Product Concepts a Day

While it’s easy to caricature big food brands as giant behemoths slow to adapt to new innovation and technology, it’s becoming increasingly clear that many are quickly building internal capabilities leveraging AI to accelerate core product development processes.

The latest indication is the news that Mars has developed its own generative AI-powered tool called Brahma to develop up to fifty product concepts a day. According to a report in Consumer Goods Technology, Brahma “uses data from consumer insights studies the CPG conducted last year involving 80,000 consumers and 800,000 consumption moments across 11 countries.” 

eGrowcery Launches 70 Thousand AI-Generated Recipes

eGrowcery, a white-label grocery e-commerce platform company, announced today that it has launched an AI-powered personalized recipe offering for its SaaS customers. The company said the new feature uses AI to tailor recipe suggestions based on regional preferences and store inventory, and it hopes to boost shopper satisfaction, sales, and market share for retailers with a suite of over 70 thousand personalized recipes.

The move by eGrowcery is an indication that shoppable recipes are an obvious early candidate for a category consumed by generative AI. It will be interesting to watch how readily consumers take to these offerings, given that so many home cooks nowadays draw inspiration from other sources (such as social media) for recipe and meal ideas. However, we’ve begun to hear that the influencer-recipe space is struggling to keep up with the rapidly changing landscape resulting from Google’s push toward AI-powered summaries rather than link-outs to other sites.

In other words, AI is beginning to sink its hooks ever deeper into the food planning and inspiration space from seemingly every angle.

Our Favorite AI-Generated Food Images of The Week

Over on our Spoon community Slack, some of our community members dropped AI-generated artwork.

First this tasty looking food from Min Fan. You can see all the images from Min on our Slack.

And we also liked this futuristic food creation facility from Emma Forman:

If you would like your AI artwork featured on The Spoon, drop it into our Spoon Slack.

July 9, 2024

ConverseNow Acquires Drive-Thru Voice AI Specialist Valyant AI

Today, ConverseNow announced that it has acquired drive-thru conversational AI specialist Valyant AI. According to the announcement about the deal – the terms of which were not disclosed – the entire Valyant AI team will be retained post-acquisition.

Until now, ConverseNow has largely succeeded in winning restaurant chain business through call center voice automation. With this deal, ConverseNow brings a drive-thru AI specialist into the fold, jumpstarting its efforts to penetrate the growing market for AI-voice-assisted drive-thru technology.

Valyant AI, which came quickly out of the gate with a win at a TechCrunch automation and AI pitch competition in 2019, has since run trials of its Holly voice AI at CKE and Checkers. The company’s technology is differentiated from ConverseNow’s in that its AI runs locally on proprietary hardware (ConverseNow’s runs in the cloud). It also includes a generative AI training application called AI Employee Assist, which restaurant staff can use to ask questions and receive instant responses. According to the announcement, the Valyant AI technology will be integrated into both the drive-thru and tables by the end of Q3.

The deal is another signal that the restaurant voice AI and automation market is heating up. Other providers, such as Presto and SoundHound, have been growing their installed bases for both phone and drive-thru automation, while large chains like Wendy’s have been developing their own voice AI technology. McDonald’s, which had been running a trial of a voice assistant powered by IBM, recently announced it was sunsetting the initiative and would evaluate other voice AI technologies and select another solution by year’s end.

Beyond the obvious employee cost-saving benefits of voice automation, QSRs also see opportunities to increase total order size through upselling capabilities. According to ConverseNow, its technology can detect conversational nuances and personalize orders based on contextual data and information gained in real time during the order process.

June 27, 2024

The Food AI Weekly Bulletin: Will AI Fakery Make Restaurant Reviews a Thing of the Past?

Welcome to the Food AI Weekly Bulletin, our new weekly wrapup that highlights important happenings at the intersection of AI and food.

Nowadays, there’s a lot of noise around how AI is changing food, so we decided to create a weekly brief to bring you what’s important, cut through the noise, and deliver actionable insights. If you’d like to sign up for our weekly Food AI Weekly, you can do so here.

Highlights

Is AI Ruining Restaurant Reviews? A new study shows people cannot distinguish between real and AI-generated reviews.

AI Food Art Is Everywhere (And It’s Not Great for Freelancers) Generative AI tools like Midjourney and DALL-E are revolutionizing food imagery, but what does this mean for freelancers and creatives who traditionally provided these services?

First, Al Michaels. Next, How About an AI-powered Anthony Bourdain? The news of Al Michaels allowing AI to replicate his voice has almost everyone freaking out, but what does it mean for the future of AI-generated avatars of famous food personalities?

Swallowing A Robot. Endiatx has developed the Pillbot, a tiny robot that can be swallowed to explore the gastrointestinal tract, potentially revolutionizing diagnostics and personalized nutrition.

Food & Nutrition Centric LLMs Could Be an Investible Opportunity. VCs see potential in industry-specific AI models, particularly in the domains of biology, chemistry, and materials, as these specialized LLMs could offer unique investment opportunities.

Brightseed’s Forager AI Finds Novel Bioactives. Cranberry giant Ocean Spray teams up with Brightseed to uncover new bioactive compounds in cranberries.

Our Favorite AI Food Art of the Week. We’ll be making this a regular feature. If you’d like your art featured, submit it on our Spoon Slack.

We’re going to be exploring all of this at our Food AI Summit in September. Join us, won’t you? Super Early Bird pricing expires at the end of this month.

Is AI Ruining Restaurant Reviews?

What happens when humans can’t tell real restaurant reviews from fake ones? The restaurant industry has begun asking itself this question as a tidal wave of fake AI reviews floods online sites.

According to Yale professor Balazs Kovacs, humans are already losing their ability to discern the real from the fake. Kovacs recently unveiled the results of a study demonstrating AI’s ability to convincingly mimic human-written restaurant reviews. For his test, Kovacs fed Yelp reviews into GPT-4 and then asked a panel of human test subjects whether they could tell the difference. At first, the results generated by GPT-4 were too perfect, so Kovacs prompted the model to insert typos and imperfect grammar, at which point the panel could no longer reliably tell the reviews apart.

While this raises obvious concerns about the authenticity of online reviews and the trustworthiness of consumer-generated content, it shouldn’t be surprising. Figure 01’s human-like speech tics were creepy, but mostly because of how human its awkward conversation seemed. With typos and sub-par grammar—in other words, what we see every day on social media—it makes sense that AI-generated reviews seemed more human.

One potential workaround is to use AI to detect and flag fake content, but early tests show that even AI can’t tell what is real and what is fake. Another suggestion is to require reviewers to have purchased a product before reviewing it (similar to Amazon’s labels for verified purchasers) and apply the same idea to restaurants. My guess is that this will be the best (and potentially last) line of defense against the coming tidal wave of AI reviews.
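The verified-purchase idea is simple enough to sketch: a review is accepted only if there is a matching purchase record for that reviewer and restaurant. The function and data shapes below are purely illustrative, not any real review platform’s API.

```python
# Hypothetical sketch of verified-purchase review gating. A review is only
# accepted when the (reviewer, restaurant) pair appears in purchase records.

def accept_review(reviewer_id: str, restaurant_id: str,
                  purchases: set[tuple[str, str]]) -> bool:
    """Accept a review only if the reviewer has a purchase on record."""
    return (reviewer_id, restaurant_id) in purchases

purchases = {("alice", "cafe-1"), ("bob", "cafe-2")}
assert accept_review("alice", "cafe-1", purchases)        # verified purchaser
assert not accept_review("mallory", "cafe-1", purchases)  # no purchase on record
```

The gate doesn’t prove a review is human-written, but it at least raises the cost of flooding a venue with AI-generated reviews, since each one would require a real transaction.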

AI Food Art Is Everywhere (And It’s Not Great for Freelancers)

One early application of generative AI, as it applies to food, is the creation of images. Midjourney, DALL-E, and other tools allow us to create instant realistic images with a few sentences. As a result, we’ve seen CPGs, food tech software companies, and restaurant tech startups jump on the generative art trend.

While that isn’t necessarily good news for actual artists (this WSJ article is a must-read on the impact of AI on freelancers and creatives), these tools have democratized professional-ish photos and art in the same way Canva made professional-style graphics and presentations available to anyone.

One company that’s benefitted significantly is Innit. The company, which early in its life focused on hiring celebrity chefs like Tyler Florence and spending tens of thousands of dollars on photo shoots for a single recipe, is now whipping up recipe imagery instantly with generative AI for its Innit FoodLM.

While most Internet-savvy marketing types at food brands, restaurants, and other food-related businesses have at least learned to dabble in generative AI prompt engineering, that hasn’t stopped some from trying to build a business out of it. Lunchbox created an AI food image generator built on DALL-E over a year ago (the website has since gone dark), and just this week I got pitched on a new AI-powered food image generator that wants to charge for what is essentially a user interface for managing prompts to an underlying model (most likely Midjourney or GPT-4). There’s likely a short lifespan for these types of services; my guess is most marketing folks will learn to prompt popular image generators like Midjourney directly.

First, Al Michaels. Next, an AI-Powered Anthony Bourdain?

The Internet freaked out yesterday when news broke that Al Michaels has agreed to let an AI copy his voice, and rightly so. First off, it’s creepy. Second, this is exactly the kind of thing the Hollywood writers’ and actors’ guilds struck over for so long, so I’m guessing the Hollywood creative community isn’t exactly happy with Al. And finally, it goes to show that if you throw enough money at us humans, the temptation to cave to the bots will be too much.

My guess is we’ll eventually see AI-generated avatars of famous chefs. All it would take is for the estate of Julia Child or Anthony Bourdain to get a good enough offer and it won’t be long before we hear (and maybe see) their avatars.

Swallowing A Robot

According to VentureBeat, Endiatx has developed a tiny swallowable robot equipped with cameras, sensors, and wireless communication capabilities that can traverse your body. The robot, called Pillbot, allows doctors to examine the gastrointestinal tract and can be used for both diagnostic and therapeutic purposes.

The company’s CEO, Torrey Smith, has swallowed 43 of these Pillbots, including one live on stage, which can be seen here. If this technology actually works (and those pills can be made smaller because, holy cow, that’s literally a big pill to swallow), it’s not hard to imagine these being used to dial in and optimize personalized nutrition regimens.

Food & Nutrition Centric LLMs Could Be an Investible Opportunity

Business Insider asked some VCs what bores them about AI and what excites them. Not surprisingly, they talked a lot about how hard it will be for startups to break through in foundational large language models, where big players like OpenAI and Google dominate. And like any good VCs looking at an early market, they talked up picks and shovels.

Even as investors shift their focus to promising AI infrastructure startups, there may still be some opportunities for new LLM startups to win, especially when they’re trained for specific industries, explained Kahini Shah, a principal at Obvious Ventures.

“We’re excited about what we call Generative Science at Obvious, i.e, large, multi-modal models trained in domains such as biology, chemistry, materials,” she said.

Brightseed’s Forager AI Finds Novel Bioactives

Brightseed, a company that uses AI to accelerate bioactive and food compound discovery, announced that it has (in partnership with Ocean Spray) used its Forager AI to uncover novel bioactive compounds in cranberries. Forager identified multiple bioactives, such as terpenes, which Brightseed believes hold significant potential for human health. These findings, based on in silico analyses, will undergo further clinical validation and will be presented at the American Society of Nutrition’s NUTRITION 2024 conference.

This accelerated discovery of health-positive compounds is another example of the AI acceleration effect I wrote about yesterday. Things are beginning to move exponentially faster at every stage of the food value chain, which over time means our basic understanding of the rules underpinning what we do (such as food product development) gives way to entirely new rules, rewritten in large part by AI.

Our Favorite AI Food Image of the Week: Hungry Monkey

We like looking at AI-generated food art and figured we’d show you some of our favorites on a weekly basis. 

If you’d like to submit your AI-created food art (or you’ve found a piece you think we should feature), drop the image and the source/attribution (preferably a link) on our Spoon Slack.

June 26, 2024

‘All The Rules Are Changing’: Why AI is Accelerating Change to Every Part of the Food Business (and Beyond)

This week, I attended the Fancy Food Show in New York City. It’s long been one of my favorite food conferences, mostly because I just love walking around and sampling all the great food. I mean, who wouldn’t?

While the fantastic food samples on the show floor are reason enough for me to get on a plane to NYC, the real reason I was there was to give a keynote talk on how AI is changing the food business.

Granted, the crowd at Fancy Food isn’t your typical Silicon Valley audience, the types that get excited about technology for its own sake. Instead, these are usually successful small to medium-sized businesses making anywhere from $1 million to $250 million annually by selling your favorite hot sauce or healthier crackers.

In other words, the good stuff.

Since these are food brands first and not technology companies, I kept my talk straightforward. I discussed how AI has long been used in the food business, how new forms of AI (particularly generative AI) are advancing rapidly, and how, over the next decade, every rule governing their business—from sales and supply chain to customer acquisition and product development—will change dramatically.

If you just rolled your eyes, I understand; I’ve long been skeptical of hyperbolic warnings about ‘disruption,’ and by now, most of us are tired of hearing how AI is a big deal. But that didn’t stop me because, despite all the talk, I still think most people underestimate the significant difference AI will make in our daily lives in the next decade. In other words, most of us are unprepared for how dramatically the rules governing business and everyday professional life will change.

This belief was reinforced last week when I caught up with Samantha Rose, a long-time consumer-product entrepreneur. She transitioned from being an editor at a Yale magazine and an award-winning poet to building a highly successful housewares startup, which she sold in 2021 to Pattern Brands. Since then, she started a third-party logistics and business services company and is now raising funds for a new venture that buys distressed consumer product brands to turn them around. And, somewhere along the way, she was featured in a Chase card commercial.

In short, Sam has mastered the modern rules of today’s business. Yet, when I asked her about AI, she said, “I wish I could take a year off to study and become an expert on AI because I feel like all the rules are changing.”

I thought if someone as savvy as Sam feels the need to go back to school on AI, what chance do the rest of us have?

After my talk, I led a panel on AI, where we delved deeper into how businesses may change and how small food business entrepreneurs should prepare.

One theme that emerged from the session is that growing food brands need to pay attention to how consumer buying behavior will be radically impacted by AI. Imagine a future where we have our own AI copilots telling us what to eat, where to get the best deals, and more. In a world where everyone is guided by an AI or multiple AIs, how will that change consumer behavior when it comes to buying food?

This is already starting to happen and will undoubtedly be widely adopted in a decade.

And then there’s the purposeful creation of AI-derived information sent to consumers with the intent of changing their buying behavior. We’re seeing it in restaurants as AI reviews flood review sites, and they’re already good enough that consumers can’t tell the difference.

As a publisher, I can’t help but think about how Google deemphasizing website search results and pushing their own AI-generated answers will impact not only my business but also the type of information consumers consume to steer their behavior.

Bottom line: Every direction we look, every industry and its associated value chains are changing faster than ever before. The rules are changing. Unfortunately, most of us can’t take the time to study and will all have to learn on the fly.

I’ll share the suggestions I made for these businesses at the Fancy Food Show in a follow-up post.

February 7, 2024

Check Out NXP’s Presence-Sensing Cooktop Demo Powered by On-Chip AI

We’re still sifting through some of the cool product demos from CES last month, and one that caught my eye was NXP’s demo of a presence-sensing cooktop powered by an embedded MCU. According to the company, the system uses a neural processing unit that runs the machine-learning and facial-recognition algorithms on the device rather than relying on cloud-based compute. The demo featured a device control interface from Diehl Controls.

As can be seen in the video below (taken by TIRIAS Research Principal Analyst Francis Sideco), NXP spokesperson Thomas Herbert shows that you can turn on the burner with either touch or motion sensing (including with cooking mitts on). From there, the system uses facial recognition to detect whether a person is near the cooktop. According to Herbert, presence detection comes into play when there is a critical state, such as a pan getting too hot or water boiling over. If the system detects a critical state and no one is standing at the stove, it will shut off the heat and can send an alert via a Matter-enabled device (Matter is an open-source smart home connectivity standard) to tell the cook that the system has intervened on their behalf.
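The decision logic described in the demo boils down to a simple rule per control cycle: intervene only when a critical state coincides with nobody being nearby. This is an illustrative sketch of that rule, not NXP’s actual firmware; the function and field names are invented for clarity.

```python
# Illustrative sketch (not NXP's firmware) of the presence-aware shutoff
# logic: if the cooktop detects a critical state (overheated pan, boil-over)
# and presence detection sees no one nearby, cut the heat and queue an alert
# for a connected (e.g., Matter-enabled) device.

def cooktop_step(critical_state: bool, person_nearby: bool) -> dict:
    """Return the actions the cooktop should take this control cycle."""
    if critical_state and not person_nearby:
        return {"heat_on": False, "send_alert": True}
    # Otherwise leave control to the human at the stove.
    return {"heat_on": True, "send_alert": False}

# Boil-over with no one at the stove: intervene and notify.
assert cooktop_step(critical_state=True, person_nearby=False) == \
    {"heat_on": False, "send_alert": True}
# Cook is present during the critical state: no automatic intervention.
assert cooktop_step(critical_state=True, person_nearby=True) == \
    {"heat_on": True, "send_alert": False}
```

The interesting engineering point is that both inputs (the critical-state classifier and the presence detector) run on the embedded NPU, so this loop can keep working without any cloud connection.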

CES 2024 - INVITE ONLY- Part 3 of 3 NXP Autonomous Home Showcase Based on Matter

The demo is interesting to me in a couple of contexts. One is that, as we’ve written about here on The Spoon, the number one cause of home fires is cooking mishaps, and enabling your stovetop or other cooking appliances to recognize both anomalies and the presence of a person could be a real gamechanger, akin to the dip in automobile accidents in recent years thanks to the widespread adoption of blind-spot detection systems in modern cars.

The second interesting context is that it could become a significant assistive technology for aging-in-place scenarios, particularly for seniors who become more forgetful as they age. One of the key determinants of whether folks can continue to live independently is their ability to feed themselves, and my guess is that “blind spot” detection like this could extend many seniors’ ability to live independently by years.

January 29, 2024

Chris Young: Generative AI Will Provide Big Payoffs in Helping Us Cook Better, But Overhyping It Will Burn Some Folks

Chris Young has never been shy about providing his thoughts about the future of cooking.

Whether it was on stage at the Smart Kitchen Summit, on his YouTube channel, or a podcast, he’s got lots of thoughts about how technology should and eventually will help us all cook better.

So when I caught up with him last week for the Spoon Podcast, I asked him how he saw things like generative AI impacting the kitchen and whether it was necessary for big appliance brands to invest in building out their internal AI competencies as part of their product roadmaps for the next decade. You can listen to the entire conversation on The Spoon podcast.

I’ve excerpted some of his responses below (edited slightly for clarity and brevity). If you’d like to listen to the full conversation, you can click play below or find it on Apple Podcasts or wherever you get your podcasts.

On the resistance by some to using advanced technology to help us cook better:

Young: “A lot of people are focused on going backward in the kitchen. They want to go back to cooking over charcoal and cooking over fire. That can be fun, but if you look back at what it was really like in the 19th century, the kitchen was not a fun place to be.”

“The modern kitchen is much healthier and much safer. And it does a better job of cooking our food. But we’ve kind of stalled, in my opinion, for the last couple of decades of really innovating and creating a compelling vision of what the future of the kitchen can be. I think the idea that our appliances are too stupid to know when to turn the temperature up or down to cook my food correctly is bizarre in the modern world where sensitive, high-quality sensors are cheap. And we have unlimited compute and AI now to answer a lot of these questions that humans struggle with, but I don’t see the big appliance companies or the incumbents doing this on their own. So, my small contribution was to create a tool that measures temperature and makes it very easy for people to do things with those measurements.”

On why it’s important to create a vision for the future of a technology-powered kitchen:

Young: “My criticism of a lot of people in this space is that they haven’t sold a vision of what the future of your kitchen could be like that resonates with people, that feels human, that makes it a place I want to go, that is forward-looking rather than backward-looking. The kitchen of the 1950s, the kitchen of the 1920s, feels more human, feels more relatable, and I think people want that. That’s not to say you can’t create a forward-looking vision of a kitchen where it’s easier to cook food, easier to bring people together, and have everything work out right, but nobody’s really creating that vision.”

Combustion’s thermometer runs its machine-learning calculations on a chip inside the thermometer itself rather than in the cloud, where much AI compute happens. Young explains how, and why, the team made that possible:

Young: “One of the crazy challenges was this is some pretty hardcore math. I think even we initially thought, ‘Oh, we’re gonna have to run this on the cloud, where we essentially have unlimited compute to run these fairly sophisticated algorithms.’ But we have some very clever software and firmware people on our team who have a lot of experience doing these kinds of hardcore machine-learning algorithms. And we were able to basically figure out some clever techniques to get the stuff running on the thermometer. The benefit is that it means the thermometer is always the ground truth; if you lose a connection, if you walk too far away, or if Bluetooth gets interrupted, or if any of that happens, the thermometer doesn’t miss a beat. It’s still measuring temperatures, it’s still running its physics model. So as soon as you reconnect, the results are there, and nothing has been lost.”
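Young’s description of an always-on local model that syncs after a dropped connection can be sketched in a few lines. Everything below is an illustrative toy, not Combustion’s actual firmware: the “physics model” is a crude stand-in, and all names are hypothetical.

```python
from collections import deque

class ThermometerSim:
    """Toy model of a probe that keeps computing locally and
    syncs buffered results when a connection returns."""

    def __init__(self):
        self.buffer = deque()   # readings logged while disconnected
        self.connected = False

    def predict_core_temp(self, surface_temp, ambient_temp):
        # Stand-in for the on-device physics/ML model: a crude
        # weighted blend, just to show where inference happens.
        return 0.7 * surface_temp + 0.3 * ambient_temp

    def tick(self, surface_temp, ambient_temp, timestamp):
        # Inference runs on-device every tick, connected or not,
        # so the probe remains the ground truth.
        estimate = self.predict_core_temp(surface_temp, ambient_temp)
        self.buffer.append((timestamp, estimate))
        if self.connected:
            return self.sync()
        return []

    def sync(self):
        # On reconnect, flush everything logged during the gap;
        # the app catches up and nothing has been lost.
        sent = list(self.buffer)
        self.buffer.clear()
        return sent

probe = ThermometerSim()
probe.tick(80.0, 20.0, 0)            # disconnected: reading buffered
probe.tick(85.0, 20.0, 1)            # still buffered
probe.connected = True
backlog = probe.tick(90.0, 20.0, 2)  # reconnect: all three readings flush
print(len(backlog))                  # 3
```

The point of the design, in Young’s telling, is that the app is only a viewer; the probe never depends on the connection to keep its model running.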

Young on the benefit of generative AI:

Young: “In the short term, AI as it’s being marketed is going to be disappointing to a lot of people. It’s going to burn some people in the way that IoT burned some people. But there’s going to be meaningful things that come out of it.”

“…When I was playing with ChatGPT 3.5 and I would ask it cooking questions, the answers were mostly garbage, as judged from my chef perspective. When GPT-4 came out and I started asking some of the same questions, the answers were actually pretty good. I might quibble with them, but they wouldn’t completely fail you, and they weren’t garbage. And if you modified the prompt to rely on information from Serious Eats, ChefSteps, or other reputable sources, all of a sudden it might have given you a different answer, though not necessarily a better one. In many cases, what people want is a good-enough answer. Building those kinds of things into the cooking experience, so that when you run into a problem, or you’re confused about what something means, something like the Crouton app, or the Combustion app, or a website can quickly give you a real-time, good-enough answer that actually solves your problem and keeps you moving forward and getting dinner done. Those, I think, will be really, really big payoffs, and that stuff’s coming.”
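The prompt modification Young mentions, steering a general model toward reputable sources, is essentially a templating pattern. A minimal sketch, with an illustrative source list and no particular LLM API assumed:

```python
# Hypothetical grounding pattern: the source names come from Young's
# examples, but the template itself is illustrative.
TRUSTED_SOURCES = ["Serious Eats", "ChefSteps"]

def cooking_prompt(question: str) -> str:
    """Wrap a cooking question in instructions that bias the model
    toward named, reputable sources instead of free association."""
    sources = ", ".join(TRUSTED_SOURCES)
    return (
        "You are a cooking assistant. Base your answer on techniques "
        f"published by {sources}. If those sources conflict or do not "
        "cover the question, say so rather than guessing.\n\n"
        f"Question: {question}"
    )

prompt = cooking_prompt("My hollandaise broke. How do I rescue it?")
print(prompt)
```

The resulting string would then be sent to whatever model the app uses; the grounding instruction is what nudges the answer from “different” toward “good enough.”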

Young on whether big food and appliance brands should invest in building their own internal AI competency:

Young: “It’s hard to give advice when that’s not my business. But I have a few observations from having worked with these companies. It’s very hard to sustain a multi-year effort on something like an AI software feature. That culture doesn’t exist at these companies; thinking about the long-term payoff of software tends not to be a strength. And so while they have the resources to go do this, the willingness to make those investments and sustain them, for years and years, and to learn and iterate, that hasn’t proven to be their greatest strength.”

“I think that is kind of why there was an opportunity for Combustion, and for a company like Fisher Paykel (ed note: Fisher Paykel has integrated the Combustion thermometer to work with some of its appliances) to recoup the millions and millions of dollars we’ve invested in AI and in our algorithms team. (Fisher Paykel) could maybe build the hardware, but doing the software and investing in the hardcore machine-learning research, I think it would be very hard for them to sustain that effort for three or four years when they’re only going to sell maybe 12,000 to 25,000 units a year. We’re in a much better position because we can spread it across the entire consumer base.”

“And so I think you’re going to see more partnerships emerging between the big appliance companies that can provide the infrastructure: the appliance that’s got ventilation over it, that’s plugged into a 240-volt, 40- or 50-amp circuit. They’re going to be very good at that. If they basically open up those appliances as a platform that third-party accessories like the predictive thermometer can take advantage of, I think over the long term they actually take less risk but get a market benefit.”

“Because as more small companies like Combustion can get wins by integrating with these appliances inexpensively and easily, making our products more useful, I think you’ll start to see a lot of things like this: the rice cooker no longer has to be a dedicated appliance that you put in a cabinet. Instead, it can be a special pot that goes on the stove, one that communicates with the stove to do what a rice cooker does, which is turn the power on and off at the right time. And a lot of these small appliances can migrate back to the cooktop, or back into the oven.”

If you want to hear the full conversation with Chris Young, you can click play below or find the episode on Apple Podcasts or wherever you get your podcasts.

January 25, 2024

Jersey Mike’s Jumps on the AI Voice-Order Bandwagon as It Deploys SoundHound to 50 Locations

The AI voice-bot customer service wave is coming at us fast, and the latest to roll out the technology is sub-sandwich chain Jersey Mike’s.

SoundHound announced this week that Jersey Mike’s will deploy its voice AI ordering system to allow customers to place orders by phone. According to SoundHound, the AI has been trained on the entire Jersey Mike’s menu and can handle order placement and answer queries about menu items, specials, store information, and more, all while ensuring orders are taken accurately and efficiently.

The video below shows SoundHound’s Jersey Mike’s integration in action.

DEMO: Jersey Mike's Automated Phone Ordering System - Powered by SoundHound AI

Like many voice AIs, it sounds about 90% natural in action, but there’s still an uncanny-valley stiltedness to it. Listening to the order, the system seems to handle the natural conversation flow deftly, but I have to wonder just how nimble it is with the various dialects and slang that can be a natural part of incoming phone orders.

SoundHound has been around since the mid-aughts, focusing on automotive installations and music until its move into customer service interaction layers for restaurants and retail over the past couple of years. Last month, the company announced the acquisition of SYNQ3 Restaurant Solutions. At the time of the deal, SoundHound said the merger extended “its market reach by an order of magnitude to over 10,000 signed locations” while “accelerating the deployment of leading-edge generative AI capabilities to the industry.”

SoundHound isn’t alone in chasing fast-food chains with voice AI customer service platforms. Par Technology, ConverseNow, and OpenCity also offer third-party solutions, while some players, like McDonald’s, have brought voice AI in-house through acquisition.

After a decade of pushing toward digital ordering kiosks and new ways to serve customers in-store and through apps, many big chains, including Jersey Mike’s, have moved AI-powered customer service layers to the top of their priority lists.

January 18, 2024

January AI’s New App Uses Generative AI to Predict How Food Will Impact Your Blood Sugar

If you’ve been diagnosed with a metabolic health issue, you might have used a continuous glucose monitor (CGM) at some point to track the impact of your food intake on your blood sugar. However, as of March 2023, only 2.4 million people in the U.S. used a CGM. Because of this technology’s relatively low adoption rate, the vast majority of people who have diabetes or are at risk of metabolic health issues lack access to real-time insights into the impact different foods may have on their glucose levels.

January AI aims to change this with its latest innovation: a free app that performs predictive analysis on the impact of various foods on blood sugar. The company, which unveiled its newest tool at CES last week, has developed an AI-powered app that analyzes meal photos and offers users immediate feedback on glucose impacts, macros, and healthier meal alternatives.

January says its app uses generative AI to automatically generate accurate food titles and estimates of ingredients and ingredient quantities within complex meals.

“It uses three kinds of generative AI to tell you your blood sugar response,” said Noosheen Hashemi, CEO of January, speaking at The Spoon’s CES Food Tech Conference last week. “It uses our own generative AI for glucose, and then it uses a vision generative AI to pick what’s in the food, and then it uses that language model to give it a title.”
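The three-stage pipeline Hashemi describes (a vision model to identify what’s in the food, a language model to title it, and January’s own model to predict the glucose response) might be wired together roughly like this. Every function and number below is a hypothetical stand-in; January AI has not published this API:

```python
from dataclasses import dataclass

@dataclass
class MealAnalysis:
    title: str
    ingredients: list
    predicted_glucose_delta_mg_dl: float

def identify_ingredients(photo_bytes: bytes) -> list:
    # Stand-in for the vision model that "picks what's in the food".
    return ["white rice", "grilled chicken", "broccoli"]

def title_meal(ingredients: list) -> str:
    # Stand-in for the language model that generates a food title.
    return " with ".join(ingredients[:2]).title()

def predict_glucose(ingredients: list, user_profile: dict) -> float:
    # Stand-in for January's glucose model, which the company says is
    # trained on wearable, demographic, and user-reported data.
    carb_score = sum(10.0 for i in ingredients if "rice" in i)
    return carb_score * user_profile.get("sensitivity", 1.0)

def analyze_meal(photo_bytes: bytes, user_profile: dict) -> MealAnalysis:
    # Chain the three models: photo -> ingredients -> title + prediction.
    ingredients = identify_ingredients(photo_bytes)
    return MealAnalysis(
        title=title_meal(ingredients),
        ingredients=ingredients,
        predicted_glucose_delta_mg_dl=predict_glucose(ingredients, user_profile),
    )

result = analyze_meal(b"<photo>", {"sensitivity": 1.2})
print(result.title)  # White Rice With Grilled Chicken
```

The structure, not the toy math, is the point: each generative model handles one sub-problem, and personalization enters through the user profile fed to the glucose model.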

According to the company, its AI-driven predictions are based on millions of data points, including wearable data, demographic information, and user reports. The company says this approach enables the app to provide personalized glucose level estimates and insights, making metabolic health management more accessible and actionable.

“It’s as simple as scanning a food,” said Hashemi. “You can also scan a barcode. You can also do a search. And we can tell you all the macros: its total calories, how much fiber, protein, fat, and carbs it has. And we can also show your blood sugar.”

According to Hashemi, the company’s platform can be customized and trained for specific users by taking data from a wearable such as a smartwatch, a person’s glucose monitor, or even food logs. With that data, the app can create highly customized predictions around a person’s biomarkers and dietary preferences.

“One out of three people in America has pre-diabetes, and 90% of them don’t know it,” said Hashemi. “And one out of nine people has diabetes, and 20% of those people don’t know it. So blood sugar is something we should all be managing, but we just don’t know that we should.”

Given the increasing popularity of GLP-1 medications, my guess is that more Americans will start to consider how their diet affects their blood sugar in the coming years. And, even if they don’t use a glucose monitor or get a prescription for a medication like Ozempic, increased awareness will push many to use apps like this one to help them better understand how a given food will impact their blood sugar and overall health.

You can hear Hashemi discussing the app and showing a demo in the video below.

January AI CEO Talks About New Generative AI App at CES
© 2016–2025 The Spoon. All rights reserved.