The Spoon

Daily news and analysis about the food tech revolution

Generative AI

October 8, 2024

NotCo Has Created A Generative AI for Flavor and Fragrance That Can Create Unique Formulations With Text Prompts

Food-tech company NotCo has developed a novel generative AI model, the Generative Aroma Transformer (GAT), capable of creating new flavor and fragrance formulations. The model, which the company presented at the Food AI Summit last month, could become a disruptive tool across consumer goods markets such as food, personal care, home care, and beauty.

The company’s Senior VP of Product, Aadit Patel, discussed the new model in a post on LinkedIn today, highlighting how GAT can translate textual prompts into unique chemical formulations. “The system intakes your prompt, such as ‘an ocean scent on a breezy summer day on a tropical island’ to create a novel chemical formulation of that scent in one-shot.” From there, the model generates a corresponding fragrance formula. According to Patel, the model is built on a “natural language to chemical composition” framework, tokenizing molecules to create a system capable of understanding and generating novel combinations.

NotCo says early tests have been extremely positive. The company’s research indicates that GAT’s abilities rival those of human perfumers: in blind smell tests, fragrances created by GAT proved indistinguishable from those crafted by human experts. That finding signals a potentially significant shift for an industry in which only 600 certified perfumers exist worldwide.

How it Works

According to research presented at the Food AI Summit, the core of GAT’s functionality lies in its ability to understand and model complex interactions between volatile molecules. The model is trained on an extensive dataset of historical fragrance formulations and the molecular structures of volatile compounds. This training enables GAT to decipher the subtle relationships between different molecules and predict how they will interact to create specific aroma profiles.

The model employs a dual-system transformer network comprising an encoder and a decoder. The encoder processes the user’s prompt (with inputs such as a top note of cherry candy, a middle note of vanilla, and a bottom note of cherry), capturing the desired aroma profile. This is passed to the decoder, which generates a corresponding sequence of tokens representing the fragrance’s molecular structure.
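NotCo hasn’t published GAT’s architecture or code, but the encoder-decoder pattern described above is a standard one. Below is a minimal, illustrative PyTorch sketch of that pattern; the vocabulary sizes, dimensions, and the very idea of a “molecular token” vocabulary are assumptions for demonstration, not details from NotCo.

```python
import torch
import torch.nn as nn

# Minimal sketch of the encoder-decoder pattern described above.
# Vocabulary sizes and dimensions are illustrative, not NotCo's.
PROMPT_VOCAB = 10_000  # tokens for aroma descriptors ("cherry", "vanilla", ...)
MOL_VOCAB = 4_096      # hypothetical tokens for molecular building blocks

class AromaTransformer(nn.Module):
    def __init__(self, d_model=256, nhead=8, layers=4):
        super().__init__()
        self.prompt_embed = nn.Embedding(PROMPT_VOCAB, d_model)
        self.mol_embed = nn.Embedding(MOL_VOCAB, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=layers, num_decoder_layers=layers,
            batch_first=True,
        )
        self.to_mol_token = nn.Linear(d_model, MOL_VOCAB)

    def forward(self, prompt_ids, mol_ids):
        # Encoder reads the aroma prompt; decoder emits molecule tokens.
        src = self.prompt_embed(prompt_ids)
        tgt = self.mol_embed(mol_ids)
        return self.to_mol_token(self.transformer(src, tgt))

model = AromaTransformer()
prompt = torch.randint(0, PROMPT_VOCAB, (1, 12))       # encoded note prompt
partial_formula = torch.randint(0, MOL_VOCAB, (1, 5))  # tokens generated so far
logits = model(prompt, partial_formula)                # next-token scores
print(logits.shape)  # torch.Size([1, 5, 4096])
```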

GAT leverages the atomic structure of volatiles to generate novel formulations. Each molecule is represented as a graph, with atoms described by valence, degree, hydrogen count, hybridization, formal charge, and atomic number. These details are then translated into numerical representations and fed into a Graph Neural Network (GNN) model, which creates a unique vector representing each molecule. Similar vectors indicate similar molecules, allowing GAT to identify and utilize molecular structures with desired aromatic properties.
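To make the molecule-to-vector idea concrete, here is a toy sketch that computes the atom descriptors listed above with RDKit and runs a single untrained GCN-style propagation step in NumPy. Everything here, from the 16-dimensional embedding to the example molecules, is an illustrative assumption; NotCo’s actual GNN is trained and far more sophisticated.

```python
import numpy as np
from rdkit import Chem  # pip install rdkit

def atom_features(atom):
    # The per-atom descriptors named above, as one numeric vector.
    return [
        atom.GetTotalValence(),
        atom.GetDegree(),
        atom.GetTotalNumHs(),
        int(atom.GetHybridization()),
        atom.GetFormalCharge(),
        atom.GetAtomicNum(),
    ]

def molecule_vector(smiles):
    mol = Chem.MolFromSmiles(smiles)
    X = np.array([atom_features(a) for a in mol.GetAtoms()], dtype=float)
    # Adjacency with self-loops, row-normalized (one GCN-style step).
    A = Chem.GetAdjacencyMatrix(mol).astype(float) + np.eye(mol.GetNumAtoms())
    D_inv = np.diag(1.0 / A.sum(axis=1))
    W = np.random.default_rng(0).standard_normal((X.shape[1], 16))  # toy weights
    H = np.tanh(D_inv @ A @ X @ W)  # propagate features along bonds
    return H.mean(axis=0)           # pool atoms into one molecule vector

# Similar vectors should indicate structurally similar molecules.
vanillin = molecule_vector("O=Cc1ccc(O)c(OC)c1")
benzaldehyde = molecule_vector("O=Cc1ccccc1")  # almond/cherry aroma
cos = vanillin @ benzaldehyde / (
    np.linalg.norm(vanillin) * np.linalg.norm(benzaldehyde))
print(f"cosine similarity: {cos:.2f}")
```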

The potential impact of NotCo’s GAT is substantial. Developing new formulations for flavors and fragrances has traditionally been a time-consuming and resource-intensive process, often requiring weeks or months of expert work. If GAT can achieve the same outcome in mere seconds, it could significantly reduce flavor and fragrance development costs.

If you’d like to learn more about NotCo’s new generative AI tool for developing flavors and fragrances, NotCo’s head of machine learning, Francisco Clavero, and one of their key flavor and fragrance scientists, Cindy Sigler, will be our guests at the next Food AI Co-Lab on October 17th. You can register for this virtual event here.

April 1, 2024

When It Comes to Making Generative AI Food Smart, Small Language Models Are Doing the Heavy Lifting

Since ChatGPT debuted in the fall of 2022, much of the interest in generative AI has centered on large language models. Large language models, or LLMs, are the giant, compute-intensive models powering the chatbots and image generators that seemingly everyone is using and talking about nowadays.

While there’s no doubt that LLMs produce impressive, human-like responses to most prompts, the reality is that most general-purpose LLMs fall short on deep domain knowledge in areas like, say, health, nutrition, or cooking. Not that this has stopped folks from using them, sometimes with bad or even laughable results, when asking for a personalized nutrition plan or a recipe.

LLMs’ shortcomings in producing credible, trusted results in these specific domains have led to growing interest in what the AI community calls small language models (SLMs). What are SLMs? Essentially, they are smaller, simpler language models that have fewer parameters, require less computational power, and are often specialized in their focus.

From The New Stack:

Small language models are essentially more streamlined versions of LLMs, in regards to the size of their neural networks, and simpler architectures. Compared to LLMs, SLMs have fewer parameters and don’t need as much data and time to be trained — think minutes or a few hours of training time, versus many hours to even days to train a LLM. Because of their smaller size, SLMs are therefore generally more efficient and more straightforward to implement on-site, or on smaller devices.

The shorter development/training time, the domain-specific focus, and the ability to run on-device are all benefits that could ultimately be important in all sorts of food, nutrition, and agriculture-specific applications.
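To give a sense of how lightweight this can be in practice, here is a hedged sketch of loading a small model locally with Hugging Face’s transformers library. The distilgpt2 model (roughly 82 million parameters) stands in here for a food-specialized SLM, which is an assumption on my part; a real product would use a small model fine-tuned on curated nutrition and food data.

```python
from transformers import pipeline  # Hugging Face transformers

# distilgpt2 (~82M parameters) stands in for a food-specialized SLM;
# a real product would fine-tune a small model on curated nutrition data.
slm = pipeline("text-generation", model="distilgpt2")

result = slm(
    "High-fiber breakfast ideas for someone avoiding added sugar:",
    max_new_tokens=40,
    do_sample=True,
)
print(result[0]["generated_text"])
```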

Imagine, for example, a startup that wants to create an AI-powered personalized nutrition coach. Key features of such an application would include an understanding of the nutritional building blocks of food, awareness of personal dietary preferences and restrictions, and instant, on-demand access at all times of day. A cloud-based general-purpose LLM would likely fall short here, partly because it wouldn’t have up-to-date information on all the various food and nutrition building blocks, and partly because it tends to be more susceptible to hallucination (as anyone who’s prompted an AI chatbot for recipe suggestions knows).

A number of startups in this space, such as Spoon Guru, are creating focused SLMs trained on specific nutrition and food data. Others, like Innit, are building their own food- and nutrition-specific data sets and an associated AI engine, which they’re terming Innit LLM validator models; these essentially put food and nutrition intelligence guardrails around an LLM, making sure its output is good information and doesn’t suggest, as Innit CEO Kevin Brown has suggested is possible, “Thai noodles with peanut sauce when asking for food options for someone with a nut allergy.”

Combining LLMs for general conversational competency with SLMs for domain-specific knowledge around a subject like food is the best of both worlds: it pairs the seemingly realistic interaction capability of an LLM trained on vast swaths of data with the savant-y, nerdish specificity of a language model focused on the domain you care about.

Academic researchers have created a model for fusing LLMs and SLMs to deliver this peanut-butter-and-chocolate combination. They call it BLADE, which “enhances Black-box LArge language models with small Domain-spEcific models. BLADE consists of a black-box LLM and a small domain-specific LM.”
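The BLADE paper’s actual interface isn’t reproduced here, but the general pattern is easy to sketch: a small domain model supplies trusted facts that get folded into the prompt of a general-purpose LLM. In the stub below, domain_slm_answer and llm_generate are hypothetical stand-ins, not BLADE’s API.

```python
# Hedged sketch of the LLM + domain-SLM pattern; the two helpers below
# are hypothetical stand-ins, not BLADE's actual API.

def domain_slm_answer(question: str) -> str:
    """A small, food-specialized model supplies trusted domain facts."""
    # Stubbed; a real system would query a fine-tuned SLM here.
    return "Peanut sauce contains peanuts, a nut allergen, so exclude it."

def llm_generate(prompt: str) -> str:
    """A general-purpose LLM handles the fluent conversation."""
    # Stubbed; a real system would call a hosted or local LLM here.
    return f"[LLM answer conditioned on]: {prompt!r}"

def answer(question: str) -> str:
    facts = domain_slm_answer(question)
    prompt = (
        f"Trusted domain knowledge: {facts}\n"
        f"User question: {question}\n"
        "Answer conversationally without contradicting the domain knowledge."
    )
    return llm_generate(prompt)

print(answer("Suggest dinner options for someone with a nut allergy."))
```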

As we envision a food future of highly specific specialized AIs helping us navigate personal and professional worlds, my guess is that the combination of LLM and SLM will become more common in building helpful services. Having SLM access on-device, such as through a smartwatch or phone, will be critical for speed of action and accessibility of vital information. Most on-device SLM agents will benefit from persistent access to LLMs, but hopefully, they will be designed to interact independently – even with temporarily limited functionality – when their human users disconnect by choice or through limited access to connectivity.

January 18, 2024

January AI’s New App Uses Generative AI to Predict How Food Will Impact Your Blood Sugar

If you’ve been diagnosed with a metabolic health issue, you might have used a continuous glucose monitor (CGM) at some point to track the impact of your food intake on your blood sugar. However, as of March 2023, only 2.4 million people in the U.S. used a CGM, and because of this technology’s relatively low adoption rate, the vast majority of people with diabetes or at risk of metabolic health issues don’t have access to real-time insights into the impact different foods may have on their glucose levels.

January AI aims to change this with its latest innovation: a free app that performs predictive analysis on the impact of various foods on blood sugar. The company, which unveiled its newest tool at CES last week, has developed an AI-powered app that analyzes meal photos and offers users immediate feedback on glucose impacts, macros, and healthier meal alternatives.

January says its app uses generative AI to automatically generate accurate food titles and estimates of ingredients and ingredient quantities within complex meals.

“It uses three kinds of generative AI to tell you your blood sugar response,” said Noosheen Hashemi, CEO of January, speaking at The Spoon’s CES Food Tech Conference last week. “It uses our own generative AI for glucose, and then it uses a vision generative AI to pick what’s in the food, and then it uses that language model to give it a title.”

According to the company, its AI-driven predictions are based on millions of data points, including wearable data, demographic information, and user reports. The company says this approach enables the app to provide personalized glucose level estimates and insights, making metabolic health management more accessible and actionable.

“It’s as simple as scanning a food,” said Hashemi. “You can also scan a barcode. You can also do a search. And we can tell you all the macro, its total calories, how much fiber, protein, fat, and carbs it has. And we can also show your blood sugar.”

According to Hashemi, the company’s platform can be customized and trained for specific users by taking data from a wearable such as a smartwatch, a person’s glucose monitor, or even food logs. With that data, the app can create highly customized predictions around a person’s biomarkers and dietary preferences.
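January AI hasn’t published its models, and its predictions draw on millions of data points. Purely to illustrate the shape of the problem, here is a deliberately simplistic sketch in which meal macros and a personal sensitivity parameter map to a predicted glucose rise; every coefficient below is invented.

```python
# Toy illustration only: January AI's real models are trained on millions
# of data points. Every coefficient below is invented for demonstration.

def predict_glucose_rise(carbs_g, fiber_g, fat_g, protein_g, sensitivity=1.0):
    """Sketch: net carbs drive the spike; fat and protein blunt it."""
    net_carbs = max(carbs_g - fiber_g, 0.0)
    blunting = 1.0 / (1.0 + 0.01 * (fat_g + protein_g))
    return sensitivity * 2.0 * net_carbs * blunting  # mg/dL, invented scale

# The same meal for two hypothetical users with different sensitivities,
# the kind of personalization wearable and biomarker data could drive.
meal = dict(carbs_g=60, fiber_g=8, fat_g=15, protein_g=20)
print(predict_glucose_rise(**meal, sensitivity=0.8))  # ~62 mg/dL
print(predict_glucose_rise(**meal, sensitivity=1.4))  # ~108 mg/dL
```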

“One out of three people in America has pre-diabetes, and 90% of them don’t know it,” said Hashemi. “And one out of nine people has diabetes, and 20% of those people don’t know it. So blood sugar is something we should all be managing, but we just don’t know that we should.”

Given the increasing popularity of GLP-1 medications, my guess is that more Americans will start to consider how their diet affects their blood sugar in the coming years. And, even if they don’t use a glucose monitor or get a prescription for a medication like Ozempic, increased awareness will push many to use apps like this one to help them better understand how a given food will impact their blood sugar and overall health.

You can hear Hashemi discussing the app and showing a demo in the video below.

January AI CEO Talks About New Generative AI App at CES

May 9, 2023

Wendy’s Announces FreshAI, a Generative AI for Drive-Thrus Powered by Google Cloud

Today Wendy’s announced it is working with Google Cloud to develop a generative AI solution for drive-thrus called Wendy’s FreshAI.

The new solution, which is powered by Google Cloud’s generative AI and large language model technology, will go into a pilot test next month at a Wendy’s company-operated store in Columbus, Ohio. According to the announcement, the new tool will be able to have conversations with customers, understand made-to-order requests, and generate responses to frequently asked questions.

In contrast to general-purpose consumer interfaces for LLMs such as ChatGPT and Google Bard, Wendy’s FreshAI will be walled off and tailored to interacting with customers ordering food at a Wendy’s drive-thru. According to the company, the tool will have access to data from Wendy’s menu and will be programmed with rule- and logic-based conversation guardrails, ensuring that the bot doesn’t spout off about politics or culture when prompted but focuses solely on helping customers get their burger order right.
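Neither Wendy’s nor Google Cloud has published FreshAI’s implementation, but the menu-grounding-plus-guardrails pattern the company describes can be sketched in a few lines. The menu items, prices, and keyword-based off-topic check below are toy assumptions; a production system would pair an LLM with structured guardrails rather than string matching.

```python
import re

# Sketch of the menu-grounding-plus-guardrails pattern described above.
# Menu, prices, and keyword filter are toy assumptions, not Wendy's or
# Google Cloud's implementation.
MENU = {
    "dave's single": 5.99,
    "spicy chicken sandwich": 6.49,
    "small frosty": 1.99,
}
OFF_TOPIC = {"politics", "election", "religion", "celebrity"}

def respond(utterance: str) -> str:
    text = utterance.lower()
    words = set(re.findall(r"[a-z']+", text))
    if OFF_TOPIC & words:  # refuse anything outside the ordering domain
        return "I can only help with your order. What would you like?"
    ordered = [item for item in MENU if item in text]  # ground in menu data
    if not ordered:
        return "Sorry, I didn't catch a menu item. Could you repeat that?"
    total = sum(MENU[item] for item in ordered)
    return f"Got it: {', '.join(ordered)}. That's ${total:.2f}. Anything else?"

print(respond("Can I get a small frosty and a spicy chicken sandwich?"))
print(respond("What do you think about the election?"))
```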

The deal is a nice pick-up for Google, which has been on its heels to a degree since OpenAI released ChatGPT last fall. Google’s strength in enterprise platforms through its Google Cloud infrastructure services could give it a leg up on other generative AI platforms, even though OpenAI beat the company to the fast-food drive-thru lane through its partnership with Presto.

Wendy’s says that it will use the learnings from the pilot to inform future expansion of the platform to other Wendy’s drive-thrus.

Where Is This All Going?

The quick-service restaurant industry has been embracing digital transformation in a big way over the past few years as a remedy for its continued struggle to find qualified workers, and the fast-food drive-thru is probably one of the roles that could be largely automated with a well-tuned generative AI model. I can envision a hybrid model that uses a gen-AI as the first point-of-contact customer interaction layer but backstops it with a remote carbon-based life form (i.e., a human) who can step in at the first hint of something out of the ordinary. Think of it as a gen-AI/Bite Ninja hybrid model (while Bite Ninja hasn’t announced any AI solution partnerships for its cloud labor platform, I would be surprised if those conversations aren’t already underway).

March 24, 2023

Instacart Announces ChatGPT Plugin to Power Conversational Shoppable Recipes

The wave of ChatGPT integration announcements is just getting started, and this week Instacart made its first move to tap into the generative AI zeitgeist with the debut of its ChatGPT Instacart plugin.

The plugin, explained in detail in a blog post by chief architect JJ Zhuang on the company’s website, allows Instacart users to ask ChatGPT for recipe advice and guidance in natural language. From there, the OpenAI-powered chatbot responds with a recipe suggestion, followed by a prompt telling the user that Instacart can turn the recipe into a shopping list.
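Instacart hasn’t documented the plugin’s internals beyond the blog post, but the core recipe-to-shopping-list handoff is straightforward to sketch. The recipe structure, pantry check, and ingredient fields below are illustrative assumptions, not Instacart’s actual API.

```python
# Sketch of the recipe-to-shopping-list handoff. The recipe structure and
# pantry check are illustrative assumptions, not Instacart's actual API.
recipe = {
    "name": "Fish Tacos",
    "ingredients": [
        {"item": "white fish fillets", "qty": 1.0, "unit": "lb"},
        {"item": "corn tortillas", "qty": 8, "unit": "count"},
        {"item": "lime", "qty": 2, "unit": "count"},
        {"item": "cabbage", "qty": 0.5, "unit": "head"},
    ],
}
pantry = {"lime"}  # items the shopper already has on hand

def to_shopping_list(recipe: dict, pantry: set) -> list:
    """Drop pantry items; what remains is ready to hand to a cart API."""
    return [i for i in recipe["ingredients"] if i["item"] not in pantry]

for ing in to_shopping_list(recipe, pantry):
    print(f"{ing['qty']} {ing['unit']} - {ing['item']}")
```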

In the video below, you can watch a fish taco recipe magically transformed into a shoppable recipe via ChatGPT.

News of the new plugin comes after OpenAI namechecked Instacart earlier this month when announcing its developer APIs for integrating ChatGPT into third-party apps. In that announcement, OpenAI hinted it was thinking about the very same fish taco recipe Instacart showcased in this week’s news.

To use the new plugin, users must log in to ChatGPT and look for the Instacart carrot under enabled plugins. The plugin is only available to ChatGPT Plus paying subscribers, but Instacart says that they and OpenAI plan to make it available to all ChatGPT users in the “coming weeks.”

One interesting detail in the announcement was the mention of what are essentially guardrails the Instacart team has built into the plugin. From the post: “At Instacart, we know that large language model technology is still in its early stages, so our ChatGPT plugin is a custom, constrained tool that will be triggered only in response to relevant food-related ChatGPT questions, and people won’t be able to use it for non-recipe related tasks.”

What this means is that the company wants to ensure folks only use its plugin for food-relevant content and don’t try to get it to, say, write a poem about the virtues of its personal shoppers or give suggestions about who to pick for their fantasy baseball team. That said, ChatGPT is a bit unpredictable, and there’s always the chance a clever query crafter could get a brand’s plugin to hallucinate and spit out something off-brand or off-topic, which is why Instacart lets us know it will be rolling the plugin out “thoughtfully and make any modifications as needed along the way.”

I like the move, but I think the tool’s adoption will likely be somewhat limited until we see the integration of the AI tool into the Instacart app. While the announcement doesn’t say when ChatGPT will be embedded within the Instacart app, I’m pretty sure that’s something the developers are working on.

Stepping back, it’s clear that food retail will be one of the most active sectors to integrate generative AI, and not just ChatGPT. Earlier this week, I wrote about the launch of a new proprietary generative AI platform called Open Quin. Open Quin’s first targeted vertical is grocery shopping, where users can ask for food guidance in natural language.

February 20, 2023

Video Sessions: How ChatGPT & Generative AI Will Change The Food Biz

The Spoon’s mini-summit included the following panels:

The Potential Applications of Generative AI – Speaker: Neil Sahota (UN AI Advisor, former IBM Master Inventor, and author of “Own the A.I. Revolution: Unlock Your Artificial Intelligence Strategy to Disrupt Your Competition”)

Generative AI & The Future of Restaurants – Speakers: Hadi Rashid (cofounder, Lunchbox) and Matt Wampler (CEO & cofounder, ClearCOGS)

Creating Next-Gen Proteins with AI – Speaker: Geoffroy Dubourg-Felonneau (machine learning lead, Shiru)

Customer Interaction & AI: What’s the Future? – Speakers: Deon Nicholas (CEO, Forethought AI) and Benjamin Brown (Head of Marketing, ConverseNow)

These sessions are available to Spoon Plus subscribers. To get access, you can subscribe to Spoon Plus here.

February 14, 2023

The Latest, But Not The First: Five Ways AI Altered The Food Industry Before ChatGPT

Generative AI has shaken the tech industry to its foundations. For the first time, Google’s search dominance looks vulnerable, while ChatGPT has elevated Microsoft’s Bing from second banana to sexy beta. Meanwhile, hundreds of new startups are creating vertically-focused SaaS offerings powered by OpenAI, and tech corporations, big and small, are evaluating how to jump on the generative AI bullet train.

In the food world, we have some early arrivals in spaces like restaurant tech, where software companies such as ClearCOGS and Lunchbox are leveraging OpenAI to add new functionality. On the content creator and influencer side, we’re already seeing recipe creators and culinary pros tap into the power of generative AI.

But if you think the arrival of ChatGPT is the first AI with the potential to have a big impact on the world of food, you’d be wrong. In fact, over the past decade, we’ve watched as artificial intelligence has started to transform significant portions of the food world. Here are five ways AI has changed food over the past decade:

AI-Generated Recipes

Over the past decade, one of the most significant milestones for artificial intelligence in the world of food was the application of IBM Watson’s general AI to recipe creation. About ten years ago, the Watson team figured it needed to do something besides beat human contestants on Jeopardy to demonstrate its AI’s powers. Before long, Watson had its own cookbook of what IBM called ‘cognitive recipes’. Eventually, CPG brands like McCormick partnered with IBM to see how they could apply Big Blue’s AI to their business.

Novel Food Discovery and Creation

Over the past few years, a new cohort of startups using AI to accelerate the discovery of novel food ingredients or plant-based recipes has emerged, causing ripples through the consumer packaged goods market by presenting a direct challenge to the more conventional (and slow) way food companies traditionally discover new products. Over five years ago, companies like Gastrograph began using AI to build predictive models of how different consumer cohorts might react to new food products, and more recently we’ve seen a new generation of food companies like NotCo base their entire roadmaps on AI-generated recipes for plant-forward product lineups. On the novel-ingredient side, companies like Shiru and Kingdom Supercultures are using machine learning to find new ingredients that can replicate the functional and taste properties of more traditional animal-based inputs.

Alexa’s Personalized Meal Planning and Recipes

When Amazon showed off Alexa almost a decade ago, in late 2014, most thought it was a cool home-based voice interface for weather forecasts and kitchen timers. But Amazon’s AI-powered virtual assistant helped launch a new way for consumers to do everyday things, including buying food and checking on that roast in the oven. It wasn’t long before Amazon started helping us automate and personalize our shopping lists, and eventually it began creating personalized recipe suggestions based on our past behavior.

Computer Vision Is Everywhere

A little over two years after Amazon debuted Alexa, it opened its first Amazon Go store featuring its Just Walk Out technology. Powered by sensors and computer vision, the new storefront let shoppers pick items off the shelves and walk out without going through checkout. Soon, a whole bevy of cashierless retail startups emerged, offering grocery and convenience store operators platforms for more friction-free shopping powered by computer vision. We also saw computer vision-powered home appliances that let consumers identify food in the fridge or the oven, and computer vision has taken off in the restaurant back-of-house, powering solutions that reduce food waste and optimize food inventory.

Food Robots

While robotics and AI are not always synonymous, many robots are deploying some form of AI to help feed us. Whether it’s Google Mineral’s farm robot modeling plant traits and phenotyping crop varieties or server robots dynamically mapping the layout of a restaurant dining room, we are seeing a proliferation of AI-assisted food robots up and down the food value chain.

As far as generative AI goes, we’ve only begun to see how it could change the food industry. Initial applications are more likely to be in restaurant marketing (like the image created for this post using DALL-E), operations, and customer service systems. But as the technology becomes more powerful and creative programmers figure out ways to integrate generative AI technology into their platforms, the impact of ChatGPT and similar AI systems holds massive transformative potential for the food industry.

If you’d like to learn more about how generative AI will change the food industry, you’ll want to attend The Spoon’s mini-summit, How ChatGPT & Generative AI Will Change the Food Biz, tomorrow. You can sign up here.
