The Spoon

Daily news and analysis about the food tech revolution


data

March 29, 2018

Sage Project Uses Adorable Graphics to Break Down Nutrition Labels

When you’re in the grocery aisle deciding which type of crackers or canned tomatoes or granola to buy, you might check the label for nutritional information. Maybe you seek out the calories, the protein, or the fiber — but what does it all really mean? 

New York-based startup Sage Project is hoping to cut through the confusion with their nutrition data platform. The best part? It looks great doing it.

Started in 2015, Sage Project grew out of CEO and co-founder Sam Slover’s research thesis at NYU, for which he tracked his food intake for a year. During that time he realized that there wasn’t a tracking tool out there that was user-friendly and gave helpful context to what you were eating. So Slover decided to create a nutrition platform that provided a lot of data points, but which also personalized the information and made it accessible, even to someone without a nutrition degree.

What separates Sage Project from other services that quantify food’s nutritional value, such as Weight Watchers, is its level of contextualization. When you click on a branded product on Sage Project’s website, it immediately gives you a visually appealing nutritional breakdown, listing calories, protein, carbs, vitamins, and fat per serving, which is what you would find on any nutrition label. But then Sage Project takes it further, telling you where the product brand is headquartered, which diets it works well with, and other notable health aspects (gluten-free, vegan, etc.).

Food items may also have something called “badges,” which are sort of like Girl/Boy Scout badges, but for health. For example, a pre-prepared bean and cheese burrito might get a badge for being certified organic, or a chocolate bar might earn a badge for having fewer than 5 ingredients.

“We’re not giving advice through the app,” said Sage Project Community Manager (and registered dietician) Georgia Rounder. “We’re providing information — and we want to provide it in a way that’s educational and fun.”

And fun it is. My personal favorite part of the site is a gif that shows how many minutes you would have to bike, run, swim, jump rope, do yoga, or dance in order to burn off the calories in one serving of your selected food. Which can be a little shocking (20 minutes of dancing to burn off one serving of crackers?!), but I didn’t mind. Because the visuals are just so. Darn. Cute.
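(For the curious: the math behind that gif is just division. Here’s a minimal Python sketch; the per-minute burn rates are illustrative assumptions of mine, not Sage Project’s data, and real numbers vary with body weight and intensity.)

```python
# Sketch of the burn-off math behind the exercise gif.
# Burn rates (kcal per minute) are illustrative assumptions, not Sage Project
# data; actual values vary with body weight and workout intensity.
BURN_RATES_KCAL_PER_MIN = {
    "biking": 8.0,
    "running": 10.0,
    "swimming": 9.0,
    "jumping rope": 11.0,
    "yoga": 3.0,
    "dancing": 6.5,
}

def minutes_to_burn(calories_per_serving):
    """Minutes of each activity needed to burn off one serving."""
    return {
        activity: round(calories_per_serving / rate, 1)
        for activity, rate in BURN_RATES_KCAL_PER_MIN.items()
    }

# A ~130-calorie serving of crackers works out to about 20 minutes of dancing.
print(minutes_to_burn(130))
```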

“There’s a whimsical feel that sets it apart from other nutritional services,” said Rounder. After poking around the Sage Project site for a while, I would have to agree; the platform presents a lot of information, but the clean layout and pastel colors (not to mention the animated foodstuffs) somehow make it not overwhelming. That matters, since nutritional data can feel pretty intimidating, especially if you’re trying to figure out how to eat within the guidelines of a diabetes-friendly or ketogenic diet.

Sage Project currently sources its library of data directly from partner brands. So far it has relationships with Whole Foods, Kroger, Walmart, and more. In the future the company hopes to expand those partnerships and work with many more macro and micro brands, growing its offering until, presumably, it catalogs every product on the market.

In fact, the privately funded company has quite a few plans for expansion. In the next month or so, the startup will be rolling out a bigger, more comprehensive platform, as well as a mobile app. The new platform will have a new name (which Rounder said she couldn’t reveal), new features, and will expand on old features. It will also let users get a lot more personalized with their diets. For example, if you’re trying to eat a kidney-friendly or vegan diet, you can let the platform know and it will tell you if each food you select is a good pick or not.

It won’t only be for people trying to follow particular diets, however. Users can also create an individual dietary profile if they want to make it even more customized.

But the most critical aspect of Sage Project’s update will be the addition of a mobile app. Without a mobile component, Sage Project is basically just a cute novelty tool: a fun way to look up particular food products, but not especially useful day to day. If you’re shopping in a grocery store, the last thing you want to do is pull up a website on your phone to get some nutritional context for chocolate-covered almonds.

The app will also give consumers a way to add new foods to the database. If they come across a product or brand that isn’t listed, users can take a photo of the label with the app and Sage Project will add its nutritional information. In the future, I could see the startup teaming up with a grocery delivery service (like Amazon, since Sage Project’s first brand partner was Whole Foods) so that users could shop for foods that were in line with their diet directly from their phone.

Rounder told me that the Sage Project team is also working on image recognition. The team hopes the app will be able to recognize what foods are on your plate so it can give you a nutritional breakdown of each separate ingredient or dish, plus advice on how they fit into your personalized nutrition profile.

Until that day, we’ll have to settle for getting our nutrition breakdown from a break-dancing milk carton and a bicycling watermelon. I’m not mad about it.

December 21, 2017

Quick-Service Restaurants Are Quickly Turning to Facial Recognition

Once upon a time in the not-so-distant past, most people considered ordering food via facial recognition a gimmick that was either unrealistic or just creepy.

Times have changed, thanks in large part to technologies like the iPhone X, which you can unlock using your own mug. And while we’re some distance from facial recognition becoming a facet of everyday dining everywhere, there’s a growing number of restaurants now offering customers this option when it comes to ordering.

CaliBurger was the latest to join that group this week when it launched self-ordering kiosks at its Pasadena, California location. If customers like this move, the company said it plans to roll out kiosks to all 40 of its locations in the future.

Facial-recognition ordering appears to be making its biggest impact at quick-service restaurants like CaliBurger. It’s not hard to understand why: quick service got its name for a reason, and facial recognition can certainly speed up the order and payment process.

Just look at UFood Grill, which earlier this year debuted self-order kiosks at its Owings Mills, Maryland location. According to the restaurant, customers using the kiosks can order and pay in less than 10 seconds. The kiosks use facial recognition to remember customers’ orders for future visits; they’re powered by technology from Michigan-based Nextep.

Addressing the need for restaurants to increase speed, Nextep president Tommy Woycik recently said, “Imagine visiting your local drive-thru and ordering your favorite customized coffee drink with a quick glance at the camera.” Likewise, if you’re in China, you can pay for your next KFC order just by smiling at the camera.

While it’s not a quick-service restaurant, Dallas’ Malibu Poke opened this past November with the option to order via facial recognition already in place. Talking to the Dallas Observer, owner John Alexis referenced the new iPhone, saying that thanks to the phone, ordering via facial recognition is no longer gimmicky. He was also quick to point out that the system Malibu Poke uses actually prevents him or anyone on staff from accessing the scanned faces from customers: “I literally would not know how to find [a customer’s face] if I wanted to. If you want extra cucumbers, that’s between you and the machine.”

Data (who sees it, where it’s stored) is definitely one of the challenges that has to be addressed for facial recognition to become a more widely used way of ordering. (See this year’s lawsuit against Lettuce Entertain You, which owns the Wow Bao quick-service chain.) Just because, for example, Malibu Poke can’t access your facial scan doesn’t mean some ill-humored cybercriminal can’t.

Data will continue to be both a question and a challenge moving forward, but so far it doesn’t appear to be raising too many concerns for businesses. Restaurants themselves are more likely to be preoccupied with things like increasing the speed of these kiosks and dealing with reported glitches around lighting and camera angle: the things that will impact the business’s bottom line today.

November 11, 2017

Google Assistant Is Becoming A Guided Cooking Platform. We Talk To The Person Leading The Charge

Back in April, Google’s Emma Persky wrote a post telling the world that Google had been working on a recipe guidance capability for Google Assistant.

At last month’s Smart Kitchen Summit, she gave a little more detail on what exactly the company has been up to.

Persky, who runs point for Google Assistant’s cooking guidance team, told The Spoon’s Allen Weiner that much of the focus has been on building in contextual understanding of recipes.

“We can talk you through, step by step, how to cook a recipe and answer contextual questions about how to do that,” said Persky. “To do this, we have to build a deep model of what a recipe actually looks like. We have to take the text of a recipe and understand that text so we know what pasta is, we know what type of pasta you’re talking about, we know what the units are, we know the cooking temperature.”

“On the other side, we know what the user is saying. We have a whole bunch of machinery at Google that is able to understand what a user is saying and turn that into a machine question. We have a whole bunch of data about how different people ask these questions, which we use to build a model and understand these types of questions.”
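To picture the “deep model” Persky describes, here is a hypothetical sketch of that kind of structured recipe representation, with free recipe text resolved into typed ingredients, units, and cooking parameters. The classes and fields are purely illustrative, not Google’s actual schema:

```python
# Hypothetical sketch of a structured recipe model of the kind Persky
# describes. Illustrative only; not Google's internal representation.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Ingredient:
    name: str       # canonical food, e.g. "penne" resolved from "the pasta"
    quantity: float
    unit: str       # normalized unit, e.g. "g" or "cup"

@dataclass
class Step:
    text: str                        # original instruction text
    action: str                      # parsed verb, e.g. "boil", "saute"
    temperature_c: Optional[float]   # cooking temperature, if stated
    duration_min: Optional[float]    # timing, if stated

@dataclass
class Recipe:
    title: str
    ingredients: List[Ingredient]
    steps: List[Step]

recipe = Recipe(
    title="Weeknight penne",
    ingredients=[Ingredient("penne", 500, "g")],
    steps=[Step("Boil the penne for 11 minutes.", "boil", 100.0, 11.0)],
)
```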

When asked about how Google is utilizing company competencies like search and YouTube, Persky said that while there’s been significant work done here, there’s an opportunity to get better.

“We do a pretty good job now when you ask on your phone or desktop ‘how do I sauté an onion?’ we show you a nice video of how to sauté an onion.”

But, she said, “there’s a lot of opportunity for the guided cooking feature to more deeply integrate with this, so when your recipe says sauté the onion, and you don’t know how to sauté the onion, we are able to return these types of video answers on the Google Home platform to help you become a better chef over time.”

Persky also discussed how she thought web content schemas could evolve to create a foundation for richer content through platforms like Google Assistant.

“When it comes to companies that have this kind of (recipe) data available to them, there is a lot of opportunity to increase the fidelity of the data that we have access to. At the moment we have schema.org markup, which is a good first pass. It doesn’t carry a lot of detail, and we use machine learning to extract a lot of the context from it. I think the opportunity for a lot of people working on this stuff is to find ways to access higher-fidelity data that we could offer to users as an improved experience.”

When asked by Weiner how schema.org and other web markup languages could improve, she had this to say: “There’s a lot of work we can do to improve the quality of that markup. For example, right now the markup just has one block of text for all the instructions in the recipe. If we could break that down into steps, it would be easier for us to parse; right now we have to apply machine learning across that block to do it.”
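For reference, schema.org’s Recipe type already supports both forms Persky contrasts: recipeInstructions can be a single block of text or a list of HowToStep items. Here is a minimal sketch of the richer, step-by-step variant, written as a Python dict that a site would serialize to JSON-LD:

```python
import json

# Minimal sketch of schema.org Recipe markup with step-by-step instructions.
# A list of HowToStep items is the higher-fidelity form discussed above; a
# single text block is the "good first pass" version.
recipe_jsonld = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Sauteed Onions",
    "recipeIngredient": ["2 yellow onions", "1 tbsp olive oil"],
    "recipeInstructions": [
        {"@type": "HowToStep", "text": "Slice the onions thinly."},
        {"@type": "HowToStep",
         "text": "Saute in olive oil over medium heat for 10 minutes."},
    ],
}

# A site would embed this in the page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(recipe_jsonld, indent=2))
```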

It’s a really good conversation for understanding what Google has been up to as it looks to combine recipe content with its voice AI platform. You can watch the full conversation between Allen Weiner and Emma Persky below:

Ed note: Answers in this interview have been edited slightly for brevity/clarity

April 13, 2017

Podcast: Creating A Common Language For The Internet of Food

A traumatic early experience as a young medical intern set Dr. Matthew Lange on a career course to change the way the world thinks about nutrition. He realized early on that the medical community had it backward: by focusing on treating people already sick from years of poor eating habits, rather than helping people better understand nutrition and make good food choices over the course of a lifetime, it was never going to solve the growing problem of obesity and diet-related health concerns.

Since those early days, Dr. Lange has spent his career at the intersection of food, nutrition, and information. His latest project is helping to create a common language for the food industry to describe the information that new and ever-better technology can now extract from food itself. Lange and others believe that a common language for describing food, one that can be utilized by these emerging technologies, will help usher in new ways to create, handle, distribute, cook, and consume food.

The end result if Lange achieves his vision? Healthier outcomes for people equipped with better information about their food.

After you check out the podcast, you can find out more about Lange and IC-FOODS here.

February 25, 2017

Full Transcript: Creating A Food Data Layer With Edamam’s Victor Penev

 

I recently caught up with Victor Penev, CEO of Edamam, about his company’s effort to create a data layer for the Internet of Food. You can hear that conversation on the Smart Kitchen Show podcast here, or you can read the full transcript below. The conversation has been edited slightly for readability.

Michael Wolf: How are you doing, Victor?

Victor Penev: I’m doing very well. Good to be here, Michael.

Michael Wolf: Now you started your company back in 2011, and you’re one of the early companies I think really to go after this idea of trying to organize food data. Tell us about the original concept for Edamam.

Victor Penev: So the original concept actually started a little bit before 2011. I’ll give you just a little bit of my personal history. I’m a serial entrepreneur. I had a good exit at my last company; we built the largest Internet company in Bulgaria. I had taken a year off and was looking to do something new. Eventually, because I’m a passionate cook and I cook every day, I decided I was going to do something at the intersection of food and technology.

I started looking around the space and very quickly came across what I think is one of the biggest problems related to food: people, even 50 or 100 years from now, will still want to know what’s in their food and how it impacts their health and wellbeing. What I realized is that information about food is not that readily available. It’s oftentimes contradictory, inaccurate, and so on. We decided we were going to try to organize the world’s food knowledge and give it back to people, so they can make smarter food choices and live healthier and happier lives. That was the original idea. That was probably 2010.

Then we looked at various technologies for approaching it and eventually ended up with semantic technology. There was a very simple hypothesis. Semantic technology is one of those things that fails quite often; people try to boil the ocean with it. But we thought that food is a fairly contained domain, without too much spillover beyond what an ontology can handle, so semantic technology could actually work in the food space. I spent some of my own time and money, before formally launching the company, building a little bit of technology just to make sure the semantics would work.

Officially I call it October 2011, but somewhere around that date we launched the company. We initially started as a B2C company, and after a couple of years we switched to a B2B model.

Michael Wolf: In those early days as a B2C company, you did have some success; you garnered hundreds of thousands of downloads of your application. Talk a little bit about those days and why you switched to a B2B company.

Victor Penev: It was a very simple business decision. Initially we thought we’d organize high-quality recipes, tag them for all kinds of nutrition and calories, and provide smart suggestions for what to eat, maybe even meal plans on a weekly basis, and so on. We did have about 800,000 folks install our app across iOS and Android. What we found after a couple of years, as we tried to create a paid product, is that there were very few takers. We realized consumers just expect anything related to food data to be free, so we couldn’t figure it out. A few hundred thousand users aren’t enough to support an advertising model, and I didn’t personally believe in an advertising-supported model anyway.

That was the time when we realized, “Okay, we got to do something different.”

Then just at that exact time, a few catering companies started coming to us and said, “Can you do the nutrition for our recipes?”

Then we said, “Sure! We’ll charge you $20 a recipe.”

They said, “Okay.”

Then we looked at ourselves and said, “Okay, well, somebody is willing to pay for what we have,” so we re-productized everything and launched as a B2B player. That was probably the end of 2013, beginning of 2014, and we’ve been on a B2B model since, offering an API, custom implementations of the API, as well as data licensing.

Michael Wolf: I want to get into that and talk a little bit about your customers and how that’s grown over time, but first talk about this idea of building an ontology of food, using semantic web technology to create this database. What was the goal there, and what does that mean? Was there a lot of work early on to create the categories and figure out where to put things?

Victor Penev: The hard work is not actually building the ontology. Like I said, the food space is relatively self-contained and fairly straightforward to organize. A lot of the difficulty with organizing data around food is that it’s fuzzy. It’s not well structured. It’s not like physics, or chemistry, or any other hard subject where you have a taxonomy and established ways of organizing things.

I mean, food has been around as long as humanity, so people talk about food in all kinds of different ways. There are a lot of implied meanings, a lot of cultural background. The difficulty in structuring food data is not just the ontology itself, but the layering: we built natural language understanding on top of it, the ability to capture anything people say about food and transform it into something quantifiable, nutrition in our case.

We went with semantics because we were looking 10, 20 years down the road, to be able to provide smart suggestions to people about what they should be eating, and that necessarily implies inferencing. We want to be able to know things about the person: maybe they’re allergic to something, whether they’re on a diet, whether they have a heart condition, but also their biochemistry. There are sensors that take real-time blood samples, what have you. And we want to know what their goals are in terms of fitness and health, and start inferring from all the structure that we have what the best meal for them to eat would be. That process is endless.

I mean, new data can be added constantly. I think there’s a big new field coming on board, the microbiome, which over the next 10 years will probably drastically change the notion of how we should be eating. Obviously, there are also sensors that try to constantly measure what’s in your blood, and that’s another thing that will probably hit the market in the next 10 years.

Our goal was to organize and structure all this data in a way that can make meaningful suggestions to people about what they should eat, and that required inferencing. That’s why semantic technology.
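Ed note: to illustrate what inferencing over a food ontology buys you, here is a toy Python sketch. It is our own illustration, not Edamam’s implementation: a few is-a facts let the system rule a dish in or out for a user’s constraints without anyone tagging that dish directly.

```python
# Toy food ontology: a handful of is-a facts (child -> parent).
# Purely illustrative; not Edamam's ontology or code.
IS_A = {
    "penne": "pasta",
    "pasta": "wheat product",
    "wheat product": "contains gluten",
}

def ancestors(food):
    """Walk the is-a chain to collect everything a food 'is'."""
    found = set()
    while food in IS_A:
        food = IS_A[food]
        found.add(food)
    return found

def suits(food, avoid):
    """A food suits the user if none of its inferred classes are avoided."""
    return not (ancestors(food) | {food}) & avoid

# "penne" was never tagged gluten-containing directly; the chain of is-a
# facts still rules it out for a gluten-free user.
print(suits("penne", avoid={"contains gluten"}))  # False
```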

Michael Wolf: And over time, you accumulated a huge database. I think you said you have a database of about 1.7 million recipes and you’re working with companies like the New York Times, Epicurious. Talk about how you provide that information and then how it’s used by these companies.

Victor Penev: That’s one of our major use cases: companies that have lots of recipes. The New York Times and Epicurious are great clients, but we do the same thing for catering companies, for restaurants, for anyone that has a lot of recipes needing nutritional analysis. We really replace the human nutritionist, so to speak, because that’s the alternative for most of those companies, and for some of them it’s just not affordable. If you’re Epicurious, you have 300,000 or 400,000 recipes; even with an army of nutritionists, it becomes very expensive and obviously a no-go proposition.

The way we work with all those companies is very simple. It’s an API integration: they send a recipe in whatever format they have it, we process it on our end, and we do the analysis. It takes less than 400 milliseconds per recipe, and it’s not just mapping ingredients to nutrients. We also take into account techniques, what happens to the food if it’s fried, or marinated, or baked in salt, and so on, and we return the data.

The data we return has up to 70 different nutrients. It’s automatically tagged for about the 40 most popular diets, so all the allergens, anything that is, for example, low-sodium, low-sugar, paleo, vegan, and so on. Any diet you can imagine that has been in popular culture, we tag the recipes for.

We just return this data to them, and then they decide whether to display it to the end consumer. Some of those companies use it to improve their searchability and also for SEO purposes, because it’s metadata that is very relevant to the content they have. That’s how we work with them.
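Ed note: for a sense of what this integration looks like in practice, here is a hedged Python sketch of the flow Penev describes, recipe in, nutrients and diet tags out. The endpoint URL, parameter names, and response fields are placeholders, not Edamam’s documented API.

```python
import requests  # third-party HTTP client: pip install requests

# Placeholder endpoint and credentials; not Edamam's documented API.
API_URL = "https://api.example.com/nutrition-analysis"
AUTH = {"app_id": "YOUR_ID", "app_key": "YOUR_KEY"}

recipe = {
    "title": "Bean and cheese burrito",
    "ingredients": ["1 flour tortilla", "1/2 cup black beans", "1/4 cup cheddar"],
    "preparation": "baked",  # technique matters: frying vs. baking changes results
}

resp = requests.post(API_URL, params=AUTH, json=recipe)
resp.raise_for_status()
analysis = resp.json()

# Per the interview: up to ~70 nutrients and automatic tags for ~40 diets.
print(analysis["calories"])     # placeholder response field
print(analysis["dietLabels"])   # e.g. ["LOW_SODIUM", "VEGETARIAN"]
```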

Michael Wolf: And so, when you look at the evolution of the connected kitchen, you guys have started to look at that space. Increasingly, companies adding connectivity are also trying to add value on top of it. How would you envision working with a company that’s making a device for the consumer, where the consumer wants to understand what they’re reading from a nutrition and health perspective?

Victor Penev: There are a couple of major use cases here. Again, we come from the perspective that people want to know what’s in their food and how it will impact their health and wellbeing, and there are a couple of things people can do. One is obviously finding what they should be cooking, and that’s where our database of 1.7 million recipes comes in; they’re all nutritionally tagged and analyzed. You might be sitting in front of your smart fridge with a touchscreen, or you might be talking to a virtual assistant powered by Alexa or Cortana or whatever it is, and you might be saying, “Hey, I have broccoli in my fridge.”

We can actually know that you have broccoli in your fridge if the fridge is smart enough.

“I’m diabetic, and my husband is on a paleo diet, and my kids are allergic to peanuts. What can I do?”

We can suggest very high-quality meals that you can cook, and from that point on there is transactional capability to create a shopping list. There might also be something like what you mentioned earlier about guided cooking, where a particular recipe has a set of video instructions that take you through the cooking process, and so on. That is one use case.

The other use case, which is probably even simpler and more prevalent, is people just cooking things and then finding out what’s in their food. It’s surprising to me that in this day and age, the majority of meals people eat are home-cooked, and there is no way for them to figure out the nutrition of those meals. Maybe you can read a box of cereal, and maybe you know what’s in a cup of milk, but if you make anything a little more complicated and start trying to track what you’re eating, you have to be very, very precise and spend a lot of time doing it, or you give up. That’s where we come in.

You can just speak in natural language, “This is what’s in that recipe and this is what I did with it,” and within a second we will return the nutrition.

We can tell you, “Okay, well the [unintelligible 0:14:03] that you did is actually 700 calories per serving and it’s got that much salt and that much fat.” Then you can decide whether next time you’re going to cook it or modify the recipe, or maybe serve less of it, and so on and so forth.

Michael Wolf: This seems like the perfect Alexa skill [laughter], hearing you talk about that. Have you guys talked about, either through a partner or on your own, being the backend for an Alexa skill where I can ask, “I’m making this, I have these ingredients, what is the calorie count?”

Victor Penev: Yeah. I mean, we’ve talked to the Alexa team from day one, ever since Alexa launched. Our challenge there is that we never figured out a business model, much like with the B2C space: Alexa is a platform that says build an app, and that app can be used by consumers, except there is no transaction. We don’t get paid by consumers to do that, and we already know how the B2C route goes. We couldn’t figure out the business model on Alexa, but it is top of mind for us.

We’re building natural language into our nutrition research product, a tool we sell to dieticians and nutritionists and restaurants. We are building voice recognition capabilities into mobile devices, and eventually we want to do it in the kitchen as well. I want to do it in every room, actually, but we have to figure out the business model. In addition to Alexa, I know Microsoft is working on Cortana, and they are pushing very hard in that direction.

If we figure out a way for a business to use our capability, or for somebody to sponsor a voice-powered app for the Echo or any other voice-driven device a company puts out there, we’ll build it very quickly. It’s easy because we’ve done all the natural language understanding work upfront, so for us it’s just hooking up the voice recognition to that.

Michael Wolf: Couldn’t you basically build a white-label skill, then go to appliance company X or CPG company Y and say, “You just plug this in. Here’s your skill. You put your skin on it, maybe add a few custom components,” and then they create their own Alexa skill with all this nutritional information?

Victor Penev: That’s a wonderful idea. The only thing I would correct is that I personally want the appliance manufacturer or the retailer to come to us and say, “We’ll pay for you to build that skill,” and then we’ll build the skill.

We’re a scrappy startup, and we try not to put resources against something that is not going to have guaranteed revenue. That is the only thing, but I can definitely see ourselves working with Whirlpool, or Samsung, or Bosch, or any of those companies and powering that particular skill for them.

Michael Wolf: It’s still so hard to figure out the nutrition of this thing I’m making every night, and then you start to throw in all these different branches: I’m going to fry it, I’m going to put it in the oven, I’m going to put olive oil on it. There are just so many, and you guys have the data. I think we’re going to get to the point where consumers can access that information fairly quickly. We’re not there yet, so it takes companies like you, in combination with consumer-facing brands, whether that’s a hardware supplier, or apps, or whatever, to do that.

Victor Penev: Uh-huh.

Michael Wolf: I definitely am in line with you. I think that’s going to happen. I think most consumers will want that.

Victor Penev: Absolutely, I think so, too. To my mind, that’s not a question of if but when and whether 2017 is going to be the year or we’re going to have to wait another year. That is the big question I think.

Michael Wolf: You mentioned a little bit about sensors and being able to detect things. Have you been observing what’s going on in that space? I think it’s an interesting one. We had a company called Nima at our event that does gluten sensing. I saw at CES this year the company making the SCiO, which is basically an infrared food scanner; there’s been a lot of debate about whether or not you can build a low-cost infrared food scanner like the kind they’re doing. It seems like that will be an interesting area as well. Have you guys looked into that space?

Victor Penev: We’ve looked into that space. Like many others in the area, I think it’s still at a very early stage and evolving. The challenge for all those companies is that they serve a particular use case. If you are checking for gluten and you’re among the roughly 0.8 percent of people who have celiac disease, it’s a godsend product for you. The problem with most of those solutions is that they don’t serve the general public, because to serve the general public you have to do a full chemical analysis of the food. You have to be able to report not just the content of gluten, or whether there’s a pathogen in it, but also how much fat, how much carbs and sugar, how much vitamin A.

Right now this is done in chemical analysis labs, and the largest one, in Wisconsin, is 1 million square feet. That’s a lot of equipment you have to fit into a small device. Is that going to happen? I think so. It’s just going to take time to get that million square feet fitted into devices ‑

Michael Wolf: In your pocket?

Victor Penev: In your pocket, yeah, in your hand, or something like that [laughter]. I think that will happen. The other thing that’s interesting about sensors, and I think it’s actually further along, though it probably requires a lot more regulation, is implantable sensors in the human body, and/or stickers that constantly take blood samples in real time and track your biochemistry. To an extent, it’s not even that important what you eat; it’s important how what you eat impacts your body and your own blood chemistry.

It’s important to know what’s in your food so that you can make informed decisions about whether you’re going to eat it, but once you’ve eaten it, it’s interesting to understand how it impacts your blood chemistry and what corrective action to take, if you need to take any. There are many people that need to monitor particular nutrients like that; for people with diabetes or kidney disease it is absolutely necessary. But even for folks that are just watching calories or fat or sodium, it can be very useful. So that’s a whole different set of sensors from the ones that analyze food.

Michael Wolf: Speaking of sensors, in a way I think what Apple is doing with HealthKit is an interesting health layer. I think it would be interesting once you start to fuse the type of data you have with what for example they’re doing with HealthKit. Have you guys looked at integrating?

Victor Penev: Yeah. We’ve obviously looked at HealthKit. We’ve looked at Fitbit. We’ve looked at every single platform out there that does health record management and personalized record management, and we’re very careful not to get into a space where we have to manage electronic health records and be HIPAA compliant and whatnot. But obviously, food intake is an important thing.

I think a lot of those companies are still trying to figure out who’s going to win the race on wearable devices, and the wearables, for better or worse, are just too focused on sensors measuring energy output: how many steps, whether I jumped, whether I’ve done 100 crunches, and so on. The energy input, which is essentially food, is lagging behind, and part of the reason is that energy input is hard to do. Unless you find a way to do it automatically, which would mean measuring somebody’s bloodstream, people are just not disciplined enough.

We made a conscious choice to hold off until we see enough of a use case of people being willing to input data through their mobile devices. The interface is still not there, and I think we probably have the most advanced interface with voice recognition in terms of accuracy. A lot of people do voice recognition, but we have very, very high accuracy in measurement and situational analysis. Even then, it’s still more for people who are health nuts or have a particular disease than for the general public.

I think there’s going to be a watershed moment when Apple, or one of the other really big companies in this space, says food is important to us now, so let’s build tools around food. Apple and Fitbit look like they’re going to be the winners. When they start pushing food into their devices, that’s going to be the moment we jump on the bandwagon.

Again, we’re very careful. There are a lot of trends we could put our resources against, so many different things, and we decided this is not something that’s going to happen in 2017.

Michael Wolf: I like the way you phrased that. They’re all geared today toward measuring energy output, and food really is the energy input. I wasn’t necessarily suggesting, well, I guess I was a bit suggesting going into healthcare and competing with them, but a fusion of the capability and data you have with HealthKit data, maybe in a consumer-facing app on an iOS device, would just be very powerful. It sounds like you’re thinking the same thing: you’re just waiting for the right time.

Victor Penev: Yeah.

Michael Wolf: Maybe it’s one of your partners. Maybe it’s Apple using your data to do that.

Victor Penev: Yeah. I mean, that’s exactly our play. We hope that eventually we’re going to plug into HealthKit. We’re going to plug into every single platform. We also integrate with Validic; I don’t know if you’re familiar with that company. But essentially, we’ll have data output that goes into personal profiles when people eat something, and we’re starting to do that. Right now, though, we’re focused on providing these tools to dieticians, to restaurants, to professionals that pay for this service because they need it to run their business. That same thing can become a B2C product very easily: we drop the price, remove some of the features that food service professionals or dieticians may need, and it becomes a B2C experience that can be plugged into HealthKit or anything else and just become an app that does that.

We do have something very similar with Samsung. Maybe 2 or 3 years ago we did a partnership with them for S-Health, which is essentially the equivalent of HealthKit. We were the first food app there, with recipe search based on nutrition and the ability to log a recipe, with all its nutrition, directly into S-Health.

We did that some time ago, and that experience is part of why we think the market is not ready yet. But we’re closely monitoring it.

Michael Wolf: Take a step back: if you look 5 years from now, what does the kitchen look like with regard to nutrition information, all these kinds of devices, and data layers like yours? Will it have arrived at that point?

Victor Penev: Well, I don’t know if it’s going to be 5 years, but I’ll tell you what I think the situation is ultimately going to be. I think the kitchen is moving maybe 2 years behind the smart car: there was a lot of investment in that space, it eventually started to become a reality, and now it’s a question of somebody putting the right regulation in place so smart cars can become real.

I think the smart kitchen is probably a couple of years behind, so maybe in 5 years it will happen, because there has been a lot of investment from big-name companies in the space. I think every single device in the kitchen will be connected. It’s the IoT dream, though the devices in the kitchen don’t need to be connected to that many other things; they need to communicate with each other. The fridge and the stove, the sous vide circulator and your food processor only need to share a common platform and be able to communicate. If the fridge has onions, what can you be doing with those onions? There has to be some communication with the oven, where maybe you’ll be putting them, and that will form a particular recipe or a particular way of cooking; or maybe they get chopped in the food processor.

There’s got to be a connection there, but I also think every single device will have an interface. A device will probably have a touchscreen, or a voice recognition interface, or both. It will probably have some kind of display for important information. That might be a video that teaches you how to cook something that’s on your stove, or a shopping list, or a recipe suggestion, which I hope we will be powering on a new fridge, or even just a timer on your kitchen appliance, or a built-in scale that tells you how much of whatever you’re cooking with.

Beyond those interfaces, there’s going to be overall software that runs them, a kitchen operating system, and there’s going to be a data layer, because this kitchen operating system and the interface humans have with all these devices will necessitate data. It’s not only specialized nutrition data; it can be data about cooking, about the provenance of the food, anything related to making your experience in the kitchen much more seamless.

I’m getting away from technology here, but it’s about winning the kitchen back for the human. We used to enjoy being in the kitchen and sharing food, and we got away from that with microwaves and TV dinners and whatnot. I think technology can actually bring us back to the kitchen and the joy of cooking, because it’s going to make cooking a lot easier, food is going to be delicious every time you make it, and everybody will love it.

That’s the vision I see, and I hope we’ll be part of that solution, specifically on the data layer for foods and recipes and nutrition, so that we can help people make smart food choices and eat better.

Michael Wolf: It sounds good. We’d love to have you out in Seattle at the Smart Kitchen Summit to talk a little bit about it, so thanks for spending some time with me today.

Victor Penev: Thank you, and yes, we’re planning to be in Seattle.

Why don’t you subscribe to our free weekly newsletter to get great analysis like this in your inbox?

January 30, 2017

Podcast: Creating A Food Data Layer With Edamam’s Victor Penev

Victor Penev set out five years ago to create a semantic data layer for the world of food. Today his company, Edamam, has one of the world’s biggest food databases and provides food and nutrition data to companies like the New York Times and Epicurious. Now Penev wants to bring his data to the world of connected cooking appliances.

You can subscribe to the Smart Kitchen Show on iTunes or via RSS.

You can find out more about Edamam here.

November 19, 2016

You’ll Be Surprised How Millennials Discover New Recipes

If you think Millennials only find recipes through Snapchat, Facebook, or YouTube, think again. As it turns out, twenty-somethings are actually more likely to ask Aunt Judie or mom for her casserole recipe than to get it on social media.

As shown in a recent survey of over one thousand households conducted by NextMarket Insights and The Spoon, the recipe discovery behavior of Millennials, as well as Gen-Xers and Baby Boomers, is surprisingly diverse. In fact, the number one source of recipes among twenty-somethings, by a long shot, is friends and family at 63%, followed by social media (53%) and recipe sites (46%).

One tech-savvy friend that is likely to provide more and more recipes in the future? Alexa. And as I wrote this morning, chances are recipe sites like Allrecipes will be there to capitalize.

October 28, 2016

Podcast: SKS16 Sessions – The Kitchen Operating System

This podcast is the audio from a session at the Smart Kitchen Summit called the Kitchen OS: Bridging Islands in the Connected Kitchen.

The panelists include:

  • Moderator – Michael Wolf (host of the Smart Kitchen Show)
  • Kevin Brown, CEO, Innit
  • Charlotte Skidmore, Director of Energy & Environmental Policy, Association of Home Appliance Manufacturers
  • Ben Harris, CEO, Drop

The panel description was as follows: “Today’s smart kitchen is a series of disparate platforms and experiences. Can technology help us bridge shopping, discovery, storage, prep and cooking? A discussion of creating a common ‘smart kitchen stack’.”
