
The Spoon

Daily news and analysis about the food tech revolution


Google

May 24, 2017

More Than Hot Dogs: Pinterest & Google Image Recognition AI Make A ‘Shazam For Food’ Possible

In this season of Silicon Valley, one story line has housemate and programmer Jian-Yang developing a food recognition app called ‘See Food.’ Since the idea was born out of a spitballed pitch for a “Shazam for Food” by Jian-Yang’s landlord Erlich Bachman, it’s not altogether surprising that when Jian-Yang finally gets around to hacking together the app, it’s only good for one thing: telling us whether whatever is in front of the camera is a hot dog or not a hot dog (and yes, the once-fake app is now a real fake app you can download for real from the app store).

Silicon Valley: Not Hotdog (Season 4 Episode 4 Clip) | HBO

While a “Shazam for Food” pitch seems like the perfect sendup concept for a satirical show about the tech world, the truth is there have been significant advances in machine vision and machine learning in the past few years that make food recognition a very real and potentially useful application.

These advances were on display this month as both Google and Pinterest touted image recognition services called “Lens.” While Pinterest has been working on image search since at least 2015, it rolled out its Lens this past February. In a blog post, Pinterest co-founder Evan Sharp highlighted a food use case as an example of how Lens could work.

“You can also use Lens with food,” wrote Sharp. “Just point it at broccoli or a pomegranate to see what recipes come up.”

And this week, after Google launched a similar feature with the same name, Pinterest apparently felt the need to emphasize the food recognition capability of its Lens offering with a new blog post that repeated what Sharp told us in February: users can point Lens at food to get recipe suggestions.

“Our visual discovery technology already recognizes objects in more than 750 categories, and people have been busily pointing Lens beta at everything from lemons to strawberries to find new recipes to try. And now we’re rolling out a way for you to Lens an entire dish and get recipes to recreate the meal.”

Google introduced its Lens image recognition technology last week at its annual developer conference, Google I/O. Not that Google is new to image AI or even food recognition. The company has been working on image search for probably close to a decade, and in 2015 it showed off a project called Im2Calories that gave calorie estimates based on image analysis of food. And while Google didn’t highlight any specific food use cases for its version of Lens at I/O, there’s no doubt that the company and its partners will explore using image AI to surface information such as recipe recommendations (like Pinterest) or nutritional information (like Im2Calories).
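
To make the idea concrete, here is a minimal sketch of the kind of pipeline a “Shazam for Food” builds on: a pretrained image classifier labels a photo, and the top label seeds a recipe lookup. This is an off-the-shelf ImageNet model, not anything Google or Pinterest has published, and the recipe search URL is purely illustrative.

```python
# Minimal "Shazam for Food" sketch: classify a photo, then look up recipes for the label.
# Uses a generic ImageNet-pretrained model, NOT Google's or Pinterest's actual Lens models.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")  # downloads pretrained weights on first run

def identify_food(image_path):
    """Return the model's top label for the photo, e.g. 'hotdog' or 'broccoli'."""
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    batch = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))
    _, label, confidence = decode_predictions(model.predict(batch), top=1)[0][0]
    return label, float(confidence)

def recipe_search_url(label):
    # Hypothetical recipe search endpoint -- stands in for a Lens-style recipe lookup.
    return f"https://example.com/recipes?q={label.replace('_', '+')}"

if __name__ == "__main__":
    label, confidence = identify_food("dinner.jpg")
    print(f"Looks like {label} ({confidence:.0%}) -> {recipe_search_url(label)}")
```

Production systems like Lens are trained on far richer food taxonomies (Pinterest cites more than 750 object categories), but the basic shape of the pipeline, classify and then retrieve, is the same.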

Of course, all of this follows Amazon’s recent push into image AI with the debut of its own camera-enabled Echo devices and the continued maturation of its AWS AI service, Rekognition. My guess is that, just as the big players have doubled down on voice assistant services, image recognition and analysis have reached a maturation point that makes them ready for consumer applications.

While clumsy efforts at food recognition like the Smart Plate – as well as two big companies launching image recognition services with the exact same name – make food image recognition a ready topic for satire, the reality is the technology is reaching a point of maturity and usefulness that maybe – just maybe – we’ll soon have a Shazam for Food that consumers really want.

May 6, 2017

Google Hopes Investment in AgTech Bears Fruit

Google and Apple together in an investment deal? Wait, it appears to be Google and apples—as in the fruit.

GV (formerly known as Google Ventures) is among those leading an investment round for Abundant Robotics, a Hayward, Calif.-based company whose robotic apple-picking machine has gained a lot of interest from the agricultural and financial sectors. Well into several trials, the device uses a combination of robotics and AI (in the form of computer vision) to gently pluck apples from trees and carefully place them in bins for washing and packaging. The $10 million Series A round will be used to commercialize the product.

Dan Steere, co-founder and CEO of Abundant Robotics, says the path from prototype to the current stage—ready for commercialization within a year—is a slow and patient process. “The biggest challenge is understanding when you’re at a point with a set of technology that you can move from research to actually building a useful product, and that’s the hard part about startups: understanding when the capabilities are really ready,” Steere told AgFunder News in a recent interview. “In our case, there was a lot of foundational work and early research. It wasn’t clear what we were trying to do would work, so there was a lot of interaction with growers through prototypes and real-world testing to prove we were on the right track.”

Robotic apple picker trials continue in Washington

Steere’s team at Abundant Robotics came from SRI International, a non-profit research center, with its members bringing backgrounds in robotics and software. The initial funding to build a prototype came from the Washington Tree Fruit Research Commission, which seems logical given that Washington is among the leading apple-producing states in the country. Another investor in the company is BayWa, a German agricultural investment firm with interests in the New Zealand Envy apple business.

The device’s harvesting approach is a refinement of previous technology-based attempts at reducing labor costs and increasing efficiency in harvesting. What sets Abundant Robotics apart is its computer vision system, which can pick out the apple on the tree from the branches and other foliage. By carefully pinpointing the apple, there is less chance of damaging the fruit or the tree. The apple is then carefully plucked from the tree and softly placed in the device’s bin.

Abundant Robotics is by no means alone in its quest to bring technology to the harvesting process. FFRobotics, an Israeli firm, is also working on a similar solution, which it claims can gently pick 10 times as much fruit as the average field worker. The company hopes to be in field testing by the end of the year.

Beyond fruit picking, Blue River Technology, based in Sunnyvale, Calif., makes smart boxes that hook onto tractors and identify the optimal location to plant, the right time to fertilize and the best time to harvest. Autonomous harvesting is gaining a lot of traction in Europe (perhaps even more than in the U.S.) with projects such as CROPS (Clever Robots for Crops), a pan-European project sponsored by the EU. The goal is automation of the agricultural business, and the work has already produced results such as a robotic greenhouse pepper picker.

While certainly a delicate subject, the issue of replacing farm workers with robots must be addressed.

“Who knows what this administration will do or not do (related to immigration)?” Jim McFerson, head of the Washington State Tree Fruit Research Center in Wenatchee, told the Minneapolis Star-Tribune. For farmers, “it’s a question of survival.”

Beyond immigration issues, there is grave concern among those farm workers whose livelihoods depend on regular work. The eventual loss of jobs for humans will be huge, said Erik Nicholson of Seattle, an official with the United Farm Workers union.

“They are scared of losing their jobs to mechanization,” Nicholson said. “A robot is not going to rent a house, buy clothing for their kids, buy food in a grocery and reinvest that money in the local economy.”

March 6, 2017

Wine and AI: A Perfect Pairing of Technology and Tradition

If you have trouble figuring out the best wine to pair with tonight’s dinner, we have some good news: artificial intelligence may soon be able to help you with that age-old question, ‘Chardonnay or Sauvignon Blanc?’ That’s because a new wave of AI-powered virtual sommeliers is now available to help make those decisions.

Old Problem, New Solution

For decades the wine industry has struggled to overcome the anxiety associated with selecting a wine. Now, thanks to technology, you no longer have to have an awkward conversation with the clerk at the wine store; you can turn to a virtual sommelier to pick the perfect bottle.

There have been many virtual wine selectors available for some time.  However, we are now seeing increasing intelligence integrated into these solutions, making them both more powerful and more personal.

Wine Ring, headquartered in Syracuse, New York and founded in 2010, offers one of the most personal wine selection experiences available. Unlike other apps that offer wine suggestions based on pairing suggestions or expert ratings, Wine Ring bases suggestions on your individual preferences. The app uses advanced algorithms to develop a personal profile from your ratings of wines and then recommends bottles based on your taste profile. The more wine you drink and rate, the better the AI and the better the wine recommendations.
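
Wine Ring hasn’t published how its profiling actually works, but the general idea of learning a taste profile from ratings is easy to sketch. Below is a hypothetical example: each wine is described by a few invented style attributes, thumbs-up and thumbs-down ratings are averaged into a preference vector, and unrated bottles are ranked by similarity to that vector. The wines and attribute values are made up for illustration.

```python
# Hypothetical taste-profile recommender in the spirit of Wine Ring's approach.
# Each wine gets simple style attributes [oak, acidity, sweetness], values 0..1, invented here.
import numpy as np

WINES = {
    "Oaky Chardonnay":  np.array([0.8, 0.2, 0.3]),
    "Sauvignon Blanc":  np.array([0.1, 0.9, 0.2]),
    "Dry Riesling":     np.array([0.0, 0.8, 0.4]),
    "Buttery Viognier": np.array([0.7, 0.3, 0.5]),
}

def taste_profile(ratings):
    """Average liked wines (+1) and disliked wines (-1) into a single preference vector."""
    vectors = [sign * WINES[name] for name, sign in ratings.items()]
    return np.mean(vectors, axis=0)

def recommend(ratings, top_n=2):
    """Rank unrated wines by cosine similarity to the user's taste profile."""
    profile = taste_profile(ratings)
    scores = {}
    for name, vec in WINES.items():
        if name in ratings:
            continue
        scores[name] = float(vec @ profile /
                             (np.linalg.norm(vec) * np.linalg.norm(profile) + 1e-9))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# The more bottles the user rates, the more signal the profile has to work with.
ratings = {"Oaky Chardonnay": +1, "Sauvignon Blanc": -1}
print(recommend(ratings))  # ranks the richer, oakier unrated bottle first
```

As the article notes, the recommendations improve as you rate more wines, which is exactly the behavior this kind of preference model depends on.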

Google is also serving up wine suggestions. Google’s new “My Wine Guide” is a conversation action added to Google Assistant for wine pairing suggestions. While My Wine Guide is currently limited in its depth of AI and the personalization of its suggestions, what makes Google’s virtual sommelier most promising is how it pairs an easy conversational format with computer-based wine queries. Looking forward, “My Wine Guide” could become even more useful if it took the food pairing suggestion and then offered a variety of wines matching that pairing at different price points, which the user could verbally order and have delivered via a service like Drizly, all from conversation-based commands.

Once you get your recommendation from Wine Ring or “My Wine Guide,” you can take it to Wine Searcher, a tool for locating and pricing wine (and beer and spirits) across online stores. Wine Searcher uses artificial intelligence to classify wines, linking hundreds of thousands of products and tens of thousands of retailers to produce wine suggestions and pricing based on the search terms you enter.

Wine Searcher is also integrating label recognition technology and developing a chatbot to improve user interaction with the site.

Vivino is already using label recognition technology to help guide wine purchases. With Vivino, the user simply takes a photo of the wine label they are considering and is instantly shown the wine’s rating, average price and reviews from its community of 22 million users. The app then tracks which wines you scan and rate, but does not at this point offer suggestions based on your profile.

All of these tools aim to take the age-old mystery out of picking wine by applying artificial intelligence. While the wine industry is steeped in tradition and ancient ways, it could be the very modern application of artificial intelligence that makes wine and wine selection relevant to today’s consumers.

Why don’t you subscribe to our free weekly newsletter to get great analysis like this in your inbox?

Image credit: Flickr user a.has

March 3, 2017

Full Transcript: Talking VR, Virtual Eating & Smart Design With Google’s Basheer Tome

Last month I talked with Basheer Tome, the lead designer for Google’s virtual reality hardware. We covered a lot of ground, from how virtual reality will change how we live in the home to virtual eating to smart design and new interfaces.

You can listen to the podcast here or read the full transcript of this very informative conversation below.

The conversation was edited slightly for readability.

Michael Wolf: I don’t think you’re the only one at Google on their virtual reality side of things – but you’re a hardware interface designer for virtual reality with Google. How are you doing, Basheer?

Basheer Tome: Good, good. I’m actually the only one by title…

Michael Wolf: You are?

Basheer Tome: [laughter] Yeah. I’m trying to make this a thing. There is a sort of unproven space between industrial design and user experience design that generally gets covered by one or the other. There are the people, the actual designers, who care about how a product looks and how it sort of has its own brand and aesthetics, and there are the user experience designers who are worried about what buttons are on it, how those work, how those function, and how people use the product. But very often one or the other has to do both of those jobs when it comes to the actual buttons on a piece of hardware.

Usually it ends up getting skewed one way or the other: you get something beautiful that’s hard to use, or you get something easy to use that’s not quite so appealing. I try and sit in between the two and help connect them together, and I think a lot about how something feels in your hand, how well it clicks, how easy it is to use, and how you can remember where all the things are and how they work.

I think part of the interest on the food side for me is that for some reason all these kitchen manufacturers, the minute they put a chip on it, they forget everything they’ve ever learned and just go nuts.

Michael Wolf: [laughter] I will definitely dive into where your thoughts are around some of the products being designed for the kitchen, because there are a lot of different attempts, a lot of different things going on. But let’s talk a little bit more about what you’re doing every day at Google. Google, as many people know, is investing heavily in virtual reality, but the way you think about it, based on our earlier conversation, is that you guys are trying to make it more of a platform for something other than just gaming, because a lot of people are thinking of virtual reality in the context of let’s play a really immersive video game, but you guys are trying to make it more broadly applicable to our lives in a lot of different respects.

Basheer Tome: Yeah, we really see virtual reality and augmented reality, and there’s a third variant, mixed reality, which is more mediated, but we really see these spatial computing platforms as the next version of computing as we know it. It may not take over 100 percent of the way you interact with computers, but we do see it as a major step and a major piece going forward. For that to be true, it has to break away from being just about gaming. I think there are a lot of really great gaming experiences you can have, and experiences I personally have already had, but it really comes down to trying to take this thing that people tend to enjoy already a little bit on the gaming side and trying to open it up to everyone else. We do that through creating apps that are actually relevant to them, that help solve needs and problems, but then we also think about how to bring down the cost to make it more accessible, and work with other manufacturers and partners so that there is a wide variety of platforms with different abilities that market to those people. We’re really trying to turn this into a broader, generalized platform, more than just a gaming rig.

Michael Wolf: For people who aren’t into this idea of virtual reality, augmented reality, mixed reality, very briefly explain what the differences are between those three.

Basheer Tome: Sure. Virtual reality is where you’re putting a headset on your face made out of cardboard, plastic, or fabric, and there is a screen in there, and we’re tracking where it is, either rotationally or, in an ideal world, in free space, and it really feels like you’re replacing your entire reality around you with a virtual one. You’re interacting with that, and that can happen over digital space and you can interact with other friends. But for the most part, you’re taken to a different place.

With augmented reality, you stay in the same place and we’re actually overlaying information on top of that. What that generally means is that you have a transparent display rather than something opaque and ‑

Michael Wolf: Like Google Glass?

Basheer Tome: No. I think that’s one of the biggest mischaracterizations of Google Glass, because Google Glass never put stuff directly between the majority of your vision and what you’re looking at. Google Glass is what we call a heads-up display, which sort of stays in the corner of your eye and is supposed to be a more comfortable, easier-to-use notification center and image capturing device rather than something that actually augments everything you see.

Mixed reality is more of a hybrid, in-between step. Right now with augmented reality, one of the biggest drawbacks is that you overlay light on top of what you’re seeing, so a lot of times you can’t really replace what you see. You can only add stuff on top. Mixed or mediated reality ends up being more like you actually have opaque pixels and you can actually replace things that you see in your vision. Oftentimes nowadays that means using a screen, but then you’re using a camera to actually show you what’s going on outside, and then you’re editing that video live rather than just overlaying it purely on top.

Michael Wolf: When you talk about augmented reality, some of the earliest, I guess, instantiations of that coming into the world, I remember, were on Android phones. You would have a smartphone and you would kind of move it around, and through the camera, it would overlay information on top. Is that basically what we’re talking about as the most common form of augmented reality today, when you’re using some sort of smartphone app?

Basheer Tome: Yeah, I think nowadays that’s definitely the most common variety of it. I think one of the biggest drawbacks it has right now is that oftentimes it doesn’t really have a whole lot of awareness about what’s actually in your world. It knows generally what you’re looking at and that you’re rotating the phone, and it can put stuff on top. I think Pokémon Go is a great example of that, where you see the camera behind and the Pokémon sort of sitting there, and they might try to do some clever stuff with the Pokémon lying on the ground. But for the most part, it doesn’t know if there’s a chair there or if there’s a door. It’s not as smart as it could be, and I think when people talk about the future of augmented reality, it’s much more in-depth and much more involved. That’s part of why we haven’t really gotten there as much yet, because you actually have to start scanning the world, tracking the world, and knowing where every single object is in free space.

Michael Wolf: In Pokémon Go, correct me if I’m wrong, I think it basically takes GPS data to give it a basic understanding of the world, but it hasn’t really gone through the physical world and created this really nuanced, processed database of all the spaces. Am I right on that?

Basheer Tome: Yeah, that’s correct. I think in a true augmented reality world, the way people use the term, if you were looking at a Pokémon standing in the parking lot, then someone else could be looking at it in the exact same space, in the exact same position, and you’d both be looking at the same one. Whereas today with the game, if you’re generally in the same location, you end up finding the same Pokémon, but they might not be in the same spot.

For Pokémon Go, it’s still a great, fun experience, and so I think matching the fidelity of games and applications to the fidelity of the actual experience you are able to bring can still make for compelling experiences. You don’t have to have a fully augmented reality world to have a great time.

Michael Wolf: When you look at the capabilities of what we have today in augmented reality, in a home context, I want to talk a little bit about how you could possibly apply this type of technology within our homes to make them a richer experience. I can certainly see how it would be valuable for people who maybe have mobility issues. When it’s dark and they can’t see, they can maybe use augmented reality to identify things in their surroundings. Do you see huge potential for using augmented reality or even virtual reality within the home context?

Basheer Tome: Yeah, I think there is definitely a crazy amount of possibilities for that, and I think the vast majority of them are stuff we can’t even really think of until we have the technology out there. I work on the input teams specifically, and we all sort of report up to this broader virtual reality, augmented reality and mediated reality group. It’s a big combined department.

Michael Wolf: When you say input, you mean you’re taking in information from the world around you to process?

Basheer Tome: No, it’s less on the sensing side and more on how a person interacts with this device.

Michael Wolf: Got it, okay.

Basheer Tome: Yeah, much more design-centric and people-centric than technology-centric, and so we actually interact with the Tango team a lot, and we work with them, trying to think of different ways people can use this technology and integrate it into their daily lives. I think Tango especially has gotten me pretty excited about all of these possibilities and how fast they will be coming. I mean, there are already multiple things coming down the pipe, and we’re really looking at a broader adoption of the technology.

Michael Wolf: For the audience, very quickly explain the Tango concept.

Basheer Tome: Sorry. Tango is our first-party augmented reality tracking system that uses a camera and a few other extra sensors that are relatively cheap to reconstruct your world and actually figure out where you are, absolutely positioned relative to the entire Earth, which is crazy and awesome. It can figure out not just where you are in the world generally, but it knows that you’re in this room, in this exact spot, and you’re looking at this door that it has seen and scanned before.

Michael Wolf: So for each person’s reality, the space they’re moving around in, it creates its own unique personalized database. It’s going to know my home because it’s scanned it before. Then from there, it can add richness to that. It can maybe add information and interactivity?

Basheer Tome: Yeah. I mean, I think what’s unique about it is that, yeah, it can keep personalized data about you and you can save your own data, but I think what’s unique is that it actually has a recollection of, say, your desk, and it knows where it is and that you’ve looked at it, so that if you had placed a box on your desk, your friend can come over, open up their phone, boot up Tango, start an app, and also see that same box in the same place.

Basheer Tome: Yeah. It’s really starting to catalog and connect all these things in real life, in real space.

Michael Wolf: So I don’t have to actually buy my wife like a real gift. I can just create a virtual gift and leave it there for her?

Basheer Tome: [laughter] You can let her know that she has a real gift.

Michael Wolf: [laughter] I think it’s probably a good idea to keep buying stuff. That’s the general rule. Yeah, my wife probably won’t like my virtual reality, in part because she doesn’t necessarily like ‑ I don’t think she likes the idea of virtual reality, and that’s another topic entirely.

You know when you go to Universal Studios, all the rides now are basically you moving around in this chair while you look at a screen, and it creates the illusion of moving fast through space, almost like a 3D experience. She doesn’t like that. She doesn’t like watching 3D movies, so she’s really wary of this idea of virtual reality. Is that something you guys are trying to work on, so people aren’t just getting that weird sense of disorientation when they’re in these 3D environments?

Basheer Tome: Yeah. That’s definitely one of the biggest things we’re trying to help fix. I think in particular virtual reality these days seems pretty in opposition to a lot of women’s sensibilities, and a lot of this is because it’s this big, expensive, annoying thing, and I think in general women would much rather not deal with useless stuff.

Michael Wolf: By the way, have you seen those Samsung advertisements where they showed a person in the middle of a room with a big mask on and there’s a bunch of people around them, looking at them? Those are just terrible commercials because they just look so awkward [laughter].

Basheer Tome: We’re actually huge fans of their product. I think they got a lot of things right really early on, and that’s not Google forcing me to say that. I think it actually is a pretty great product.

Michael Wolf: I think this idea of a person sitting there with the headset on in the middle of a room with people looking at them, that just seems like an awkward social situation for a lot of people, and maybe even more so for a woman like my wife.

Basheer Tome: Yeah, I think one of the biggest drawbacks of a lot of virtual reality marketing today is that it’s aimed at gamers. It’s aimed at dudes, for the most part: it’s black, it’s big, it has all these big cables and all these crazy high setup costs, and it’s just a lot of hassle for not a lot of gain. We are really trying to fix a lot of that, and we’re really trying to open it up to a lot more people.

Michael Wolf: Let’s talk a little bit more about some of the types of applications we can use with these. One of those, I think, is eating. I know that’s a particular passion for you. Talk a little bit about where we are on that, because to some people this seems like maybe this weird, unfathomable thing; you can never really create a virtual eating experience. But some people are actually working heavily on this, because it seems like there are a lot of applications, like weight loss, etc.

Basheer Tome: Yeah. I think there are a lot of interesting possibilities there. I just want to be clear that, as of today, the visual of having this big black box strapped to your face while getting chili all over your shirt as you’re trying to eat is, I think, pretty dystopic and comical. But I think there are a lot of interesting possibilities for sure. There are a lot of interesting studies revolving around using audio to manipulate your sensations a bit while you’re eating, and there’s a lot about color and visual information.

I think the weight loss angle could get there. I’m a little less bullish on that, but I think there are a lot of interesting use cases and I can try to dive into a few of those. One of my favorite examples for sure is Heston Blumenthal’s dish, the Sound of the Sea.

Michael Wolf: Yeah, so explain that for people.

Basheer Tome: Yeah, I think he made it around 2010 at The Fat Duck, but it’s this super ocean-y dish where you’ve got scallops, sea foam and flowers, and it looks like this beautiful ocean wave on the beach. He was really fascinated with trying to enhance the sea and seafood taste that you get there. I think he ran across this one research paper that said that if you listen to the sound of waves, or eat at the beach or by the ocean, you get that sensation a lot more strongly and you taste the ocean more strongly if you can hear those waves.

What he actually does is serve the dish with a conch next to it, and inside the conch is an iPod playing the sounds of waves, with two little earbuds that stick out. They instruct you to place them in your ears and then you eat the dish, and it sort of puts you in the right time and the right place and really opens up your senses.

Michael Wolf: For people who don’t know Heston Blumenthal, he’s basically a celebrity chef, the chef behind The Fat Duck, and probably one of the most famous chefs in the world. And so what you’re saying is he’s also, in a way, a pioneer in virtual eating?

Basheer Tome: Yeah, I guess it depends on how you define the virtual aspect of it.

Michael Wolf: Yeah, yeah, or it’s multisensory cooking or another term.

Basheer Tome: Yeah, yeah. Another term I’ve seen is cross-modal sensory where you’re trying to use sound or some other non-taste sense to augment your taste.

Michael Wolf: You talked a little bit about using sound. There’s obviously the olfactory sense, the sense of smell. There’s also been research into virtually recreating taste sensations using some weird kind of contraption that sends electrical impulses to your tongue. Do you follow all this stuff that’s on the cutting edge of maybe even manipulating your taste senses?

Basheer Tome: I’m really skeptical about where they’re at with some of that stuff. I work a lot in haptics, thinking about how you can use vibrations and different types of stimuli to create different sensations in virtual reality, and there are these electrical pads that you stick to your arm or different parts of your body, and you can send an electrical current that actually actuates your muscle.

I’ve seen a lot of that stuff, and it’s almost marketed as feeling like your arm just moved, but it feels a lot more like a reflex than like your body intentionally moved it. It gets really sting-y and tingly. I’m skeptical that if you put it on your tongue or on your throat it really would feel just like eating a steak.

Michael Wolf: But this is an area where you’ve taken a lot of interest, so talk a little more broadly about how you think about it. You talked about Heston Blumenthal, but it’s an area you have a personal interest in. How do you think about this idea of combining either augmented or virtual reality with food in some way? What are some of the possibilities in the future?

Basheer Tome: I think some fun little examples of that are how the frequencies you hear while you’re crunching or eating, especially as they relate to texture, have a huge effect on that sense of texture. There are fun tricks you can try at home: if you put on some really good noise-cancelling headphones, some nice Bose ones or whatever brand you prefer, and you try to eat some potato chips, they don’t feel as crunchy because you don’t hear those high-frequency sounds. Similarly, if you keep wearing the noise-cancelling headphones, put your hands on a chalkboard and slide them across, it feels a lot smoother, because a lot of your sensory information for those higher frequencies comes through your ears rather than through your fingers. Your fingers mostly detect the lower-frequency vibrations.

Michael Wolf: Is it less horrible if you’re scratching it with your fingernails with that?

Basheer Tome: Actually, yeah. It is less horrible if you wear the noise-cancelling headphones; it feels a lot smoother. There’s a guy, Charles Spence, who’s done a lot of these experiments and published them. There’s a podcast I love, Gastropod, from Cynthia and Nicola, and they had a whole episode where they interviewed Charles. I think it’s called Crunch, Crackle, and Pop, and yeah, it’s fascinating. There’s a lot of really interesting work there on just the audio component.

Michael Wolf: Because we don’t think about the separation of all these senses, and what you’re saying is the experience completely changes if you manipulate one part of it. Maybe it’s the hearing: you put on noise-cancelling headphones, and something that could be entirely horrible, like running your fingernails down a chalkboard, changes in nature. That idea is maybe applicable to a lot of different things.

Basheer Tome: Yeah, yeah. I think the general concept of replacing your senses with digital ones is a little more far off, but I think the concept of augmenting them is a lot closer to reality today. When people talk about augmented reality or virtual reality, they so rarely think of anything but the visual, and when they do, they go straight to audio, and those are some of the lower-hanging fruit, a lot easier for computers to do. We still haven’t cracked the nut on creating digital aromas, and even [unintelligible 0:21:58] is still quite rudimentary. It’s just different variations on a vibrating motor.

Michael Wolf: It’s funny that you talk about digital smells; I remember back in the early 2000s at CES people were talking about that, and I thought it was kind of a sign of the impending bubble [laughter]. But what I’ve seen at CES this year, and maybe the last year or two, is people getting back into this idea of trying to crack that nut. Where are we in terms of creating digital olfactory senses, digital smells? Is there some interest in working on that? Is this still a long way off?

Basheer Tome: I think it’s still crazy far off, but I think part of it is they haven’t cracked the nut on actually sensing the different smells, and I think that’s one of the first major components: being able to understand and deconstruct a smell and understand what parts of it make up that actual scent.

I think they can kind of do that through pretty intense laboratory studies, but they haven’t figured out a way to have, similar to how you have a microphone for audio, a digital nose for smells. I think once you get to that point, then you can start working backwards a little bit better and start reproducing those smells. Today there are all these different scents you have to create, and most of the products and demos I’ve seen involve just having a large array of vials and sprayers and then combining a few.

Michael Wolf: There are certainly, I think I’ve seen, some startups, some at CES, talking about or just aspiring to be that digital nose. But the digital nose, as you’re saying, this kind of sensor that can just know what the smells are, that research is still way far off. It’s still in laboratories. It isn’t something you can build into a consumer device at this point.

Basheer Tome: Yeah, and that’s only the first half. You have to be able to sense it, and then you can reproduce it.

Michael Wolf: You said at the beginning of the show you had some ideas or you had some feedback for people who were creating new devices in the kitchen, smart kitchen devices. What are some of the biggest pitfalls you’ve seen around design around today’s current crop of smart kitchen devices?

Basheer Tome: I think a lot of it is not thinking through the way a person would actually use the product, and creating a product that represents bullet points rather than an actual journey or a task a human needs to do while using the product. I think a kitchen timer is a great example of this bifurcation. Before you have a chip in it, it’s round, it’s metal, it sort of hums while it’s running, and you rotate it. As you rotate it, you feel that cranking and you have a visual representation of how much time is left. It’s basically like a pie graph: it moves as time passes, you can see from far away where it’s at, and when it finishes it dings.

Inexplicably when they jumped to digital, they decide, “Screw that! Let’s just do a grid of buttons, 1 through 9, and it’s going to beep every time you hit a button and if you need to restart it, then you have to hold down start stop.”

It’s one of those things where I understand that’s way cheaper. I understand that if you want to buy the $1 kitchen timer, then sure, that’s the cheap one, but you can’t pay good money for a nice digital timer that works aesthetically, that works the way your hands work and the way you’ve been using timers over time. And honestly, a rotary encoder, what you would use to make a digital kitchen timer work rotationally, is not expensive. It’s cents on the dollar, so there’s no real excuse other than the way you approach the problem, the way you think about it.

Michael Wolf: What you’re saying is that when you move from the mechanical, physical world of controls to the digital world, it pays to keep paying homage to the learned behavior we’ve built up over our lifetimes, rather than just jumping into a complete departure. It’s kind of like when I’m reading an eBook on an iPad: even though I’m doing it with my fingers on a screen, they actually try to recreate the visual look and feel of flipping a page. So we need to have some of those hints and kind of pay homage to the physical world?

Basheer Tome: Yeah, in a sense. I feel like homage is a little more ceremonial than I would even go for. I would say you’re throwing away decades of learning; we’ve already figured out a lot of really great ways of doing things, and you could at least try to learn from that and utilize it. I think the toaster is another great example, where for some reason every time they want to add a feature, it’s another button, and they just don’t really think through how a normal person would use it. I guess, to not be overly critical, one of my favorite companies that I really feel gets it a lot of the time is Breville.

I really feel like they actually think about how a normal human would work through these problems and then design around that. As of the last time I checked, they have one of the highest-selling toasters on Amazon, and it’s an order of magnitude more expensive than everyone else’s, and people still buy it. They have all these amazing little things on there that now almost everyone just belligerently copies, but they were the first ones to have a little button on the toaster that says, “A little bit more.” Honestly, does 30 seconds mean anything more or less to you? Not really. It gets the thought across and it really connects with you on a human level.

Michael Wolf: I get that. I think having those fine touches that differentiate, like “a little bit more,” I love that. But when we talk about generational shifts, maybe it’s Millennials or people younger than that who’ve grown up using new technologies. I mean, clearly none of those people know how to use a rotary dial phone. None of us really think about rubbing rocks together to create fire, admittedly an extreme example. But do we need these hints or homages to older ways of doing things as a bridge to get to the new thing, or do you think it’s something you need to keep in perpetuity over time, even into the new generation?

Basheer Tome: I think it’s less about keeping a tradition and more about adapting the technology to work. You take the good parts about what works and you keep those, and then you let go of the parts that don’t work. Another great example of where I see a lot of this bifurcation is in sous vide machines. I love the Joule. The app is beautiful. The people who made it are some of the nicest people in the world, but I just don’t understand why there are no buttons on it. It really basically ‑

Michael Wolf: It was a brave choice. You have to admit it was brave, but I agree with you. I think that was ‑

Basheer Tome: It’s a bold choice.

Michael Wolf: Yeah, bold. For some people, it’s a deal breaker I think.

Basheer Tome: It’s an implicit promise saying that we will update this. We will keep the app updated. We will staff this, and we will be around. Because the minute we stop being around, or the minute we stop staffing this up, your machine won’t really work anymore. Unless you’re going to dedicate a phone and an app and never update them ever again, just to be this walled-off garden that only operates your sous vide machine, you will update, you’ll get a new phone, and your app won’t work anymore. I think in general, when we build apps, it’s kind of like building a sandcastle next to the ocean. Everything is constantly in flux. You always have the waves, and if you don’t keep updating, if you don’t keep repairing, it stops working.

Michael Wolf: Yeah, the great thing about the old world is that if I go into my grandparents’ basement and I find an old clock, if it hasn’t been water damaged or whatever, there’s a good chance it will still work. But if you were to go into your basement 50 years from now and find an old Joule, and the last time they updated the app was in 2025, then this thing is a piece of junk. You can’t use it.

Basheer Tome: Yeah, I think so. To give you a good example that’s not just the overly extreme version of “it’s all analog and steampunk,” I think Anova does a great job of straddling the line between the two, where they have a physical knob, they’ve got a few lightweight buttons on top, and you can use the entire device without ever having to know that there is a phone app, or that it even has Bluetooth or Wi-Fi.

But if you want some of the more advanced controls and you really want it to time itself and you really want to have direct access to recipes and have it automatically pick and choose the times, that’s when you borrow the phone and that’s when you get some of the more complex features, and so you take the simplest most critical aspects of how you use the product and you build it into the hardware, and the more advanced features where it would really be a high cost and low gain to integrate into the hardware, you then offload it into a borrowed screen like your phone. I think that’s absolutely the right way to approach it.

Michael Wolf: Those are actually the two sous vide machines I switch back and forth between, and obviously with the Joule, I have to use the app, right? But when I use the Anova, quite honestly, a lot of times I just plug it in and use the on-device buttons and dial, and I never go to my phone, just because it’s quicker for me. That’s one of the things: in general I love connected devices. I mean, I have been interfacing with my Sonos for the past decade using a great app, and I’ve increasingly started to use my Amazon Alexa with just my voice, so I see that transition from capacitive touch to voice happening for me. But even with something like a sous vide cooker, if there’s an on-device button and a dial, that’s just quicker for me than getting a phone involved.

But I do see the value of the Joule’s guided cooking app, this idea of visual guidance. Using that I think is valuable, so I can see using it. But if you don’t necessarily need that, you still have to use the phone, meaning it’s one extra step.

Basheer Tome: Yeah. I like this idea of having the options, because you’re not purely relying on their good faith and goodwill to keep updating and supporting the hardware, while still allowing them to provide you with some of these more advanced features. But that said, consciously not including those buttons and that interface does allow them to make it a lot smaller and a lot sleeker, and I completely understand why they made the decisions they made. I’m not trying to criticize them specifically; it’s more just my personal opinion.

Michael Wolf: I’ve asked you mainly about virtual reality and augmented reality around eating, but have you given thought to using what you do every day and creating these kinds of controls for cooking? How can we maybe apply virtual reality or augmented reality in some way around making food? Is that something you think about?

Basheer Tome: A bit, yeah. I think there are a lot of obvious potential ways to incorporate augmented reality into cooking where you’re getting tips. It might be live. It might even connect you to another chef or someone remote who might be giving you instructions step by step, or, everyone has this idea, you overlay something onto a cutting board and then it tells you where and how to cut things. But walking back a step further from that, you could use something on our current Daydream platform, where we really hope and are looking to see learning become a major category of applications or experiences you have in virtual reality, because when you have this complete, fully immersive virtual environment, there are a lot of things you can teach someone that are a lot harder to explain without diagrams or really complex text.

Michael Wolf: Some of these devices, like Bosch’s, they have this thing called Mykie. It’s basically a soulful robot that they demoed at CES. It actually projects video onto a surface, and I love this idea of taking video, putting it on a kitchen surface, maybe to instruct yourself how to cook. I always think about Star Wars when I see this stuff, and I always go back to the hologram; that’s the ultimate to me, getting to that hologram phase, that type of projection of video, and maybe we get to 3D video projected into a space without any sort of goggles on. Would you call that virtual reality?

Basheer Tome: Well, I think specifically projecting video would count more as augmented reality. Projected AR, as the trendy way to call it would be. But we do think about that a little bit. I think there’s a whole world of ways you can implement Tango-like tracking into a wide variety of objects. But it still feels to me like it’s a little further off. I think part of it is that my dream kitchen isn’t necessarily windowless; it has a lot of natural light and nice surfaces. That tends to be at odds with projection screens.

Michael Wolf: A lot of your job is just dorks like me saying, “Hey, this is what I want. I want Star Wars. I want to live it.” Do you just kind of have to dial us all back a little bit?

Basheer Tome: No, no. Sometimes I like to come along for the ride. I think one of the best parts about the job is talking to a bunch of people about what they’re excited about and hearing some really cool ideas, so it’s not just me saying no all the time.

Michael Wolf: [laughter] Hey, Basheer, thank you for saying yes to this podcast. I appreciate you coming on. This has been a lot of fun, and I look forward to talking to you soon again.

Basheer Tome: Awesome, thank you!

Image credit: Samsung

February 22, 2017

Google Home Adds Shopping Feature And Partnerships With National Retailers

When Google launched the Google Home voice assistant speaker, they demonstrated their commitment to the smart home and the growing trend of voice control, a trend so far led by Amazon. Taking aim at the Echo market, Google launched Home with a big vision but not as many features as enthusiasts would have liked.

Still, 2017 is the year that “works with” Google Home becomes the new “Alexa skill” and no one is counting Google out of the game. The one area where Amazon previously had an inherent lead over Google, of course, was in its powerful commerce engine. Alexa makes it easy to order items from Amazon and, if you’re a Prime member, have them brought straight to your door in two days or less.

But news last week changed that. Google announced via a blog post that it has partnered with a number of big-name retailers and added the ability to voice shop from them using Google Assistant. Retail Dive has the full story on Google’s new partnerships. With brands like Costco, Walgreens, Whole Foods, Bed Bath & Beyond and even PetSmart, Google is clearly trying to cover its commerce bases with everything from food and grocery to household goods and pharmacy.

Google’s blog post on the announcement gives users a quick walk-through of how to get started. With simple commands like “Ok Google, how do I shop?” Google Assistant will walk users through the process. Or you can just say “Ok Google, order paper towels” and, presumably, it will. It’s less clear exactly whom you’re getting paper towels from, though it’s likely you have to set up store preferences in your Google Home account. You also have to add a credit card on file to be charged automatically upon ordering.

Credit: Google

The blog also says the service is free – for now. After April 30, it appears Google may have plans to create some type of Prime-esque membership for users to be able to take advantage of voice command shopping.

Google’s massive search and data engine gives Google Home inherent long-run advantages over competing voice assistants like Amazon’s Echo. And with this new shopping feature, Google is looking to grab some of Amazon’s native territory as well.

February 14, 2017

Is The Anova Deal The Nest-Google of the Smart Kitchen?

Back in January 2014, I had just caught a ride to the Las Vegas Convention Center for the Consumer Electronics Show when I struck up a conversation with the two men in the back seat of the shuttle. They were executives from Nest, makers of the learning thermostat that had been the talk of the smart home industry for the past year, so I was naturally interested to hear what the company was up to at the big consumer trade show.

While we had a nice conversation, nothing stood out to me when I recalled the encounter a week later other than the two seemed to be in a pretty good mood. The reason I was even thinking about the chance meeting was I had just heard about Google’s acquisition of Nest for $3 billion, a huge sum of money and certainly enough to make any Nest executive happy.

I had similar thoughts a week ago when I first heard about Electrolux’s acquisition of Anova. I had just co-hosted a party with Anova at CES, and while everyone at the mixer had a good time talking smart kitchen with industry colleagues, the only indication from the Anova team that something may be in the works was everyone seemed to be in a good mood.

Last week’s news also got me thinking about other similarities between the two deals.  Much like Nest at the time, Anova was the leading independent startup in a nascent but fast-emerging connected home market, and so their acquisition by a deep-pocketed and established player helps to validate their market just as Google’s acquisition of Nest validated the smart home.

Which naturally leads one to ask, “Does that make the Anova deal the Google-Nest of the smart kitchen?”

The answer to that question is yes…and no. In other words, it’s a bit complicated.

In the way of similarities, both Anova and Nest were experiencing fast growth. Anova saw its sales double year over year in 2016 and was on track to reach a million customers in the first half of 2017, while Nest hit the million-customer mark right around the time of its acquisition.

Both deals also came at a time when awareness of their specific markets – smart home and smart kitchen – was starting to seep into the broader consciousness of the early mass market.

And of course, both made connected products with really high levels of consumer satisfaction.

But there are some big differences, perhaps the biggest of which being the types of companies who acquired them.

Yes, both were deep-pocketed suitors, but Google and Electrolux are very different types of businesses with different motivations. For Google, the core business is data and information. Sure, they have dreams of a growing hardware business, but these efforts, including the more recent Google Home product, are often motivated by a desire to further their ability to gather and distribute information to consumers in new and interesting ways.

As the world’s second-largest home appliance maker, Electrolux’s business – and motivations – are much more transparent: With the acquisition of Anova, they now have a new precision cooking hardware line they can sell. Anova and other early precision cooking companies proved this is a legitimate segment and Electrolux now has the opportunity to enter this market in a big way.

But perhaps the biggest difference between the two acquirers is their platform motivations. Google clearly had platform aspirations with the acquisition of Nest, whose technology it saw as the foundation not only for more of its own products, but as a platform it could offer to the broader industry to build third-party products around. Sure, the long and complicated story of Nest post-acquisition and the rise of newer approaches such as Amazon’s Alexa have changed the calculus a bit for Google and everyone else, but there’s no question that was the original vision.

For Electrolux, it’s clear they envision Anova’s product line as the foundation for more precision cooking and smart cooking products. And as is often the case when an established company buys a fast-growing startup, I could also see them trying to instill Anova’s innovation-centric startup culture and even let their newly acquired team take the lead on some of those efforts. But Anova’s precision cooking products are not a platform in the same way Nest products are a platform, nor were they intended to be, which is fine because Electrolux is not a platform company in the same way Google is a platform company.

There are other important differences. Valuations are much different today than in 2014; hardware startups are not getting the same multiples we saw back then. And while Steve Svajian and Anova’s team are hugely capable, Google paid a premium to get an exec team led by industry legend Tony Fadell, recognized as the father of the iPod, the previous decade’s defining consumer hardware product. And while Anova has certainly filed patents for innovations related to its immersion circulators, Nest’s IP portfolio was fairly broad in the area of the smart home.

In summary, while these deals have some similarities, in the end the acquiring companies had very different visions and motivations. Google’s platform-centric vision of the world meant Nest’s technology would be positioned as a de facto standard around which the industry could choose to coalesce, while Anova’s technology will serve as a platform for a company of one – Electrolux – to launch itself into the smart, precision cooking market.

February 4, 2017

Podcast: Talking Virtual Reality and Food with Google’s Basheer Tome

In this episode, Mike talks with Basheer Tome, a hardware interface designer on Google’s virtual reality team. Basheer is passionate about VR, food and cooking, and product design, so Mike talks to him about all three and how they might intersect.

December 27, 2016

The Year in Food Delivery

Despite a distinct cooling off of investment in the food delivery space this year, some big names like Uber, Google, and David Chang threw their hats in the ring.

That’s because the online food delivery market is estimated at around $210 billion, with companies like FreshDirect raising $189 million in the past 12 months. It’s become such a pervasive part of our way of life that Google even added a food-delivery shortcut to Maps. And there are plenty of food-delivery crowdfunding projects to go around.

But enough with the numbers. Here are the highlights in this space over the past 12 months.

More Big Players Joined the Party

This year everyone wanted a piece of the pie. Google started to ship fresh food to customers in California through Google Express. Instacart and the Food Network launched a meal-kit delivery service, and Square acquired Philadelphia startup Maine Line Delivery to boost Caviar. Meanwhile, Facebook and Foursquare made it easier to order food from within their apps through Delivery.com.

NYC darling chef David Chang decided to blow up the entire idea of a nice restaurant by launching Ando, a restaurant that only does deliveries, and he raised the bar on delivery food everywhere by launching Maple, his own delivery service that promises a daily delicious menu.

Plus, where would the year be without a few gimmicks? Taco Bell and Whole Foods both came up with ChatBots that help you order food or suggest recipes, respectively, solely through the power of emojis. And Domino’s will now let you order pizza with one tap on your Apple Watch.

The Year of UberEats

So far I haven’t mentioned the biggest player, though: Uber. The company has had quite a year in food delivery. It shut down Instant Delivery in New York City, then launched UberEats in both the U.S. and London. Next, UberEats drivers staged protests over changes to the pay structure, and in November a courier filed a lawsuit against the company over missing food delivery tips. Yikes.

All of this commotion from big names, and the turmoil within UberEats, suggests that the food delivery space is still young enough that no one has solved some of its primary problems. Companies are grabbing on to any stronghold they see (emojis! self-driving trucks! drones! more drones!), without regard to the longevity of the solution. Uber has faced the brunt of this fast-paced growth, but we expect to see more struggles in the coming years for other players as well.

Eat Local

This year the quest to eat healthily expanded even more into food delivery. Whole Foods hinted at a “meal solution spectrum” with some sort of delivery component in the future. Good Eggs, which many thought was defunct by this point, rose from the ashes with a $15 million round of funding to help it deliver local, quality food.

And Amazon, never one to be shown up, expanded its Amazon Fresh program to Boston, among other major cities. The difference here is that Boston customers can shop from local markets, a feature that we imagine will be implemented elsewhere if it’s successful in Beantown.

You Say Potato, I Say Share Economy

In such a young and moneyed space, different business models are flying around faster than those drones I mentioned earlier.

Some want to deliver fresh ingredients to customers to help simplify cooking at home. Juicero, for example, delivers prepackaged ingredients for green juice, made in its press that doesn’t even require cleaning. Similarly, Raised Real wants to deliver ingredients for homemade baby food, thereby making it that much easier to make your baby’s food from scratch (sounds ambitious to me).

Speaking of raising babies and tapping new markets, Drizly raised $15 million for its liquor delivery service, among other parts of its ecommerce model. And DoorDash added alcohol to its food delivery options in California (what about the rest of us?!).

Meanwhile Foodhini calls itself a “for profit social enterprise” and delivers ethnic food made by immigrant chefs: Foodhini and the chefs each receive $2.50 from each meal, after costs.

And BringMe wants to out-Uber Uber by combining delivery with the share economy in Fairfax, VA, enlisting regular folks to deliver food as “bringers.” There are already a few models out there like this, such as Favor in Texas and Tennessee, and we expect to see more too.

Of course, while all of these business models are innovative and interesting, none of them beat the ultimate and original delivery food: pizza.
