
The Spoon

Daily news and analysis about the food tech revolution


VR

May 10, 2023

Amazon Now Lets You Buy Physical Goods in Virtual Worlds. Could It Work For Food?

This week, Amazon announced a new platform called Amazon Anywhere that enables the discovery and purchase of physical products from within virtual environments such as virtual and augmented reality and video games.

The platform, which the company showed off through an integration with an augmented reality pet game called Peridot (from the same company that made Pokémon Go), allows customers to buy physical products without leaving the game environment. Game players and VR explorers can see product details, images, availability, Amazon Prime eligibility, price, and estimated delivery date just as they would on Amazon’s website. They tap the “buy” button and check out using their linked Amazon account without leaving the game, and from there, products ship out and can be tracked and managed via the Amazon app or website.
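Amazon hasn’t published a public API for Amazon Anywhere, but the flow described above (product details surfaced in-game, checkout handed off to a linked account) can be sketched roughly like this. Every class and field name below is a hypothetical illustration, not Amazon’s actual SDK:

```python
from dataclasses import dataclass

@dataclass
class ProductListing:
    """Details a player sees in-game, mirroring an Amazon product page."""
    asin: str
    title: str
    price_cents: int
    prime_eligible: bool
    estimated_delivery: str

def purchase_in_game(listing: ProductListing, linked_account: dict) -> dict:
    """Simulate the in-game 'buy' tap: check the linked account and
    return an order record the player can track outside the game."""
    if not linked_account.get("payment_on_file"):
        raise ValueError("Account must have a payment method linked")
    return {
        "asin": listing.asin,
        "status": "ordered",
        "track_via": "Amazon app or website",
        "delivery": listing.estimated_delivery,
    }

order = purchase_in_game(
    ProductListing("B000TEST", "Peridot hoodie", 3999, True, "2 days"),
    {"payment_on_file": True},
)
print(order["status"])  # ordered
```

The point of the sketch is the handoff: the game only surfaces listing data, while fulfillment and tracking stay on Amazon’s side of the linked account.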

Today, in-game and virtual-world purchases are limited to digital goods like currencies or digital characters, but Amazon’s new platform opens up a potentially interesting new way for players to buy physical products. The Peridot demo lets players buy merch like t-shirts, hoodies, phone accessories, and throw pillows with game art on them, but what if shelf-stable food or food-related items were sold from within the virtual environment? Would emerging CPG brands, which often use DTC strategies early on, see this as a potential new channel to market?

While the idea is an intriguing one, the main problem with Amazon’s platform is that it’s Amazon’s platform. Amazon is a relatively expensive place to purchase food, and smaller emerging DTC brands tend to prefer selling on their own websites using white-label e-commerce platforms like Shopify, WooCommerce/WordPress, Magento, and Squarespace until they finally graduate to retail.

However, in-world physical product purchases might get traction with bigger multichannel CPGs. Amazon tried to court big CPG brands early on with its IoT-powered Dash buttons, but eventually abandoned the project in 2019 (though they are still selling a Dash smart shelf). The company also tried to get a return on its massive investment in Alexa through sales of everyday consumables, but the division’s recent struggles show consumers, for the most part, still like to click buttons on a web page or an app to complete a purchase.

Which brings us back to Amazon Anywhere. The use of virtual and augmented worlds will grow over time, meaning Amazon’s early effort to build a platform could pay big dividends in the long run. Brands could tie products to stories or characters through experiences that would be pretty much impossible through more traditional advertising. With in-world purchases, they would be able to convert in an entirely new way.

While it’s too soon to tell if consumers will bite, I have no doubt Amazon will attempt to find out. My guess is we’ll also see other players like Facebook and Microsoft follow Amazon’s lead and build out VR and video game in-world purchase platforms for physical products as well, but for now, it looks like Amazon has got the jump on them.

February 3, 2022

Your Next Job May Be in the Metaverse. Here are Four Ways Web3 Will Change The Way We Work

This week at SimulATE, the first-ever event examining the intersection between food and beverage and Web3, The Spoon hosted a panel on the future of work in the metaverse.

To look at how our jobs may change in a Web3-powered world, we welcomed Amogha Srirangarajan, the CEO of delivery robot startup Carbon Origins, and Barry Herbst, a VP at executive recruiting firm The Elliott Group.

Below are four ways our panelists saw work changing as we enter the era of the metaverse:

The Metaverse Will Become A Way to Do All Forms of Work

Srirangarajan, whose company Carbon Origins hires virtual reality experts to operate their sidewalk robots, said in the long term, the work-from-anywhere nature of the metaverse would open doors for workers who have previously been shut out of the labor market.

“You’re going to have jobs that anybody can do in the metaverse,” said Srirangarajan. “The jobs that we’re creating are for people from all walks of life: moms and dads, students, people with disabilities, people from all around the world. They can choose what time they work, how long they work, who they work with, and what type of jobs they work on.”

Web3 Jobs Will Be Community Driven

According to Herbst, jobs in Web3 will be centered around communities.

“Web2 is all about profit, while Web3 is all about that community,” said Herbst. “Brands coming into this space are going to add value to their consumers in a whole different way, connecting with their consumers, their customers, their patrons.”

Srirangarajan agreed that community is essential and sees the metaverse as a great place to find enthusiastic new talent.

“This is how we found every single one of our Skipsters,” said Srirangarajan, referring to the term the company uses for their remote VR robot operators. “The technology’s there now to support large groups of people coming into this virtual world with affordable hardware, and hanging out and building communities.”

The Metaverse as The New Job Training Ground

According to both panelists, the metaverse will be used to teach us new skills we will need for our jobs.

“Training is why we started using VR at Carbon Origins,” said Srirangarajan. “Training is going to be a big application early on. Training for all sorts of industries, starting with the service industry.”

“I foresee a lot of continued training and development not only in the real world but also with the headset, which will save companies a lot of money,” said Herbst. “Having that experience, whether it’s learning how to clean or cook, or even going on a date, having that shopping experience, eating a meal in the metaverse. Humans are social animals, and while we thrive on having that connection with people face to face, it’s also fun having it virtually, just as we’re doing now.”

Meet The Chief Virtual Officer

According to Herbst, Web3 will push brands to create a new executive job role, the Chief Virtual Officer (CVO), and one of the skills required by the CVO is that of digital real estate manager.

“The chief virtual officer will have to be very fluent first in digital land,” said Herbst. “What are the best in class metaverses to buy digital land?”

Another necessary CVO skill will be an understanding of digital security and the ability to fortify a brand’s metaverse presence.

“How do you secure those assets safely? How do you handle a hardware wallet to ensure that you’re not going to get hacked, or have your assets stolen or phished?”

And finally, the CVO will also be the chief Web3 community officer.

“Managing that community and paying it forward is going to be a really large focus for this chief virtual officer,” said Herbst. “At the end of the day, it’s all about relationships and keeping that community intact.”

You can watch the video from the panel, Jobs in a Web3-Powered Food Industry, moderated by The Spoon’s Ashley Daigneault, below.

SimulATE Summit: Jobs in a Web3-Powered Food Industry

December 30, 2021

CES 2022 Preview: Carbon Origins Wants to Merge Robot Delivery With the Metaverse

If you’re looking to get a fresh start on a new career in 2022, may I suggest a new occupation as a virtual reality robot delivery driver?

Yes, that’s a job – or at least a new gig – being offered by a startup out of Minneapolis called Carbon Origins. The company, which is building a refrigerated sidewalk delivery robot by the name of Skippy, is looking to assemble a roster of remote robot pilots who will utilize virtual reality technology to pilot Skippy around to businesses and consumer homes.

The company, which launched in early 2021 and participated in Techstars Farm to Fork accelerator this year, will be showcasing the new technology at CES 2022 in January. This past summer, the company started testing an early version of the VR-piloted robot in the above-street skyway system around St Paul, Minnesota and plans to begin testing deliveries to offices and homes in the Minneapolis market starting in January.

You can watch a video of the company’s CEO, Amogha Srirangarajan, piloting a prototype of the Skippy robot using a virtual reality headset below. According to Srirangarajan, the robot uses machine vision to navigate the world using a neural network.

Skippy Demo (04/21)

“What you’re seeing now is Skippy’s neural network, detecting and classifying objects, analyzing the sidewalk, and segmenting safe zones for navigation,” explains Srirangarajan in the demo video.

The Skippy operators – which for some reason the company calls “Skipsters” – use virtual reality headsets to supervise and correct the robot as it navigates through the world.

“Remote human operators, who we lovingly call ‘Skipsters,’ use fully immersive virtual reality headsets to monitor and train Skippy’s neural network in real-time,” said Srirangarajan. “Like an augmented reality PacMan game, Skipsters monitor and correct Skippy’s trajectory, giving Skippy the ability to navigate the human world unlike any other robot on the planet.”
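Carbon Origins hasn’t published details of Skippy’s model, but the “segmenting safe zones” step Srirangarajan describes can be illustrated with a toy sketch: given a per-pixel class map from a segmentation network, keep only the classes the robot may drive over. The class IDs and the NumPy approach here are assumptions for illustration:

```python
import numpy as np

def segment_safe_zones(class_map: np.ndarray, safe_classes: set) -> np.ndarray:
    """Given a per-pixel class map from a segmentation network,
    mark which pixels the robot may drive over.

    class_map: HxW array of integer class IDs (e.g. 0=sidewalk, 1=grass,
    2=pedestrian, 3=obstacle); the IDs are illustrative, not Skippy's.
    """
    return np.isin(class_map, list(safe_classes))

# Toy 4x4 "frame": mostly sidewalk (0) with a pedestrian (2) in the middle.
frame = np.array([
    [0, 0, 0, 0],
    [0, 2, 2, 0],
    [0, 2, 2, 0],
    [0, 0, 0, 0],
])
mask = segment_safe_zones(frame, safe_classes={0})
print(mask.sum())  # 12 safe pixels out of 16
```

A remote operator correcting the robot would, in effect, be relabeling pixels like these in real time, feeding the corrections back as training data.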

The company emailed me and asked if I wanted to try piloting a Skippy while at CES next week, and, of course, I said yes. If you want to become an, um, Skipster too, you can visit the company’s booth or fill out an application to become a driver here.

October 2, 2018

The Many Ways VR is Changing How We Eat

It would be easy to dismiss virtual reality (VR) as something that won’t impact what and how we eat, especially since, you know, virtual eating won’t fill a physical stomach. But there’s actually a lot going on in VR that is transforming our relationship with food.

I started thinking about this after reading a study in Food Navigator about how VR can provide sensory context for food taste testing. The idea behind the study is that where we consume food can have an impact on how we perceive it. Eaters can perceive food served in a Michelin-starred restaurant differently than the same food served at a suburban mall food court.

As Food Navigator writes, in this VR study conducted at Cornell, participants were given identical samples of blue cheese that they tasted in three virtual settings: a lab, a park bench, and a dairy barn. Participants rated the cheese as more pungent in the barn setting than in the other settings.

This means that food companies may be able to construct virtual testing environments to see how consumers react to products in different settings, rather than having to build out or relocate tests to different real-world settings. They could test whether a frozen lasagne tastes better at home or in an office lunchroom, to better understand consumers and market accordingly.
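As a toy illustration of how such a virtual taste test might be analyzed, here is a sketch with made-up ratings that mimic the barn effect the Cornell study reported; the numbers are invented, not the study’s data:

```python
from statistics import mean

# Hypothetical pungency ratings (1-10) for identical blue cheese samples,
# grouped by the virtual setting in which participants tasted them.
ratings = {
    "lab":        [4, 5, 4, 5, 4],
    "park bench": [5, 5, 6, 4, 5],
    "dairy barn": [7, 6, 7, 8, 6],
}

# Compare mean perceived pungency across virtual settings.
means = {setting: mean(rs) for setting, rs in ratings.items()}
most_pungent = max(means, key=means.get)
print(most_pungent, round(means[most_pungent], 1))  # dairy barn 6.8
```

The same tasting panel can be run in any number of virtual settings without moving anyone, which is the study’s practical appeal for product testing.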

Cornell researchers could take this sensory study up a notch with Givaudan’s virtual smell technology. We tried it out at the Food IT conference earlier this year and were amazed at how it recreated virtual smells like strawberry and banana as we made virtual smoothies in a blender, or virtual garlic and onions on our steak.

But in addition to tricking our senses, virtual reality is also poised to make real changes up and down the food stack.

There are already plenty of restaurants that will dazzle you with a VR experience, and some are even using VR to better train restaurant employees. Companies like Kabaq are laying the groundwork for augmented reality in restaurants for food selection and virtual reality for digital food courts where people peruse and order food for delivery.

Virtual reality is also being used to make training robots easier, so that someday we won’t just order food for delivery from our favorite restaurant; we’ll recreate that same food in our homes. As we’ve written before, it’s possible that world-class chefs could use VR to train robots with precise movements. Someday not too far off, the training software for these robots could be downloaded to your home cooking robot so you could have a virtual Thomas Keller make your dinner.

On the farm, companies like AI Reverie are using virtual reality to create synthetic data that can be used to train artificial intelligence. Robot weed killers could be trained to spot specific types of weeds in a virtual setting, which means the robot could learn how that weed looks in all different kinds of lighting and weather conditions, without needing to wait for those conditions to occur in the real world.
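The appeal of synthetic data is that conditions can be enumerated on demand rather than waited for. A minimal sketch, with invented weed types and conditions (not AI Reverie’s actual pipeline):

```python
import itertools

# Virtual scenes can be re-rendered under every combination of conditions,
# yielding labeled training images no field shoot could collect on demand.
weeds = ["thistle", "bindweed", "pigweed"]
lighting = ["dawn", "noon", "overcast", "dusk"]
weather = ["dry", "rain", "fog"]

synthetic_set = [
    {"weed": w, "lighting": light, "weather": wx}
    for w, light, wx in itertools.product(weeds, lighting, weather)
]
print(len(synthetic_set))  # 36 labeled variations from 3 weed types
```

Each entry stands in for a rendered, automatically labeled image; in a real pipeline the renderer would produce pixels and segmentation masks for each combination.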

VR has been overhyped before, so most people are probably wary of its true promise at this point. But the technology is making small in-roads right now and will have a very real impact in the near future.

September 20, 2018

Kabaq Wants to Create the World’s First Virtual Reality Food Court

There’s no denying that we live in an age of curated images, where every Instagram photo is cropped, edited, and put through a filter before it’s sent into the stratosphere.

Kabaq, one of the 13 companies pitching at the Startup Showcase for the Smart Kitchen Summit (SKS) this October, is leveraging virtual and augmented reality (VR and AR) to create a more immersive food experience for our image-obsessed society. Thus far, the company has worked with Magnolia Bakery to help engaged couples see their wedding cake before the big day, and also led an AR campaign to let Snapchat users play with 3D-visualized pizzas.

But Kabaq founder Alper Guler has much grander ambitions for his company. Read our Q&A with Guler to get a better picture (pun intended) of their vision for a future in which our food choices are guided by VR and AR.

This interview has been edited for clarity. 

The Spoon: First things first: give us your 15-second elevator pitch.
Kabaq: As Kabaq, we create the most lifelike 3D models of food in the world through AR/VR. Our main goal is to help customers to decide what to eat, while at the same time helping restaurants to push premium items and tell stories about their food.

What inspired you to start Kabaq?
The era of Instagram, Facebook and Snapchat has changed what and how we eat at restaurants. Today food isn’t just about taste; it’s also about the visual experience. Now social platforms and smartphone manufacturers have created this shift in food, investing and pushing heavily in immersive technologies like augmented and virtual reality. These two emerging trends inspired us to bring Kabaq to life.

What’s the most challenging part of getting a food tech startup off the ground?
Food and technology are connected to dining experiences more than ever. Technology is improving our experience of how we grow, source, discover and order food. But adoption of new technology has been slow, and we are experiencing a relatively slow response from the market.

How will Kabaq change the day-to-day life of its users?
In the future I believe smart glasses will replace smartphones. Everybody will use these smart glasses to engage with digital experiences around them. Imagine you are in this restaurant, using your augmented reality glasses: you can see the whole menu on your table virtually, and even order through your glasses. Don’t worry about the check — it is already paid through your glasses.

VR can also change how we order delivery. Think about how we used to connect to the internet through dial-up modems. We needed to disconnect from the internet to call and order food for pick-up. Then, companies like Seamless created platforms to order food online. With mobile phones and location-based services like UberEats, the experience became even more smooth.

In the near future I believe when you are connected to VR, you will also order your food in VR. We will create the world’s first virtual food court for people to visit through VR and order directly through the same system.

What’s next for Kabaq?
We are creating beneficial use-cases for using AR in-restaurants, delivery apps, marketing, catering and cookbooks. We’re working to bring AR to all aspects of food — and soon.

—

Thanks, Alper! Get your tickets to SKS to hear him pitch alongside 12 emerging food tech companies at our Startup Showcase and get a taste of how Kabaq applies VR to food.

July 16, 2018

How Will AR and VR Change the Way We Eat? Jenny Dorsey Has Some Thoughts

Part chef, part entrepreneur, all innovator, Jenny Dorsey has become the go-to expert on the intersection of augmented and virtual reality and food. When Smart Kitchen Summit founder Michael Wolf spoke with her on our podcast last year, he called her “the foremost authority on the nexus point between AR/VR and food.”

So of course we invited Dorsey to speak about it on stage at SKS. To whet your palate, we asked her a few questions to discover more about what exactly we have to look forward to in our culinary future — virtual and otherwise.

Want to learn more? Make sure to get your tickets to SKS on October 8-9th to see Jenny Dorsey talk about how augmented and virtual reality will change the way we eat.

This interview has been edited for clarity and content.

Q: What drew you to explore AR and VR through food, something seemingly very separate and disconnected?
A: It is the strangest story. I went to acupuncture in the spring of 2017 totally confused about what I wanted to do with my life and art. I had this random idea pop into my head at acupuncture that I should focus on AR and VR…which I literally knew nothing about. I went home to my husband and he just said, “Okay, I support you…but I don’t know what you’re talking about.”

Fast forward a year, and I’ve been experimenting with different ways to merge these various things together. I’ve learned a lot about what doesn’t work (eating with headsets on) and what makes people prone to distraction (AR apps), but I also found some pretty awesome ways to communicate and strengthen my food through AR/VR. For instance, I hosted a tasting event in Nicaragua where we profiled three different types of Nicaraguan agricultural staples using 360° video, then served guests both headsets and the final tasting menu after they watched — and learned — the seed-to-harvest process of these ingredients. It was really educational, fun (for many, it was their first time in VR!) and the process added some extra meaning to the food and drink we prepared.

Our next big thing is a series called “Asian in America”, which explores the Asian American identity through a symbolic meal, paired with a stroke-by-stroke Tilt Brush recreation of each dish for viewers to watch, while listening to the symbolic explanations, before eating. (You can see more about both of those events over at Studio ATAO.)

Q: Tell us about your experimental pop-up series, Wednesdays.
A: Wednesdays started in January 2014 as a personal creative outlet while I was working in a restaurant and feeling pretty burned out. At the time, my then-boyfriend, now-husband was still in business school (where we met) and I remember us commiserating on how hard it was to get to really know people around us. He was interested in making cocktails, and we thought: why don’t we host a dinner party? We wanted to create an environment where people would be comfortable enough to be themselves and be vulnerable around others.

We hosted a beta-series of dinners with friends for the first month, then we started getting strangers coming to the table to eat, which prompted us to say “Hey, maybe we are onto something”. Fast forward 4 ½ years and we’ve hosted hundreds of dinners for thousands of guests across New York City and San Francisco, been written up in many major food media outlets, and usually sell out in 30 minutes or less!

We aren’t your average dinner party — we do ask a lot from our guests. There are mandatory questions to answer before you even purchase your ticket (everything from “What’s your biggest failure and how has it motivated you?” to “Are you in the job you want? If not, how are you getting there?”), lots of bizarre things to eat and drink when you arrive (like bugs!) and direct, in-your-face realness from me, my husband and our team. There’s no small talk. It’s not for everyone, but for the people who follow us I think it’s really what they are looking for.

Q: What’s the coolest/craziest way you’ve seen technology changing the food system? Blow our mind!
A: I’m currently very interested in how blockchain could help the food system. Seeds & Chips just put out a call for blockchain influencing the egg supply chain, so I’m really excited to see what different companies come up with. I also spent some time at a winery last year and was amazed to see they have drones which tell them literally when and which plot of vineyards to pick for a certain Brix (sugar) count in that specific grape. That sort of detailed information would’ve taken constant field-walks to ascertain years ago.

There’s also technology that will calculate exactly how much food waste your restaurant generates in a week/month/year, AND a system that will turn that waste into compost. While technology has done a lot in terms of streamlining of our food system, I’m still waiting for it to solve some of the biggest issues we face today: a living wage, worker rights, consistency and training, preventing food waste, educating consumers, etc. — pieces that require more politics and facetime. Overall, we still have lots of work to do!

Q: How do you see AR/VR — and technology in general — shaping the future of food?
A: I still stand by the major points in my TechCrunch article from late last year. I think the biggest areas of impact will be food products (CPG) and how they are marketed — both experientially (through VR), but also packaging (through AR).

In terms of restaurants, I just wrote a piece about VR training, which I do think will be a fantastic and hugely influential piece of the technology — but it really needs to come down in price point first.

Overall, I think artists and creators are still getting acclimated to how this technology works and what they can do with it. I hope to see AR/VR become almost an expected point of interaction or engagement between a food business (product, service or restaurant) and the customer as we continue finding artistry in it.

Q: What’s your desert island food or dish?
A: I feel I should say something cold, because I would be hot, but most likely I would be craving pho. LOL!

June 26, 2018

Smell-O-Vision Meets VR with Givaudan’s Technology

Sure, you may have walked, flown, or even blasted aliens in a virtual world — but did you ever stop to smell the virtual roses?

With Givaudan‘s technology you can smell not only the roses but a variety of other scents in a virtual kitchen.

Here at the Food IT conference presented by The Mixing Bowl in San Francisco, Givaudan had their virtual-smells-in-a-box on display. Being the intrepid reporters we are, we strapped on a VR headset, grabbed a hand controller and stuck our nose in a scent emitter to smell bananas and strawberries as we made virtual smoothies, as well as (very strong) garlic and onions for our virtual steaks.

This tech will be making its way to consumers at some point, so if you plan on gaming in the near future, you’ll be able to smell gunfire or smoke as you wander the apocalyptic wasteland, or, more appropriately, the enticing aromas of cooked steak in The Legend of Zelda.

Check out the sights of virtual smells in our video below:

Smell-O-Vision Meets Virtual Reality with Givaudan from The Spoon on Vimeo.

June 25, 2018

Podcast: Covariant.ai’s Pieter Abbeel on Using VR to Train Chef Robots

Pieter Abbeel is a professor of robotics and AI at UC Berkeley, startup advisor, co-founder of Covariant.ai and this week’s guest on The Automat.

Covariant’s (formerly Embodied Intelligence) technology gets pretty futuristic pretty quick, so stay with us. Covariant is developing a way for people to train robots using virtual reality. While this approach is new (Covariant just launched last November), as Abbeel explains, it means that someday in the not-all-that-distant future, a Michelin-starred chef will be able to use VR to train a robot how to cook. All the nuances and technique of that chef will be captured and, even better, software will record it all, so you could basically “download” that chef to a cooking robot in your home.
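Covariant hasn’t published its training stack, but the core idea Abbeel describes (record expert demonstrations, then fit a policy that imitates them) is behavior cloning, which can be sketched in a toy one-dimensional form. The states, actions, and linear policy below are invented for illustration:

```python
import numpy as np

# Demonstrations recorded in VR: each row pairs an observed state
# (imagine pan temperature or ingredient position) with the expert's action.
states  = np.array([[0.0], [0.5], [1.0], [1.5]])   # toy 1-D state
actions = np.array([0.0, 1.0, 2.0, 3.0])           # expert action = 2 * state

# Behavior cloning: fit a policy mapping states to expert actions
# (here, least-squares fit of a linear policy with a bias term).
X = np.hstack([states, np.ones_like(states)])
w, *_ = np.linalg.lstsq(X, actions, rcond=None)

def policy(state: float) -> float:
    """The 'downloaded' policy: imitate the demonstrator."""
    return w[0] * state + w[1]

print(round(policy(2.0), 2))  # 4.0, extrapolating the demonstrated skill
```

A real kitchen robot would use a far richer state space and a neural network policy, but the demonstrate-then-imitate loop is the same shape.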

Abbeel was a great guest, and we covered a ton of topics including the importance of good data, advances in computer vision, and the mismatch between what people think robots are good at and what they are actually good at.

If you like robots, and who doesn’t, be sure to subscribe to The Automat for our weekly discussions about food-related robots and AI.

May 27, 2018

Podcast: Using VR to Train AI to Kill Weeds (Among Other Things)

Using virtual reality to train artificial intelligence to better interact with the real world almost sounds like what you’d get if Inception and Westworld had a synthetic baby. But there’s no deep mystery or ambiguous ending here; it’s the work of a company called AI.Reverie, and it has applications in agriculture technology.

This week on The Automat (our weekly podcast about food-related robots and AI), we sit down with Daeil Kim, Founder and CEO of AI.Reverie to talk about the power of virtual worlds for training AI, applications for the technology in agriculture (weed killing robots!), and how it can help identify and sort food in the supply chain.

It’s a fascinating deep dive into really the cutting edge of both AI and VR. Listen to it here and subscribe to all our Spoon Feed podcasts in iTunes.

February 2, 2018

Podcast: A Chef’s Journey To The Intersection Of Virtual/Augmented Reality & Food

Ever since I saw Chewie and C-3PO playing hologram chess in Star Wars as a kid, I’ve been intrigued by the idea of creating virtual images and worlds.

A generation later, I’m more fascinated than ever by what we now call augmented and virtual reality. I’m especially intrigued by where these new technologies intersect with food, and a week doesn’t go by without reading about an innovator creating a new way to enhance the shopping, restaurant or cooking experience with AR or VR.

Another person excited about this fast-growing space is Jenny Dorsey. A year ago, the professional chef had an epiphany: she needed to become the foremost authority on the nexus point between AR/VR and food.

On the podcast, I catch up with Jenny to hear how her journey to become the go-to expert in this exciting area is going and learn about some new and interesting ways that augmented and virtual reality are changing food.

You can listen to the podcast below, download here or find it on Apple podcasts.

March 3, 2017

Full Transcript: Talking VR, Virtual Eating & Smart Design With Google’s Basheer Tome

Last month I talked with Basheer Tome, the lead designer for Google’s virtual reality hardware. We covered a lot of ground, from how virtual reality will change how we live in the home to virtual eating to smart design and new interfaces.

You can listen to the podcast here or read the full transcript of this very informative conversation below.

The conversation was edited slightly for readability.

Michael Wolf: I don’t think you’re the only one at Google on their virtual reality side of things – but you’re a hardware interface designer for virtual reality with Google. How are you doing, Basheer?

Basheer Tome: Good, good. I’m actually the only one by title…

Michael Wolf: You are?

Basheer Tome: [laughter] Yeah. I’m trying to make this a thing. There’s a sort of unproven space between industrial design and user experience design that generally gets covered by one or the other. There are the actual designers who care about how a product looks and how it has its own brand and aesthetics, and there are the user experience designers who are worried about what buttons are on it, how those work and function, and how people use the product. But very often one or the other has to do both of those jobs when it comes to the actual buttons on a piece of hardware.

Usually it ends up getting skewed one way or the other: you get something beautiful that’s hard to use, or you get something easy to use that’s not quite so appealing. I try to sit in between the two and help connect them together, and I think a lot about how something feels in your hand, how good it clicks, how easy it is to use, and how you can remember where all the things are and how they work.

I think part of the interest on the food side for me is that for some reason all these kitchen manufacturers, the minute they put a chip on it, they forget everything they’ve ever learned and just go nuts.

Michael Wolf: [laughter] I will definitely dive into your thoughts on some of the products being designed for the kitchen because there are a lot of different attempts, a lot of different things going on. But let’s talk a little bit more about what you’re doing every day at Google. Google, as many people know, is investing heavily in virtual reality, but based on our earlier conversation, you’re trying to make it more of a platform for something other than just gaming. A lot of people think of virtual reality in the context of playing a really immersive video game, but you’re trying to make it more broadly applicable to our lives in a lot of different respects.

Basheer Tome: Yeah. We really see virtual reality and augmented reality – and there’s a third variant, mixed reality, which is more mediated – we really see these spatial computing platforms as the next version of computing as we know it. It may not take over 100 percent of the way you interact with computers, but we do see it as a major step and a major piece going forward. For that to be true, it has to break away from being just about gaming. I think there are a lot of really great gaming experiences you can have, and experiences I personally have already had, but it really comes down to taking this thing that people tend to enjoy on the gaming side and opening it up to everyone else. We do that by creating apps that are actually relevant to them, that help solve needs and problems, but we also think about how to bring down the cost to make it more accessible, and we work with other manufacturers and partners so that there’s a wide variety of platforms with different abilities that market to those people. We’re really trying to turn this into a broader, generalized platform, more than just a gaming rig.

Michael Wolf: For people who aren’t into this idea of virtual reality, augmented reality, mixed reality, very briefly explain what the differences are between those three.

Basheer Tome: Sure. Virtual reality is where you’re putting a headset on your face made out of cardboard, plastic, or fabric, and there’s a screen in there. We’re tracking where it is, either rotationally or, in an ideal world, freely in space, and it really feels like you’re replacing your entire reality with a virtual one. You’re interacting with that, and that can happen over digital space so you can interact with other friends. But for the most part, you’re taken to a different place.

With augmented reality, you stay in the same place and we’re actually overlaying information on top of that. What that generally means is that you have a transparent display rather than something opaque and ‑

Michael Wolf: Like Google Glass?

Basheer Tome: No. I think that's one of the biggest mischaracterizations of Google Glass, because Google Glass never put stuff directly between the majority of your vision and what you're looking at. Google Glass is what we call a heads-up display, which stays in the corner of your eye. It's supposed to be a more comfortable, easier-to-use notification center and image-capturing device rather than actually augmenting everything you see.

Mixed reality is more of a hybrid, in-between step. Right now with augmented reality, one of the biggest drawbacks is that you overlay light on top of what you're seeing, so a lot of times you can't really replace what you see – you can only add stuff on top. Mixed or mediated reality means you actually have opaque pixels and can replace things you see in your vision. Oftentimes nowadays that means using a screen with a camera that shows you what's going on outside, and then you're editing that video live rather than purely overlaying on top.

Michael Wolf: Some of the earliest instantiations of augmented reality I remember were on Android phones: you would have a smartphone, move it around, and through the camera it would overlay information on top of the world. Is that basically the most common form of augmented reality today – using some sort of smartphone app?

Basheer Tome: Yeah, I think nowadays that's definitely the most common variety of it. One of the biggest drawbacks it has right now is that oftentimes it doesn't really have a whole lot of awareness of what's actually in your world. It knows generally what you're looking at and that you're rotating the phone, and it can put stuff on top. I think Pokémon Go is a great example of that, where you see the camera feed behind and the Pokémon sort of sitting there, and they might try some clever stuff with the Pokémon lying on the ground. But for the most part, it doesn't know if there's a chair there, or a door. It's not as smart as it could be, and when people talk about the future of augmented reality, it's much more in-depth and much more involved. That's part of why we haven't really gotten there yet, because you actually have to start scanning the world, tracking the world, and knowing where every single object is in free space.

Michael Wolf: In Pokémon Go, correct me if I'm wrong, I think it basically takes GPS data to give it a basic understanding of the world, but it hasn't really gone through the physical world and created this really nuanced, processed database of all the spaces. Am I right on that?

Basheer Tome: Yeah, that’s correct. I think in a true augmented reality world like the way people use the term, if you were looking at a Pokémon standing in the parking lot, then someone else could be looking at it in the exact same space, it in the exact same position and you’d be both looking at the same one. Whereas today with the game, you’re generally in the same location, you end up finding the same Pokémon but they might not be in the same spot.

For Pokémon Go, it’s still a great fun experience and so I think finding and intuiting the fidelity of games and applications to the fidelity of the actual experience you are able to bring I think can help still make compelling experiences. You don’t have to have like a fully augmented reality world for you to have a great time.

Michael Wolf: Looking at the capabilities we have today in augmented reality, I want to talk a little bit about how you could apply this type of technology within our homes to make them a richer experience. I can certainly see how it would be valuable for people who have mobility issues, or when it's dark and they can't see – they could maybe use augmented reality to identify things in their surroundings. Do you see huge potential for using augmented reality, or even virtual reality, in the home context?

Basheer Tome: Yeah, I think there's definitely a crazy number of possibilities for that, and the vast majority of them are things we can't even really think of until the technology is out there. I work on the input teams specifically, and we all report up to this broader virtual, augmented, and mediated reality group. It's a big combined department.

Michael Wolf: When you say input, you mean taking in information from the world around you to process?

Basheer Tome: No, it’s less on the sensing side and more on how a person interacts with this device.

Michael Wolf: Got it, okay.

Basheer Tome: Yeah, much more design-centric and people-centric than technology-centric, so we actually interact with the technology teams a lot, and we work with them to think of different ways people can use this technology and integrate it into their daily lives. I think Tango especially has gotten me pretty excited about all of these possibilities and how fast they will be coming. There are already multiple things coming down the pipe, and we're really looking at a broader adoption of the technology.

Michael Wolf: For the audience, very quickly explain the Tango concept.

Basheer Tome: Sorry. Tango is our first-party augmented reality tracking system that uses a camera and a few other relatively cheap extra sensors to reconstruct your world and figure out where you are absolutely positioned relative to the entire Earth, which is crazy and awesome. It can figure out not just where you are in the world generally, but that you're in this room, in this exact spot, looking at this door that it has seen and scanned before.

Michael Wolf: For each person’s reality in their space, their life space that they’re moving around in, it creates its own unique personalized database, it’s going to know my home, it’s scanned it before. Then from there, it can add richness to that. It can maybe add information and interactivity?

Basheer Tome: Yeah. I think what's unique about it is that, yeah, it can keep personalized data about you and you can save your own data, but what's really unique is that it has a recollection of, say, your desk. It knows where it is and that you've looked at it, so if you had placed a box on your desk, your friend could come over, open up their phone, boot up Tango, start an app, and also see that same box in the same place.

Basheer Tome: Yeah. It’s really starting to catalog and connect all these things in real life, in real space.

Michael Wolf: So I don’t have to actually buy my wife like a real gift. I can just create a virtual gift and leave it there for her?

Basheer Tome: [laughter] You can let her know that she has a real gift.

Michael Wolf: [laughter] I think it's probably a good idea to keep buying stuff. That's the general rule. Yeah, my wife probably won't like my virtual reality gear, in part because – I don't think she likes the idea of virtual reality, and that's another topic entirely.

You know when you go to Universal Studios, all the rides now basically have you moving around in a chair while you're looking at a screen, and it creates the illusion of moving fast through space – it's almost like a 3D experience. She doesn't like that. She doesn't like watching 3D movies, so she's really scared of this idea of virtual reality. Is that something you guys are trying to work on, so people aren't getting that weird sense of disorientation when they're in these 3D environments?

Basheer Tome: Yeah. That’s definitely one of our biggest things we’re trying to help fix. I think in particular virtual reality these days seems pretty in opposition to a lot of women’s sensibilities, and a lot of this is because it’s this big expensive annoying thing, and I think in general women they’d much rather not deal with useless stuff.

Michael Wolf: By the way, have you seen those Samsung advertisements where they show a person in the middle of a room with a big mask on and a bunch of people around them, looking at them? Those are just terrible commercials because they look so awkward [laughter].

Basheer Tome: We're actually huge fans of their product. I think they got a lot of things right really early on – and that's not Google forcing me to say that. I think it actually is a pretty great product.

Michael Wolf: I think this idea of a person sitting there with the headset on in the middle of a room with people looking at them just seems like an awkward social situation for a lot of people, and maybe even more so for a woman like my wife.

Basheer Tome: Yeah, I think one of the biggest drawbacks of a lot of virtual reality marketing today is that it's aimed at gamers. It's aimed at dudes, for the most part: it's black, it's big, it has all these big cables and crazy-high setup costs, and it's just a lot of hassle for not a lot of gain. We're really trying to fix a lot of that and open it up to a lot more people.

Michael Wolf: Let's talk a little bit more about some of the applications we can use with these platforms. One of those, I think, is eating. I know that's a particular passion for you. Talk a little bit about where we are on that, because to some people this seems weird, unfathomable – you could never create a virtual eating experience. Yet some people are actually working heavily on this, because there seem to be a lot of applications, like weight loss, etc.

Basheer Tome: Yeah. I think there are a lot of interesting possibilities there. I just want to be clear that, as of today, the visual of having this big black box strapped to your face while getting chili all over your shirt as you're trying to eat is pretty dystopic and comical. But there are a lot of interesting possibilities for sure. There are a lot of interesting studies about using audio to manipulate your sensations a bit while you're eating, and a lot about color and visual information.

I think the weight loss angle could get there – I'm a little less bullish on that – but there are a lot of interesting use cases, and I can try to dive into a few of those. One of my favorite examples for sure is Heston Blumenthal's dish, the Sound of the Sea.

Michael Wolf: Yeah, so explain that for people.

Basheer Tome: Yeah, I think he made it around 2010 at The Fat Duck. It's this super ocean-y dish where you've got scallops, sea foam, and flowers, and it looks like a beautiful ocean wave on the beach. He was really fascinated with trying to enhance the sea and seafood taste you get there. I think he ran across a research paper that said that if you listen to the sound of waves, or eat at the beach or by the ocean, you get that sensation a lot more strongly – you taste the ocean more strongly if you can hear those waves.

What he actually did is serve the dish with a conch next to it, and inside the conch is an iPod playing the sounds of waves, with two little earbuds sticking out. They instruct you to place them in your ears and then eat the dish, and it puts you in the right time and the right place and really opens up your senses.

Michael Wolf: For people who don't know Heston Blumenthal, he's basically a celebrity chef – chef-owner of The Fat Duck, and probably one of the most famous chefs in the world. So what you're saying is he's also a pioneer, in a way, in virtual eating?

Basheer Tome: Yeah, I guess it depends on how you define the virtual aspect of it.

Michael Wolf: Yeah, yeah, or it’s multisensory cooking or another term.

Basheer Tome: Yeah, yeah. Another term I've seen is cross-modal, where you're trying to use sound or some other non-taste sense to augment your taste.

Michael Wolf: You talked a little bit about using sound. There's obviously this strong olfactory sense, the sense of smell. There's also been research into virtually recreating taste sensations using weird contraptions that send electrical impulses to your tongue. Do you follow all this stuff on the cutting edge of maybe even manipulating your taste senses?

Basheer Tome: I'm really skeptical about where they're at with some of that stuff. I work a lot in haptics, thinking about how you can use vibrations and different types of stimuli to create different sensations in virtual reality, and there are these electrical pads that you stick to your arm or different parts of your body, and you can send electrical current into your muscle.

I've seen a lot of that stuff, and even though it's marketed as feeling like your arm just moved, it feels a lot more like a reflex than like your body intentionally moving it. It gets really stingy and tingly. I'm skeptical that if you put it on your tongue or your throat, it would really feel just like eating a steak.

Michael Wolf: But this is an area you've taken a lot of interest in, so talk briefly about how you think about it more broadly. You talked about Heston Blumenthal, but it's an area you have a personal interest in. How do you think about this idea of combining augmented or virtual reality with food in some way? What are some of the possibilities in the future?

Basheer Tome: Some fun little examples: which frequencies you hear while you're crunching or eating, especially as it relates to texture, have a huge effect on that sense of texture. There are fun tricks you can try at home. If you put on some really good noise-cancelling headphones – some nice Bose ones or whatever brand you prefer – and try to eat some potato chips, they don't feel as crunchy, because you don't hear those high-frequency sounds. Similarly, if you keep the noise-cancelling headphones on, put your hands on a chalkboard, and slide them across, it feels a lot smoother, because a lot of your sensory information for those higher frequencies comes through your ears rather than through your fingers. Your fingers detect more of the lower-frequency vibrations.

Michael Wolf: Is it less horrible if you're scratching it with your fingernails with those on?

Basheer Tome: Actually, yeah. It is less horrible if you wear the noise-cancelling headphones – it feels a lot smoother. There's a guy, Charles Spence, who's done a lot of these experiments and published them. There's a podcast I love, Gastropod, from Cynthia and Nicola, and they had a whole episode where they interviewed Charles – I think it's called Crunch, Crackle, and Pop. It's fascinating. There's a lot of really interesting work there on just the audio component.

Michael Wolf: Because we don't think about the separation of all these senses, and what you're saying is the experience completely changes if you manipulate one part of it. Maybe it's the hearing: you put on noise-cancelling headphones, and something that would be entirely horrible, like running your fingernails down a chalkboard – you're saying it changes the nature of that. That idea is maybe applicable to a lot of different things.

Basheer Tome: Yeah, yeah. I think the general concept of replacing your senses with digital ones is a little more far off, but the concept of augmenting them is a lot closer to reality today. When people talk about augmented reality or virtual reality, they so rarely think of anything but the visual, and when they do, they go straight to audio – those are the lower-hanging fruit and a lot easier for computers to do. We still haven't cracked the nut on creating digital aromas, and even haptics is still quite rudimentary. It's just different variations on a vibrating motor.

Michael Wolf: It's funny you talk about digital smells – I remember back in the early 2000s at CES, people were talking about that, and I thought it was kind of a sign of the impending bubble [laughter]. But what I've seen at CES this year, and maybe the last year or two, is people getting back into this idea of trying to crack that nut. Where are we in terms of creating digital olfactory senses, digital smells? Is there some interesting work going on there, or is this still a long way off?

Basheer Tome: I think it's still crazy far off, but part of it is they haven't cracked the nut on actually sensing different smells. One of the first major components is being able to understand and deconstruct a smell and know what parts of it make up that actual scent.

I think they kind of do that through pretty intense laboratory studies, but they haven't figured out the equivalent of what a microphone is for audio – you don't really have a digital nose for smells. Once you get to that point, you can start working backwards a little better and start reproducing those smells. Today, most of the products and demos I've seen involve having a large array of vials and sprayers and then combining a few.

Michael Wolf: There are certainly some startups – I think I've seen some at CES – talking about or aspiring to be that digital nose. But the digital nose, as you're saying, this sensor that can just know what smells are, that research is still way far off. It's still in laboratories; it isn't something you can put in a consumer device at this point.

Basheer Tome: Yeah, and that's only the first half. You have to be able to sense it, and then you can reproduce it.

Michael Wolf: You said at the beginning of the show that you had some feedback for people who are creating new smart kitchen devices. What are some of the biggest design pitfalls you've seen in today's crop of smart kitchen devices?

Basheer Tome: A lot of it is not thinking through the way a person would actually use the product – creating a product that represents a list of bullet points rather than an actual journey or task a human needs to do while using it. I think the kitchen timer is a great example of this bifurcation. Before you put a chip in it, it's round, it's metal. It hums while it's running, and you rotate it. As you rotate it, you feel that cranking, and you have a visual representation of how much time is left – it's basically a pie graph. It ticks as it moves, you can see from far away where it's at, and when it finishes, it dings.

Inexplicably, when they jumped to digital, they decided, "Screw that! Let's just do a grid of buttons, 1 through 9, and it's going to beep every time you hit a button, and if you need to restart it, you have to hold down start/stop."

It's one of those things where I understand that's way cheaper. I understand that if you want to buy the $1 kitchen timer, sure, that's the cheap one. But you can't pay good money for a nice digital timer that works aesthetically, that works the way your hands work and the way you've been using timers over time. And honestly, a rotary encoder – what you would use to make a digital kitchen timer work rotationally – is not expensive. It's cents on the dollar, so there's no real excuse other than the way you approach the problem, the way you think about it.

Michael Wolf: What you're saying is, when you move from the mechanical, physical world of controls to the digital world, it pays to keep paying homage to the learned behavior we've built up over our lifetimes, not just jump into a complete departure. It's kind of like when I'm reading an eBook on an iPad: even though I'm doing it with my fingers on a screen, they actually try to recreate the visual look of flipping a page. So we need to have some of those nods that pay homage to the physical world?

Basheer Tome: Yeah, in a sense. I feel like "homage" is a little too ceremonial – I wouldn't even go that far. I would say you're throwing away decades of learning. We've already figured out a lot of really great ways of doing things, and you could at least try to learn from that and utilize it. I think the toaster is another great example, where for some reason every time they want to add a feature, it's another button, and they just don't think through how a normal person would use it. To not be overly critical, though, one of my favorite companies that I really feel gets it a lot of the time is Breville.

I really feel like they actually think about how a normal human would work through these problems, and then they design around that. As of the last time I checked, they have one of the highest-selling toasters on Amazon, and it's an order of magnitude more expensive than everyone else's, and people still buy it. They have all these amazing little touches on there that now almost everyone just blatantly copies, but they were the first ones to have a little button on the toaster that says, "A bit more." Honestly, does 30 seconds mean anything more or less to you? Not really. It gets the thought across, and it really connects with you on a human level.

Michael Wolf: I get that. I love those fine touches that differentiate, like "a little bit more." But when we talk about generational shifts – maybe it's Millennials, or people even younger than that, using new technologies – clearly none of those people know how to use a rotary dial phone. None of us really think about rubbing rocks together to create fire, to use an extreme example. Do we need these nods, these homages to older ways of doing things, as a bridge to get to the new thing, or is it something you need to keep in perpetuity, even into the new generation?

Basheer Tome: I think it's less about keeping a tradition and more about adapting the technology to work. You take the good parts about what works and keep those, and you let go of the parts that don't. Another great example of where I see a lot of this bifurcation is in sous vide machines. I love the Joule. The app is beautiful. The people who made it are some of the nicest people in the world, but I just don't understand why there are no buttons on it. It really basically ‑

Michael Wolf: It was a brave choice. You have to admit it was brave, but I agree with you. I think that was ‑

Basheer Tome: It’s a bold choice.

Michael Wolf: Yeah, bold. For some people, it’s a deal breaker I think.

Basheer Tome: It's an implicit promise saying: we will update this, we will keep the app updated, we will staff this, and we will be around. Because the minute they stop being around, or the minute they stop staffing it, your machine won't really work anymore – unless you're going to dedicate a phone and an app, and never update them again, to being this walled-off garden that only operates your sous vide machine. You'll update, you'll get a new phone, and your app won't work anymore. In general, the way we build apps is kind of like building a sandcastle next to the ocean. Everything is constantly in flux, you have the waves, and if you don't keep updating and repairing, it stops working.

Michael Wolf: Yeah, the great thing about the old world is, if I go into my grandparents' basement and find an old clock, as long as it hasn't been water damaged or whatever, there's a good chance it will still work. But if you were to go into your basement 50 years from now and find an old Joule, and the last time they updated the app was in 2025, this thing is a piece of junk. You can't use it.

Basheer Tome: Yeah. To give you a good example that isn't just the other extreme – all analogue and steampunk – I think Anova does a great job of straddling the line between the two. They have a physical knob and a few lightweight buttons on top, and you can use the entire device without ever having to know that there's a phone app, or that it even has Bluetooth or Wi-Fi.

But if you want some of the more advanced controls – you really want it to time itself, you want direct access to recipes and to have it automatically pick the times – that's when you bring in the phone and get some of the more complex features. So you take the simplest, most critical aspects of how you use the product and build those into the hardware, and the more advanced features, where it would be high cost and low gain to integrate into the hardware, you offload onto a borrowed screen like your phone. I think that's absolutely the right way to approach it.

Michael Wolf: Those are actually the two sous vide machines I switch back and forth between, and obviously with the Joule, I have to use the app, right? But when I use the Anova, quite honestly, a lot of the time I just plug it in and use the on-device buttons and dial, and I never go to my phone, just because it's quicker for me. In general I love connected devices – I've been interfacing with my Sonos for the past decade using a great app, and I've increasingly started to use my Amazon Alexa with just my voice, so I see that transition from capacitive touch to voice happening for me. But even with something like a sous vide cooker, if it has an on-device button and a dial, that's just quicker for me than getting a phone involved.

But I do see the value of the Joule's guided cooking app – this idea of visual guidance I think is valuable, so I can see using that. Still, if you don't necessarily need it, you have to use the phone anyway, meaning it's one extra step.

Basheer Tome: Yeah. I like this idea of having the options, because you're not purely relying on their good faith and goodwill to keep updating and supporting the hardware, while still allowing them to provide you with some of these more advanced features. But that said, consciously not including those buttons and that interface does allow them to make it a lot smaller and a lot sleeker, and I completely understand why they made the decisions they made. I'm not trying to criticize them specifically – it's more just my personal opinion.

Michael Wolf: I've asked you mainly about virtual reality and augmented reality around eating, but have you given thought to using what you do every day to create these kinds of controls for cooking? How could we apply virtual or augmented reality in some way to making food? Is that something you think about?

Basheer Tome: A bit, yeah. There are a lot of obvious potential ways to incorporate augmented reality into cooking, where you're getting tips. It might be live; it might even connect you to another chef or someone remote who gives you instructions step by step. Or – everyone has this idea – you overlay something onto a cutting board and it tells you where and how to cut things. But even walking back a step further, you could use something on our current Daydream platform. We really hope, and are looking, to see learning become a major category of applications and experiences in virtual reality, because when you have this complete, fully immersive virtual environment, there are a lot of things you can teach someone that are a lot harder to explain with diagrams or really complex text.

Michael Wolf: Some of these devices – Bosch has this thing called Mykie, basically a small robot that they demoed at CES. It actually projects video onto a surface, and I love this idea of taking video and putting it on a kitchen surface, maybe to instruct yourself how to cook. And when I see this stuff, I think about Star Wars – I always go back to the hologram. That's the ultimate to me: getting to that hologram phase, that type of video projection, and maybe eventually 3D video projected into a space without any sort of goggles on. Would you call that virtual reality?

Basheer Tome: Well, I think projecting video specifically would count more as augmented reality – projected AR, as the trendy term would be. But we do think about that a little bit. I think there's a whole world of ways you can implement Tango-like tracking into a wide variety of objects. It still feels to me like it's a little further off, though. Part of it is that my dream kitchen isn't windowless – it has a lot of natural light and nice surfaces, and that tends to be at odds with projection screens.

Michael Wolf: So a lot of your job is dorks like me saying, "Hey, this is what I want. I want Star Wars. I want to live it." You just kind of have to dial us all back a little bit?

Basheer Tome: No, no, sometimes I like to come along for the ride. One of the best parts of the job is talking to a bunch of people about what they're excited about and hearing some really cool ideas, so it's not just me saying no all the time.

Michael Wolf: [laughter] Hey, Basheer, thank you for saying yes to this podcast. I appreciate you coming on. This has been a lot of fun, and I look forward to talking to you soon again.

Basheer Tome: Awesome, thank you!

Image credit: Samsung

November 26, 2016

Virtual Eating: How Virtual Reality Can Make You Think You’re Eating Pizza When You’re Not

The rise of virtual and augmented reality systems has only just begun; we're almost positive we'll see even more VR demos at CES this year, and the convergence of smart home technology and VR/AR is just getting started. But what about virtual eating? Virtual reality is designed to simulate the sounds and sights of an environment – but could it simulate taste and smell, too?

That's the premise of a project from researchers in Japan and Singapore who have been testing electrical and thermal probes that can stimulate muscles and trick the human brain into believing it is tasting food that isn't really there. In one experiment, scientists focused on the neurons that are sensitive to hot and cold temperature changes, which also play a role in how we taste things. By rapidly heating or cooling a square of thermoelectric elements on the tip of someone's tongue, the user experiences a sweet taste. The thermal experiment also produced some strange results, with some participants reporting a spicy flavor when the probes were heated up and a minty flavor when they were cooled down.

In another experiment, electrical currents were used instead of heat to enhance or create a salty, sour or bitter taste in someone’s mouth.

The last experiment used electrodes attached to the masseter muscle, one of four muscles in the jaw used for mastication (chewing), to simulate biting through actual food. The strength of the electric impulse controlled the texture, or hardness, of the simulated food, and the duration of the impulse controlled the elastic sensation of the jaw opening and closing during chewing. By varying the strength and duration, researchers were able to more realistically produce the sensation of biting into real food.

Study on Control Method of Virtual Food Texture by Electrical Muscle Stimulation
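The control scheme the study describes – impulse strength mapped to hardness, impulse duration mapped to elasticity – can be sketched as a simple function. This is a toy illustration only; the function name and all numeric ranges below are hypothetical and are not values from the paper.

```python
def ems_pulse(hardness: float, elasticity: float) -> tuple[float, float]:
    """Map a desired virtual food texture to EMS pulse parameters.

    Both inputs are normalized to [0, 1]. Per the study's finding:
    impulse strength controls perceived hardness, and impulse duration
    controls the elastic feel of the jaw opening and closing.
    The mA and ms ranges here are made up, for illustration only.
    """
    # Clamp inputs so out-of-range requests can't produce unsafe pulses.
    hardness = min(max(hardness, 0.0), 1.0)
    elasticity = min(max(elasticity, 0.0), 1.0)
    strength_ma = 1.0 + 4.0 * hardness       # harder food -> stronger pulse
    duration_ms = 50.0 + 250.0 * elasticity  # chewier food -> longer pulse
    return strength_ma, duration_ms

# A crisp cracker vs. a chewy gummy, in this toy model:
cracker = ems_pulse(hardness=0.9, elasticity=0.1)  # strong, short pulses
gummy = ems_pulse(hardness=0.2, elasticity=0.9)    # weak, long pulses
```

The point of the sketch is the separation of concerns the researchers found: two independent pulse parameters are enough to span a two-dimensional texture space.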

The role of heat as it relates to taste isn't a new concept – it's one chefs have long used to transform dishes and create unique flavors. But using heat or electricity alone to mimic a specific taste or sensation is new territory. So it turns out your taste buds, and even your jaw muscles, can be hacked – making it possible to have a virtual reality dining experience without having to suffer the calories.

© 2016–2025 The Spoon. All rights reserved.