The Spoon

Daily news and analysis about the food tech revolution


Robotics, AI & Data

July 31, 2019

MetoMotion Raises $1.5M Seed Round for Its Greenhouse Robotics Worker

Israel-based MetoMotion announced today it has raised a $1.5 million seed round for its multipurpose robotics system called GRoW (Greenhouse Robotics Worker) (via AgFunder).

There aren't many specifics about the funding round. According to AgFunder, the team behind GRoW has only said that "a leading Netherlands-based company in the greenhouse industry" led the round. This brings MetoMotion's total funding to date to $2.7 million.

Even if GRoW's investors are mysterious, its overall purpose, making indoor agriculture more efficient, is obvious. According to the MetoMotion site, GRoW is a robotic worker that can perform labor-intensive tasks in the greenhouse, like harvesting. The fully automated vehicle uses a combination of 3D vision systems and machine vision algorithms to identify and locate produce that's ready for picking (its first crop is greenhouse tomatoes). Custom-designed robotic arms then pick the produce, and a proprietary harvesting end-effector protects the fruit from damage during harvesting.
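
MetoMotion hasn't published details of its software, but the detect-locate-pick loop it describes might look something like the minimal sketch below. All of the names, the ripeness score, and the threshold are hypothetical stand-ins, not MetoMotion's code.

```python
# Hypothetical sketch of a detect-then-pick decision like the one GRoW describes.
from dataclasses import dataclass

@dataclass
class Detection:
    ripeness: float                            # 0.0 (green) to 1.0 (ripe), from a vision model
    position_mm: tuple[float, float, float]    # (x, y, z) from the 3D camera, in millimeters

def plan_picks(detections: list[Detection], threshold: float = 0.8) -> list[tuple[float, float, float]]:
    """Return the 3D positions of fruit ripe enough to harvest on this pass."""
    return [d.position_mm for d in detections if d.ripeness >= threshold]

# Example: two tomatoes in view, only one ripe enough to hand off to the robotic arm.
seen = [Detection(0.92, (120.0, 45.0, 310.0)), Detection(0.40, (98.0, 60.0, 305.0))]
print(plan_picks(seen))   # [(120.0, 45.0, 310.0)]
```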

The vehicle is also capable of a number of other functions, like pruning, monitoring, and even pollinating crops. According to the MetoMotion website, GRoW can save up to 50 percent in harvest-related costs.

Expectations around the promise of indoor farming are ramping up right now as farmers, governments, and industries alike are looking at alternative methods to supplement traditional agriculture. At the same time, there’s a major shortage of skilled labor for greenhouses. Therefore it’s not surprising that MetoMotion has some friendly competition in the greenhouse robotics space. A company called iUnu makes Luna, a robot that uses cameras, sensors, and computer vision to scoot about the greenhouse and monitor crop welfare. Cambridge Consultants, meanwhile, has a robot called Hank that uses a sensory system embedded into pneumatic fingers to mimic human hands, which makes it ideal for harvesting delicate fruits and veggies.

Since details of MetoMotion’s seed round are scant, we’re not yet sure what the company will put the money towards, or when we can expect to see GRoW distributed. Israel-based Trendlines, MetoMotion’s parent company, says a fully functional version of GRoW has been successfully alpha tested in a greenhouse setting.

July 31, 2019

Chipotle ‘Quietly’ Introducing AI Into More Stores

Chipotle has been implementing more artificial intelligence (AI) into its stores this year in the form of AI-powered voice assistants for phone orders, according to Nation's Restaurant News. The chain has already implemented the technology in 1,800 locations this year, and plans to have the system in its remaining U.S. stores by the end of 2019.

The as-yet-unnamed female voice greets customers and takes orders. According to NRN, she “actively listens and makes suggestions when she thinks a customer might have omitted a topping on a bowl or burrito.” Over time, her algorithm learns and can remember more and more complex orders.
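
NRN's description suggests the assistant does something like a completeness check against common orders. The toy function below illustrates the idea only; the menu data and logic are made up and are not Chipotle's system.

```python
# Illustrative only: suggest toppings a customer commonly adds but left off.
TYPICAL_TOPPINGS = {
    "burrito": {"rice", "beans", "salsa", "cheese", "lettuce"},
    "bowl": {"rice", "beans", "salsa", "cheese", "lettuce", "guacamole"},
}

def suggest_missing_toppings(item: str, chosen: set[str]) -> list[str]:
    """Return toppings usually found on this item that the order omitted."""
    return sorted(TYPICAL_TOPPINGS.get(item, set()) - chosen)

print(suggest_missing_toppings("bowl", {"rice", "cheese"}))
# ['beans', 'guacamole', 'lettuce', 'salsa']
```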

Chipotle has actually been quietly testing this technology out since 2018, when it introduced its "voice" to just 10 stores. But so far as AI is concerned, the time to be hush-hush about these initiatives might be coming to an end. Domino's has been using a chatbot since 2018, an AI-powered system called DOM that also takes phone orders. And ever since McDonald's acquired Dynamic Yield and started aggressively rolling the company's tech into its drive-thru systems (and soon its self-order kiosks), AI has been not a buzzword but the buzzword flying around QSRs and fast-casual chains.

Having AI-powered tech in the restaurant has a few advantages. In the case of phone assistants like Chipotle's or Domino's, it can speed up the order process and minimize the amount of time employees have to spend on the phone (assuming the voice assistant can learn and retain complex orders without needing human intervention to process them). AI can also help restaurants offer more personalized recommendations to customers, ones tailored specifically to that customer and based on everything from a person's past orders to the weather outside.

What it won’t do, at least in Chipotle’s case, is take away jobs. Nicole West, vice president of digital strategy and product at Chipotle, told NRN that the phone-order system is meant to make the phone order process more convenient and save restaurant managers and employees time.

July 29, 2019

Woowa Brothers Partners with UCLA to Develop Cooking Robots

Woowa Brothers Corp., the company behind South Korea’s popular food delivery app Baedal Minjok, is partnering with UCLA to research and develop cooking robots, according to a story today in The Korea Times.

The Times writes, “Under the project name ‘Yori,’ Woowa Brothers will develop cooking robots that can perform various tasks, from placing orders and preparing meals, to bring an innovation to the dining culture.” As the article points out, the move with UCLA helps expand Woowa Brothers beyond food delivery.

But robots have been on the brain for Woowa Brothers for a while. In April of 2018, the company invested $2 million in Bear Robotics, which makes “Penny,” the robot that shuttles food and empty plates about on restaurant floors. And in December of last year, Woowa received $320 million in funding, some of which was going towards developing its autonomous delivery robot program.

Adding cooking robots to its arsenal would give Woowa Brothers a more full-stack solution and is in line with the company's broader, 360-degree view of robots. In an interview last July, Kim Bong-jin, CEO and founder of Woowa Brothers, talked about how food delivery robots could be more useful with less idle time. One idea Kim floated: in addition to dropping off food, a delivery robot could take away a customer's recycling.

Woowa Brothers is just the latest company to partner with a university for robotics research. Last year, Sony teamed up with Carnegie Mellon to develop food robots, and Nvidia has a robotic kitchen lab set up with researchers from the University of Washington.

Food is a great application for robotics for a number of reasons. First, everyone eats, so there will always be a market for developing systems that help prepare, cook or deliver food faster. Second, food is oddly shaped, with varying sizes and degrees of fragility, making it difficult to work with. Overcoming the idiosyncrasies of food can make working with more uniform materials easier.

July 28, 2019

The Food Tech Show: Mapping The Flavor Genome With Beth Altringer

Beth Altringer first became fascinated by flavor when she joined a competitive wine tasting league while in graduate school at the University of Cambridge.

While it had nothing to do with her field of study (product design and innovation), the very idea of breaking down the characteristics of wine and its flavor into highly descriptive, well-understood categories was a revelation to Altringer, so much so that she eventually began to think about applying this systematic, analytical approach to almost any type of food.

It was from there that the Flavor Genome Project was born, an initiative that “explores how components of flavor combine to create delightful multi-sensory, chemical, emotional, and cultural experiences.”  The goal of the project is to eventually create an “automated understanding of what people are intuitively searching for in a food or drink experience, regardless of the language they use to search for it, intelligently understanding flavor goals in context, and, ultimately, making it easier for people to discover experiences they are likely to enjoy.”

According to Altringer, the Flavor Genome Project is intended to be a platform that could serve as a foundation for other products, and the first of those products is a mobile game called Chef's League. The iOS game allows players to compete to master the use of different flavor characteristics such as "salt, sweetness, acidity, fat, spice, and more."

You can find out more about the Flavor Genome Project here and the Chef's League game here. You can also learn about them by listening to my conversation with Beth on this episode of The Food Tech Show podcast. As always, you can listen on Spotify, Apple Podcasts or wherever you get your podcasts, or you can download the episode directly or just click play below.

Audio: http://media.adknit.com/a/1/33/smart-kitchen-show/q6pw3p.3-2.mp3

July 26, 2019

Squishy Tires and Delivery Sweet Spots: More on Refraction AI’s Three-Wheeled Delivery Bot

When Refraction AI came out of stealth a couple weeks back, the company provided a fair amount of detail about its REV-1 autonomous delivery robot. The REV-1 has three wheels, can ride in a bike lane, and ditched LIDAR in favor of on-board cameras for navigation.

But we still had a few questions about Refraction AI’s robot and its approach to autonomous delivery, so I got on the phone with Refraction co-founder and CEO Matthew Johnson-Roberson this week to find out more.

One of the REV-1 launch’s biggest messages was that the robot was built to handle more inclement weather, but early coverage didn’t spell out exactly how. Right now a lot of autonomous vehicle testing happens in sunny places like Phoenix, Houston and the Bay Area. Clear skies and lots of light make it easier for robots to “see” things like lines on the road as they navigate.

Johnson-Roberson said that Refraction AI combines software and hardware to battle bad weather. First is the environmental scanning provided by a 12-camera setup as well as ultrasound and radar sensors on the REV-1. To make the robot less expensive, the REV-1 forgoes the LIDAR systems popular with other autonomous robots. And according to Johnson-Roberson, Refraction AI's camera rig also allows the robot to track things not on the ground, like buildings and cars, so it can navigate even when road lines are not visible. The other way the REV-1 takes on bad weather is rather low tech. "We're using fat bike tires [at] a low PSI so they are squishy," said Johnson-Roberson. "They can run in snow and rain."

While the REV-1 is autonomous, there are still human tele-operators who can take over should the vehicle get stuck at, say, a complex intersection with a mix of cars and pedestrians.
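
Putting the last two paragraphs together, the navigation strategy amounts to a tiered fallback: lane lines when they're visible, taller landmarks when they're not, and a human tele-operator when nothing works. Here's a rough, purely illustrative sketch of that decision; Refraction AI hasn't published its actual software, and all names below are invented.

```python
# Hypothetical fallback logic for picking a localization cue on each frame.
def choose_reference(lane_lines: list[str], landmarks: list[str]) -> str:
    if lane_lines:          # clear weather: painted lines are cheap and precise
        return "lane_lines"
    if landmarks:           # snow or rain: match against buildings, parked cars, etc.
        return "landmarks"
    return "tele-operator"  # nothing usable: hand control to a remote human

print(choose_reference([], ["building_A", "parked_car_3"]))   # landmarks
```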

The REV-1 is about the same size and speed as a bike, making its form factor something of a Goldilocks: not big like a full-sized car, and not small like a rover robot. That in-between size frees it from the limitations of those other form factors. Full-sized self-driving cars may go faster and farther, but they also require a safety driver on board, which pushes up the price of operation. Rover bots are cheaper, but they are slower and can't hold as much food.

So what is the best environment for this in-between vehicle?

"Suburban LA is not a good idea," said Johnson-Roberson. "We can go half a mile to 2.5 miles. That's the sweet spot for what we're trying to do." So denser urban areas are better for the REV-1. Refraction AI is eyeing Boston, Madison, WI, and Austin, TX as potential rollout cities.

Right now, the company is working with two restaurants in the Ann Arbor, Mich. area. Johnson-Roberson didn't provide many details about business models, but said that as the company expands, it will work directly with restaurants, providing them REV-1s and charging a per-delivery fee that "is better than [what] Uber Eats is charging."

Once a restaurant gets an order, it will use a tablet provided by Refraction to tell the REV-1 where to go. A code is texted to the customer, who uses it to unlock the REV-1 when it arrives with the food.
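
That order-to-unlock flow is simple enough to sketch end to end. The code below is a minimal illustration under assumed names; it is not Refraction's API, and the four-digit code format is a guess.

```python
# Hypothetical dispatch-and-unlock flow: the tablet creates a delivery, and the
# customer later types a texted code on the REV-1's touchscreen to open it.
import secrets
from dataclasses import dataclass

@dataclass
class Delivery:
    destination: str
    unlock_code: str

def dispatch(destination: str) -> Delivery:
    """Restaurant tablet creates a delivery; the code would be texted to the customer."""
    return Delivery(destination, unlock_code=f"{secrets.randbelow(10_000):04d}")

def try_unlock(delivery: Delivery, entered_code: str) -> bool:
    """Check the code the customer enters on the vehicle's touchscreen."""
    return secrets.compare_digest(delivery.unlock_code, entered_code)

order = dispatch("123 Main St, Ann Arbor")
print(try_unlock(order, order.unlock_code))   # True
```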

While it’s working directly with restaurants right now, Johnson-Roberson said that his company is open to working with third-party delivery services.

Refraction AI is definitely a company to watch in the emerging delivery space. Self-driving delivery isn't a zero-sum game, but the REV-1's unique form factor should make it appealing because of its combination of size and speed.

July 25, 2019

Standard Cognition Raises $35M Series B for its Cashierless Checkout Tech

Standard Cognition, one of a host of startups looking to bring cashierless checkout to the retail experience, announced today that it has raised a $35 million Series B round of funding. The round was led by the EQT Ventures fund, with participation from existing investors Initialized Capital, CRV and Y Combinator. This brings the total amount raised by Standard Cognition to $86.1 million.

Standard Cognition retrofits retailers with Amazon Go-like grab-and-go capabilities. Using computer vision and artificial intelligence, it keeps track of items shoppers take from the shelves and automatically charges them via mobile app when they exit the store.
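
Conceptually, systems like this maintain a running "virtual cart" per shopper and settle it at the exit. The toy model below illustrates that bookkeeping only; Standard Cognition hasn't published its internals, and the catalog and event format here are invented.

```python
# Illustrative virtual-cart settlement from vision "take"/"put_back" events.
from collections import Counter

PRICES = {"soda": 1.99, "chips": 3.49}    # made-up catalog

def settle(events: list[tuple[str, str]]) -> float:
    """Each event is ("take", item) or ("put_back", item) emitted by the cameras."""
    cart = Counter()
    for action, item in events:
        cart[item] += 1 if action == "take" else -1
    return round(sum(PRICES[item] * max(count, 0) for item, count in cart.items()), 2)

print(settle([("take", "soda"), ("take", "chips"), ("put_back", "chips")]))   # 1.99
```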

With this Series B round, Standard Cognition now has a far bigger war chest than the other players in the cashierless checkout space, whose funding breaks down as follows:

  • Grabango – $17 million
  • AWM Smartshelf – $12 million
  • Trigo Vision – $7 million
  • Caper – $3 million
  • Zippin – $2.7 million

However, the one thing Standard Cognition doesn't have right now is a publicly named customer. Standard Cognition opened its own proof-of-concept store in San Francisco last year, and according to today's press release, "The company has signed several retail customers and is deploying for two already in multiple locations, with scheduled go-live dates in Q3 and Q4 of this year." But Standard Cognition has yet to name names.

This is a fairly common scenario: most startups making cashierless checkout solutions can't name their customers right now. In fact, the only two that have done so at this point are Grabango with Giant Eagle and Trigo Vision with Shufersal in Israel. The reluctance of retailers to officially announce a partnership is understandable, as they typically move slowly with such technology implementations, wanting to make sure they pick a solution that can scale appropriately.

The cashierless checkout sector is still wide open, and with today’s big raise, Standard Cognition has the money to withstand the competition.

July 24, 2019

There Should be an Open Brand Image Database to Train Robots and Cashierless Checkout Systems

It was a box of Cheez-Its that made me think of the idea.

We were recently at Nvidia's Seattle Robotics Lab, watching a presentation on how the company uses computer vision and synthetic data to train robots in the kitchen. In order for a robot to grab a box of Cheez-Its, that robot needs to know what a Cheez-Its box looks like. And in order to teach the robot what a Cheez-Its box looks like, Nvidia needs to give the robot detailed information about that box, including its size, shape, and the artwork on the front.

This isn't that hard if your kitchen, like the one at Nvidia, is stocked with only one box of Cheez-Its. You scan that box and input the dimensions and imagery so the robot can match what you've scanned to the real thing. But what happens when you want the robot to find a box of something other than Cheez-Its? Or if the pantry has many different types of Cheez-Its that kinda look similar but have different flavors? Or if it's the holidays and the box has been altered from that first model so it now has a snowman on it?

Being able to identify particular products via computer vision isn't just an issue for robotic hands. Startups like Grabango and Trigo Vision are retrofitting grocery stores with lots of tiny cameras that use computer vision for cashierless checkout. These cameras need to precisely recognize the items that shoppers pick up so that the consumer can be accurately charged. That means the AI powering the system needs to know the differences between a bottle of Coke, Diet Coke and Coke Zero, and be able to understand any changes to branding, like a new logo or seasonal updates.

Rather than having each robotics company and every cashierless checkout company separately create its own database of product images, it seems like some sort of central repository of brand images would be useful. Think of it as a giant, constantly updating library of brand images for all the products in a grocery store. CPG companies would upload 3D models of the latest versions of their products to this database, giving computer vision companies access to the most up-to-date imagery for training their respective applications.
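
To make the idea concrete, here is a rough sketch of what such a repository's interface could look like: brands publish versioned 3D models, and vision teams pull the newest one for training. Everything here is hypothetical; no such shared database exists today, and the names and URLs are invented.

```python
# Hypothetical shared brand-image repository with versioned product models.
from dataclasses import dataclass, field

@dataclass
class ProductModel:
    sku: str
    version: int
    mesh_url: str          # where the 3D scan / packaging artwork would live

@dataclass
class BrandImageRepository:
    _models: dict[str, list[ProductModel]] = field(default_factory=dict)

    def upload(self, model: ProductModel) -> None:
        """A CPG company publishes a new version (say, holiday packaging)."""
        self._models.setdefault(model.sku, []).append(model)

    def latest(self, sku: str) -> ProductModel:
        """A robotics or checkout company fetches the newest model for training."""
        return max(self._models[sku], key=lambda m: m.version)

repo = BrandImageRepository()
repo.upload(ProductModel("cheez-its-original", 1, "https://example.com/v1.glb"))
repo.upload(ProductModel("cheez-its-original", 2, "https://example.com/v2-snowman.glb"))
print(repo.latest("cheez-its-original").version)   # 2
```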

This is definitely not the most pressing issue facing CPG companies or retailers; cashierless checkout and product picking robots are still very much in the early stages. But they are coming — and preparing for their arrival now would make the evolution of computer vision and robotics that much faster. After all, training those systems is much easier when you can just download an image rather than creating it yourself.

During our visit I asked Dieter Fox, Senior Director of Robotics Research at NVIDIA, if such a system exists. He said there is one for common objects, but nothing brand-specific: ShapeNet has a 3D database of 50,000 common objects, and its subset, PartNet, recently launched with a database of more than 26,000 objects broken down into their various parts.

There are competitive issues that might have CPG brands balking at the idea. Coke may not want people knowing about a particular branding change or partnership in advance. But the overall concept could be a tide that lifts all boats. It gives computer vision-related companies the most accurate 3D models of products for training purposes. The faster computer vision systems can be trained, the faster they can work in the real world without any hiccups, which would ideally allow brands to sell more products. It would also make it easier for kitchen robots, when they eventually arrive, to autonomously grab ingredients needed while cooking (“Robot, grab the turmeric.”).

This isn’t just for food, obviously. This type of repository could work for any brand across any sector that will involve computer vision. Perhaps it’s something Dieter Fox can talk about when he speaks at our upcoming Smart Kitchen Summit in October. Get your ticket now and maybe you can talk with him about it over a box of Cheez-Its.

July 24, 2019

Takeoff to Provide Robotic Order Fulfillment for ShopRite

Wakefern Food Corp. announced yesterday that it has partnered with Takeoff Technologies to build an automated fulfillment center in Clifton, NJ. The center will fulfill orders for online shoppers at select ShopRite stores in New Jersey and New York.

Takeoff builds robot-powered micro-fulfillment centers, which use a series of tote boxes, rails and conveyors to quickly assemble online orders for either delivery or pickup. These centers are on the smaller side and typically in the back of existing supermarkets. Through its automated system, Takeoff can put together online orders of up to 60 items in just minutes.
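
As a back-of-the-envelope illustration of that assembly step, an order has to be split into tote-sized pick tasks before the rails and conveyors can route them. The snippet below is purely illustrative; Takeoff hasn't published its scheduling logic, and the tote capacity is made up.

```python
# Toy batching of an online order into tote loads (capacity is an assumption).
def batch_into_totes(items: list[str], tote_capacity: int = 12) -> list[list[str]]:
    """Split an order into tote loads the conveyor system can route to pickers."""
    return [items[i:i + tote_capacity] for i in range(0, len(items), tote_capacity)]

order = [f"item_{n}" for n in range(1, 31)]     # a 30-item online order
print(len(batch_into_totes(order)))             # 3 totes
```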

Wakefern, a cooperative of 50 retailer-owned grocery companies across the East Coast, including ShopRite, The Fresh Grocer, Price Rite Marketplace and Dearborn Market, said in the press release that it has agreed to build out more fulfillment centers with Takeoff in the future.

While online shopping is currently a small percentage of overall grocery shopping, it’s growing, and retailers are preparing for it by testing automated fulfillment systems like Takeoff’s, expanded curbside pickup and self-driving delivery vehicles. All in an effort to get you your groceries faster and get more of your business.

Robotic order fulfillment seems to be picking up steam: Kroger recently announced it will open its fourth Ocado-powered automated fulfillment warehouse in Georgia, CommonSense Robotics opened its own micro-fulfillment center in Tel Aviv, and Walmart has partnered with Alert Innovation to test out its micro-fulfillment solution.

For its part, this is the fourth grocery retail customer for Takeoff, which already has agreements in place with Sedano’s, Albertsons, and Ahold Delhaize.

July 24, 2019

Newsletter: The New All-in-One Restaurant Tech Is Here, Digital Drive-Thru Goes Down Under

This is the web version of our weekly newsletter. Sign up for it here to get all the best food tech news and analysis direct to your inbox!

I was in a local coffee shop recently and overheard a rep from a well-known POS company trying to sell his product to the shop’s manager. But for every feature he offered up (“It’ll manage payroll!” “It makes tipping easier!”), the cafe manager had more or less the same rebuttal: more tech would make more work for her staff.

I suspect this conversation is happening all over the world. Tech’s march on the restaurant industry is here to stay, but that doesn’t mean it’s necessarily making life easier for restaurants. In a growing number of cases, too many digital tools actually make it harder to get work done, particularly as demands for delivery and mobile orders ramp up and those functions have to be integrated into an already chaotic workflow.

But this week, we got a different glimpse into the future of the digital restaurant — namely, one where disparate tech solutions are replaced by a single digital platform that can manage every corner of the restaurant, from the kitchen system in the back to the kiosk out front to the off-premises order on its way out for delivery.

At least, that’s what Brightloom hopes to launch to restaurants this fall. The newly rebranded company, formerly known as Eatsa, announced yesterday that it’s revamped its existing end-to-end restaurant tech platform, into which it’s also integrating Starbucks’ famed mobile technology.

This is a big deal because, while many products claim to be “all-in-one” restaurant management software packs that make it easier for restaurant owners and operators to manage the entire business, no one’s yet managed to seamlessly integrate the mobile aspect of business into their system.

And nobody does mobile like Starbucks. Love ‘em or hate ‘em, it’s hard to deny the mega-chain’s dominance when it comes to offering fast, highly personalized order and pickup functions for customers. Brightloom’s soon-to-be-unveiled system will integrate the Starbucks mobile order, pay, and customer loyalty tech into its own system. We don’t yet know exactly what that will look like, but it will undoubtedly raise everyone’s standards around what restaurant-tech systems should be able to do and put pressure on others to make their offerings just as useful and less of a burden for restaurants to implement.

Good-bye, Crackly Speakerphone. Hello Digital Drive-Thru
Will all these digital developments render the crackly speaker at the drive-thru null and void? Probably, and sooner than we think.

While major QSRs like Dunkin’ and Starbucks have been implementing digital and mobile ordering into the drive-thru experience little by little over the last couple years, KFC took things a step further recently by announcing its first-ever drive-thru-only concept store.

The store, which is slated to open in November, will feature multiple drive-thru lanes dedicated to customers who have ordered their food via the KFC website or mobile app. The idea is to streamline the order process and cut down on how long it takes customers — or delivery drivers — to get their food. But again, it's all about the implementation. KFC's concept store could raise the bar on what QSRs are expected to deliver in terms of speed and quality. Or it could just introduce another digital process that stresses workers out. We'll know more when the pilot launches in Australia in November.

Delivery Bots on the Rise
Or you could just let the restaurant come to you in the form of a roving bot. There’s a growing number of these devices delivering food from restaurant to customer, often on college campuses, which hold a lot of people in a relatively small geographic area.

But as my colleague Chris Albrecht pointed out this week, Kiwi announced it will test its semi-autonomous delivery bots on the streets of Sacramento, CA this fall, which suggests we’re coming to a point where these li’l roving machines will start to become a more common sight on regular city sidewalks. Who needs drive-thru when you can have your meal brought to you by a cute little box on wheels? As Chris said, “it was pretty amazing to whip out my phone, order a burrito, have a robot fetch my lunch and bring it to my location.”

For now, roving delivery bots are probably not a priority for most restaurants’ overall digital solutions. But as all-in-one offerings like the Brightloom-Starbucks tech get more commonplace and digital ordering becomes routine for customers and workers alike, there may be room for most restaurants to accommodate a bot or two in their tech stack.

July 22, 2019

Kiwi Expanding its Robot Delivery to Sacramento in September

Kiwi continues to roll out its diminutive li’l food delivery robots to more cities, with plans to begin operations in Sacramento, CA in September.

CBS13 first reported the story last week. While CBS13 didn't provide many details around the Sacramento program, what caught our eye about the report is that Kiwi is working on this latest expansion directly with the city, which wants to become an urban technology lab. Most of Kiwi's expansion so far has been through universities. Kiwi started out at the University of California, Berkeley and announced plans to be at a dozen more schools starting this fall, including Stanford, UC Davis, Purdue, Cornell, and NYU.

Starting with universities makes sense for the nascent technology, as campuses provide a sizeable population in a limited geographic area. Campuses typically have, or are surrounded by, plenty of restaurants to feed hungry students and faculty, and using a robot could make delivering those meals more convenient. Going the campus route is a strategy also employed by Starship's delivery robots and Robby's mobile commerce robot.

For Kiwi, going through schools also provides an infrastructure for delivery operations, as students will run operations at each school. We don't have a ton of details on those programs (like how any money is split), but students will be responsible for robot maintenance and deployments. We reached out to Kiwi to find out more about how the Sacramento program will work and will update when we hear back.

Kiwi’s robots are “semi-autonomous,” as they still have human operators who monitor a robot’s route and drop GPS waypoints for the robot to follow. I used Kiwi earlier this year at Berkeley and it felt like ordering food from the future. Aside from one glitch, it was pretty amazing to whip out my phone, order a burrito, have a robot fetch my lunch and bring it to my location.
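
A "semi-autonomous" setup like the one described above boils down to the robot driving itself between operator-placed GPS waypoints. The sketch below illustrates that loop with a crude flat-earth distance check; it's an assumption-laden illustration, not Kiwi's software.

```python
# Hypothetical waypoint follower: drive to the next operator-dropped waypoint.
import math

def distance_m(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Rough planar distance between two (lat, lon) points, in meters."""
    dlat = (b[0] - a[0]) * 111_000                               # meters per degree latitude
    dlon = (b[1] - a[1]) * 111_000 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def next_waypoint(position, waypoints, reached_m=3.0):
    """Return the first waypoint the robot hasn't effectively reached yet."""
    for wp in waypoints:
        if distance_m(position, wp) > reached_m:
            return wp
    return None   # route complete: wait for the operator to drop more points

route = [(37.8719, -122.2585), (37.8712, -122.2600)]
print(next_waypoint((37.87191, -122.25851), route))   # (37.8712, -122.26)
```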

Kiwi will begin testing in Sacramento this fall and hopes to have a fleet of 50 robots running around the streets of the city at some point.

July 12, 2019

Report: Amazon Still Working on a Robot for Inside the Home

Amazon is still hard at work building an actual Rosie the Robot-type robot for your home, according to a report today in Bloomberg.

We first heard about this robot a little more than a year ago, but this latest report indicates Amazon has not given up on the project and has actually increased work on it — though details remain scant. We know it’s called “Vesta” internally at Amazon, and Bloomberg’s sources say prototypes have wheels and “are about waist-high and navigate with the help of an array of computer-vision cameras.” It also can be summoned with voice commands a la Echo and Alexa.

Evidently, Amazon had wanted to debut the robot this year, according to Bloomberg, but it's not ready for scaled-up production. The robot could also never see the light of day; projects get killed all the time at large companies. But Bloomberg writes that Amazon has added engineers to the project, which could mean it plans to sell Vesta at some point.

Beyond all this, we still don't know what this robot is for. It could be some type of mobile Alexa that follows you around and, for our purposes here at The Spoon, potentially helps you by controlling various connected kitchen gadgets or ordering groceries. But without more information, we're just guessing.

When we first wrote about Vesta, I was skeptical that it would be just a mobile virtual assistant. First, we don't know if the robot can do things like climb stairs, which is kind of important in a lot of houses. Second, Echos are so cheap that if you want a ubiquitous Star Trek-like computer assistant, you could just get a bunch of Dots and place them throughout the house.

Others disagreed, however. Last year, friend of The Spoon Kevin Tofel wrote:

So maybe Vesta is more of a roving Echo than a robot that can fold laundry or make your coffee.

I’m OK with that and here’s why. I’d rather have one smart speaker that also has a camera than speakers and cameras — think Echo Show and Echo Spot — in every room of the house. If that one device can either follow me around the house or be within earshot, I don’t need multiple devices that basically do the same thing. And I can send a camera-enabled device away if that device is a robot.

But again, without more information, it’s hard to say how useful this robot may or may not be.

What we do know, however, is that we have reported on three different versions of home-related Amazon robots over the past year. In January, Amazon launched Scout, a squat, cooler-looking rover bot that delivers packages. And in February, The Spoon uncovered a patent for an Amazon autonomous ground vehicle that would live in a person's garage and go out to fetch packages.

In addition to the robot news, the Bloomberg story reported that Amazon is also prepping a new version of its Echo with improved sound.

July 11, 2019

Refraction Launches Three-Wheeled Delivery Robot That’s Bigger Than a Rover, but Smaller Than a Car

When it comes to autonomous delivery robots, size matters. Full-sized self-driving cars can travel on most major roads and go long distances, but may not work well in dense, traffic-congested cities. Little rover robots are nimble enough to zip along on sidewalks, but have a pretty limited range.

Refraction AI, an autonomous robotics startup that just came out of stealth today, is looking to split the difference with its REV-1 delivery vehicle. The REV-1 is a three-wheeled vehicle that stands 5 feet tall and measures 4.5 feet long and 30 inches wide. It weighs 100 pounds and has a top speed of 15 mph (Starship's small rover bots have a top speed of 10 mph). The inside holds roughly 16 cubic feet, which translates to four to five grocery bags. There's also an on-board touchscreen customers use to enter a code that unlocks the REV-1 so they can retrieve their goods once it arrives.

The REV-1 has a stopping distance of just 5 feet, and to navigate around humans and other objects, Refraction has forsaken the LIDAR used by other robots in favor of a system combining 12 cameras with radar and ultrasound sensors. Refraction says that its LIDAR-less setup will allow it to travel better in inclement weather.

The REV-1's in-between size and speed allow it to travel both on the roadway and in bike lanes, which, Refraction says, will open up new delivery route possibilities. And by not using LIDAR, Refraction can keep the cost of the REV-1 to $5,000 (though LIDAR is getting cheaper).

[Video: Refraction AI introduces the REV-1]

Refraction is based in Ann Arbor, MI, and is the brainchild of University of Michigan professors Matthew Johnson-Roberson and Ram Vasudevan. The company is backed by eLab Ventures and Trucks Venture Capital and will start with restaurant food delivery before expanding into other last-mile logistics.

Refraction is certainly launching at the right time, as delivery robots are hot right now. Starship's and Kiwi's rovers are spreading across college campuses in the U.S. Udelv is piloting self-driving cargo vehicles with grocers like Farmstead and HEB. And Domino's is testing Nuro's pod-like autonomous low-speed vehicles for pizza delivery as well.

The REV-1's form factor is interesting for a number of reasons. First, it might be easier for lawmakers to deal with as they draw up rules around self-driving vehicles. The small(ish) size of the REV-1, the fact that it's not on sidewalks, and its short stopping distance could make it easier for regulators to allow it on the road (as opposed to full-sized, full-speed self-driving cars). Additionally, its ability to use bike lanes could make it faster than other robot options in urban and suburban environments.

It also seems like robot delivery won't be a zero-sum game. Restaurants and grocery stores will probably need access to a number of different types of self-driving robots (and drones) depending on where they are delivering: rovers for around the block, REV-1s for a little bit farther out, and cars for across town.
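
Those tiers map naturally onto a simple dispatch rule keyed on distance, using the ranges mentioned in this piece (a rover for very short hops, a REV-1-class trike for the half-mile-to-2.5-mile sweet spot, a car beyond that). The function below is just an illustration of that reasoning, not anyone's production dispatcher.

```python
# Illustrative vehicle selection by delivery distance, per the ranges above.
def pick_vehicle(distance_miles: float) -> str:
    if distance_miles < 0.5:
        return "sidewalk rover"
    if distance_miles <= 2.5:
        return "bike-lane trike (REV-1 class)"
    return "full-sized car"

print(pick_vehicle(1.8))   # bike-lane trike (REV-1 class)
```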
